Control and Code: Privacy Online
Revision as of 14:00, 2 April 2012
April 3
Code is law; the architecture of the Internet and the software that runs on it will determine to a large extent how the Net is regulated in a way that goes far deeper than legal means could ever achieve (or at least ever achieve alone). Technological advances have also produced many tempting options for regulation and surveillance that may severely alter the balance of privacy, access to information and sharing of intellectual property. By regulating behavior, technological architectures or codes embed different values and political choices. Yet code is often treated as a technocratic affair, or something best left to private economic actors pursuing their own interests. If code is law, then control of code is power. If important questions of social ordering are at stake, shouldn't the design and development of code be brought within the political process? In this class we delve into the technological alternatives that will shape interactions over the Internet, as well as the implications of each on personal freedom, privacy and combating cyber-crime.
Readings
- John Palfrey and Hal Roberts, The EU Data Retention Directive in an Era of Internet Surveillance
- Abelson, Ledeen, Lewis, Blown to Bits, Chapter 2: Naked in the Sunlight: Privacy Lost, Privacy Abandoned
- Jonathan Zittrain, Future of the Internet, Chapter 9: Privacy 2.0
- Warren and Brandeis, The Right to Privacy
Optional Readings
- "Making Sense of Privacy and Publicity." Transcript of talk given by Danah Boyd at SXSW. Austin, Texas, March 13, 2010
- Solveig Singleton, Privacy as Censorship (CATO)
- Lawrence Lessig, Code 2.0: Privacy
- Christopher Soghoian, 8 Million Reasons for Real Surveillance Oversight: http://paranoia.dubfire.net/2009/12/8-million-reasons-for-real-surveillance.html
- Narayanan and Shmatikov, How To Break Anonymity of the Netflix Prize Dataset
- Brin and Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine
- Noam Cohen, It’s Tracking Your Every Move and You May Not Even Know (NYTimes, March 26, 2011)
- Human flesh search engine (Wikipedia): http://en.wikipedia.org/wiki/Human_flesh_search_engine
Class Discussion
April 3: Control and Code: Privacy Online
Just Johnny 17:12, 15 February 2012 (UTC)
This NYTimes article about surveillance over a variety of technological mediums in Great Britain could easily be another piece of HW for tomorrow's class if anyone is interested: http://www.nytimes.com/2012/04/03/world/europe/british-government-eavesdropping-plans-draw-protest.html?hp
The point about Hotspot Shield is interesting; I was definitely one of the people who used it and got the impression it was private, without actually noticing what it allowed AnchorFree to track. On the other hand, I'm not at all surprised by the intentionally misleading language Google employs to explain its (lack of) privacy protections, taking some extremely literal approaches to what it does or doesn't collect. If you have all of the components of a bomb and the ability to assemble it, it is a little misleading to say "I do not have a bomb in my possession in any way." I doubt the police would agree with this literally correct statement. That's what Google is doing when it says it doesn't collect personal info: it just collects all of the resources needed to immediately extrapolate that personal info, which it may or may not do any time it pleases.
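The "components, not the bomb" point is essentially re-identification by linkage, the technique behind the Netflix Prize de-anonymization in the optional readings. A minimal sketch, with entirely invented data and names: no single dataset here pairs a name with a search query, yet joining on quasi-identifiers (ZIP code, birth date, sex) singles out one person.

```python
# Toy re-identification by linkage. All records are invented for this sketch.
anonymized_log = [
    {"zip": "02138", "birth": "1984-07-01", "sex": "F", "query": "divorce lawyer"},
    {"zip": "02139", "birth": "1990-03-15", "sex": "M", "query": "cat videos"},
]

public_profiles = [  # e.g. a voter roll or social-network profile dump
    {"name": "Alice Smith", "zip": "02138", "birth": "1984-07-01", "sex": "F"},
    {"name": "Bob Jones",   "zip": "02139", "birth": "1990-03-15", "sex": "M"},
    {"name": "Carol Wu",    "zip": "02139", "birth": "1988-11-02", "sex": "F"},
]

QUASI_IDENTIFIERS = ("zip", "birth", "sex")

def reidentify(record, profiles):
    """Return every profile whose quasi-identifiers match the record."""
    return [p for p in profiles
            if all(p[k] == record[k] for k in QUASI_IDENTIFIERS)]

# The "anonymous" log entry now has exactly one matching name.
matches = reidentify(anonymized_log[0], public_profiles)
print(matches[0]["name"], "searched for:", anonymized_log[0]["query"])
```

Neither dataset is sensitive on its own; the privacy loss comes entirely from the join, which is why "we don't collect personal info" can be literally true and still misleading.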
There is still always the problem of information overload: it's no longer a question of what info you can collect (since, as Google shows, you can get basically anything from the average user), but rather how good you are at searching and parsing it into something useful. There is also the issue that, as we discussed with the value of immediacy over accuracy in news reporting through Twitter, it is quite possible for people with good intentions to ruin someone's privacy and safety through a rush to judgement. Look at the Trayvon Martin case, where someone (I think it was Spike Lee?) tweeted what he thought was the home address of Trayvon's killer, and it turned out to be the residence of an elderly couple who had to leave their home in fear for their lives. When everything is accessible, massive mistakes can be made in the space of a keystroke, and cannot be undone so easily.
I worry about the word "consent" in terms of the information we share through our technology nowadays. We lose a right to privacy when we intentionally share information with the public; we consent to have that data known. But how many people understand what they are sharing by having a smartphone/GPS in their pocket 24/7? Is the fine print in the cell phone contract enough to count as consent? What about the location tags if I post to Facebook from my phone? How do we measure the level of understanding an individual has of what their technology is broadcasting about them, and decide whether it counts as "informed consent"?
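To make the 24/7 broadcasting point concrete, here is a toy sketch (with an invented location trace; the coordinates and thresholds are illustrative assumptions, not from any real dataset) of how little analysis it takes to turn passive location tags into something sensitive: rounding coordinates and counting where a phone sits overnight is enough to guess a "home" block.

```python
from collections import Counter
from datetime import datetime

# Hypothetical (timestamp, latitude, longitude) pings from one phone.
pings = [
    ("2012-04-01T02:10:00", 42.37312, -71.11904),
    ("2012-04-01T03:45:00", 42.37315, -71.11898),
    ("2012-04-01T13:00:00", 42.36542, -71.10330),  # daytime, elsewhere
    ("2012-04-02T01:20:00", 42.37309, -71.11911),
]

def infer_home(pings, night_start=22, night_end=6, precision=3):
    """Most frequent rounded location observed during nighttime hours."""
    nighttime = Counter()
    for ts, lat, lon in pings:
        hour = datetime.fromisoformat(ts).hour
        if hour >= night_start or hour < night_end:
            # ~3 decimal places of latitude is roughly a city block.
            nighttime[(round(lat, precision), round(lon, precision))] += 1
    return nighttime.most_common(1)[0][0] if nighttime else None

print(infer_home(pings))  # -> (42.373, -71.119)
```

No single ping in the trace is especially revealing; it is the accumulation over time that identifies a home address, which is exactly what makes "consent" to each individual location tag such a slippery notion.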