Control and Code: Privacy Online

From Technologies of Politics and Control

April 3

Code is law: the architecture of the Internet and the software that runs on it will determine to a large extent how the Net is regulated, in a way that goes far deeper than legal means could ever achieve (or at least ever achieve alone). Technological advances have also produced many tempting options for regulation and surveillance that may severely alter the balance among privacy, access to information, and the sharing of intellectual property. By regulating behavior, technological architectures or codes embed different values and political choices. Yet code is often treated as a technocratic affair, or as something best left to private economic actors pursuing their own interests. If code is law, then control of code is power. If important questions of social ordering are at stake, shouldn't the design and development of code be brought within the political process? In this class we delve into the technological alternatives that will shape interactions over the Internet, as well as the implications of each for personal freedom, privacy, and combating cyber-crime.


Optional Readings

Class Discussion

April 3: Control and Code: Privacy Online Just Johnny 17:12, 15 February 2012 (UTC)

This NYTimes article about surveillance across a variety of technological media in Great Britain could easily be another piece of HW for tomorrow's class if anyone is interested:

The Hotspot Shield story is interesting; I was definitely one of the people who used it and got the impression it was private, without actually noticing what it allowed AnchorFree to track. On the other hand, I'm not at all surprised by the intentionally misleading language Google employs to explain its (lack of) privacy protections, taking some extremely literal approaches to what it does or doesn't collect. If you have all of the components of a bomb and the ability to assemble it, it is a little misleading to say "I do not have a bomb in my possession in any way." I doubt the police would agree with this literally correct statement. That's what Google is doing when it says it doesn't collect personal info: it just collects all of the resources needed to immediately extrapolate that personal info, which it can do any time it pleases.
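The extrapolation point above can be made concrete. Even data that looks "non-personal" in isolation, such as coarse, anonymized location pings, can identify someone once aggregated: the place a device sits every night is almost certainly its owner's home. A minimal sketch with entirely made-up data (the pings, coordinates, and the night-hours heuristic are all illustrative assumptions, not any company's actual method):

```python
from collections import Counter
from datetime import datetime

# Hypothetical location pings: (timestamp, (lat, lon) rounded to a ~1 km grid).
# Individually these look anonymous; together they betray a routine.
pings = [
    ("2012-04-01 02:14", (42.37, -71.11)),
    ("2012-04-01 03:40", (42.37, -71.11)),
    ("2012-04-01 13:05", (42.36, -71.06)),  # daytime: likely workplace
    ("2012-04-02 01:55", (42.37, -71.11)),
    ("2012-04-02 14:20", (42.36, -71.06)),
    ("2012-04-03 02:30", (42.37, -71.11)),
]

def infer_home(pings):
    """Guess 'home' as the most common location seen between midnight and 6 a.m."""
    night = [loc for ts, loc in pings
             if datetime.strptime(ts, "%Y-%m-%d %H:%M").hour < 6]
    return Counter(night).most_common(1)[0][0]

print(infer_home(pings))  # the overnight cluster, i.e. the likely home address
```

A dozen lines of code turn "we only collect anonymous location data" into a home address, which is the gap between the literal claim and its practical effect.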

There is still the problem of information overload: it's no longer a question of what info you can collect (since, as Google shows, you can get basically anything from the average user), but rather how good you are at searching and parsing it into something useful. There is also the issue that, as we discussed with the value of immediacy over accuracy in news reporting through Twitter, it is quite possible for people with good intentions to ruin someone's privacy and safety through a rush to judgment. Look at the Trayvon case, where someone (I think it was Spike Lee?) tweeted what he thought was the home address of Trayvon's killer, and it turned out to be the residence of an older couple who had to leave in fear for their lives. When everything is accessible, massive mistakes can be made in the space of a keystroke, and they cannot be undone so easily.

I worry about the word "consent" in terms of the information we share through our technology nowadays. We lose a right to privacy when we intentionally share information with the public; we consent to have that data known. But how many people understand what they are sharing by carrying a smartphone/GPS in their pocket 24/7? Is the fine print in the cell phone contract enough to count as consent? What about the location tags if I post to Facebook from my phone? How do we measure the level of understanding an individual has of what their technology is broadcasting about them, and decide whether it counts as "informed consent"?