Pre-class Discussion for Jan 15

From Cyberlaw: Internet Points of Control Course Wiki

Questions for Chris Kelly

  • (from Question tool): With Facebook opening up to anyone (instead of being closed to certain (i.e. educational) networks), what are you doing to avoid/mitigate some of the privacy and security issues that have been encountered on MySpace?
  • (from Question tool): Does Facebook's commercial strength/strategy lie more in providing the best features/development platform to attract users, or in aggregating/analyzing the most information about users for better advertising monetization?
  • How does FB's international expansion affect its privacy policy? Is the policy consistent across all users, or do you cater to different markets? Cjohnson 21:42, 14 January 2008 (EST)
  • How did the community reaction to the newsfeed change the approach you took to the social ads rollout? What (if anything) will you do differently in the future, based on these experiences? -Lciaccio 10:48, 15 January 2008 (EST)
  • Was the Beacon rollout really a mistake? Had Facebook implemented Beacon from the start as an opt-out system, people still would have complained and Facebook would have had to roll back the program further (possibly to an opt-in system) or risk appearing intransigent. But by "implementing first, apologizing later," Facebook gets more of what it wants (an opt-out system instead of an opt-in), it gets to appear responsive to user concerns, and (from what I've seen) its reputation only suffered a glancing blow. JoshuaFeasel 10:56, 15 January 2008 (EST)
    • I realize now that this question could be taken two ways: (1) Was the Beacon rollout better for Facebook in the end, despite the outcry over privacy? and (2) Did Facebook plan the Beacon rollout this way, in light of the advantages stated above? I'm interested in both, but let's stick with the first question, as the second question makes me look a little paranoid. JoshuaFeasel 11:01, 15 January 2008 (EST)


Firefox Workaround to Block Facebook Beacon

Here's a workaround for Firefox that I received a few days into this drama; it lets you block Facebook Beacon. Jumpingdeeps 10:15, 13 January 2008 (EST)

  • I'm pretty sure that now there is a global opt-out option for this in the Facebook privacy settings, but according to one of our articles for today, even with this setting turned on, "information about your habits on these third party sites are still sent along with your e-mail address to Facebook." I'm not sure whether this workaround blocks those transmissions as well. --NikaE 20:29, 14 January 2008 (EST)
    • The workaround website seems to suggest that you are preventing the partner site from sending requests to Facebook Beacon, meaning Facebook actually doesn't get any information about you. Now I'm sure Facebook could get around this by changing, or making opaque, how these requests are sent to it...but that would be evil! (j/k) Jumpingdeeps 20:54, 14 January 2008 (EST)
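For anyone curious how this kind of block works mechanically: Beacon depends on the partner page making a request back to a facebook.com URL, so a browser-level filter that refuses any request matching that URL keeps the data from ever leaving your machine, which fits the workaround site's description above. In Adblock Plus, for example, a filter of roughly the following shape would do it; the Beacon URL path shown here is a guess on my part, so defer to the workaround itself for the actual pattern.

    http://*.facebook.com/beacon/*
    https://*.facebook.com/beacon/*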

Future of the Internet, Chapter 9

  • In his article, Singleton suggests that citizens who hope to escape gossip in small neighborhoods can move to large metropolitan areas where they could enjoy anonymity in the crowds. However, Prof. Zittrain points out that as facial recognition software becomes increasingly powerful and storage costs become increasingly negligible, tourists' photographs can suddenly become a way to easily discover who was where, when. As these problems increase, perhaps the only solution is to take the opposite of Singleton's advice and move back to an uncrowded rural area. Are those who crave privacy hopelessly caught between a rock and a hard place? --NikaE 21:13, 14 January 2008 (EST)
    • Professor Zittrain’s article reminds me of an argument Professor Weinreb once made in class. We were talking about the Fourth Amendment, and Professor Weinreb noted that our understanding of privacy is different in an era of ubiquitous cell phones. Our parents used phone booths, counting on a walled-off space to create a legitimate expectation of privacy while out in public. Now, people have many of the same conversations out in the open, relying instead on the notion of anonymity to preserve their privacy. The other day, someone was out in public relaying the details of a tragilarious date. Certainly, she risked a loss of privacy had I (or anyone around her) been able to personally identify her; nevertheless, the conversation was still in some sense private so long as no one could identify her. In some ways, I don’t think the Privacy 2.0 problem is qualitatively different; rather, the generativity of the internet mostly just exacerbates the degree to which it is a problem. Anonymity in public is hardly ever certain, but you can at least have a reasonable expectation of anonymity outside those places physically frequented by people you know. Mashups allow identification by people not physically present at the time a photo or video is taken, changing the likelihood of identification. I think it is this loss of anonymity, rather than the idea of judging actions in public, that bothers me more.
  • Professor Zittrain discusses the possibility of reputation systems moving from cyberspace to the real world and notes its existence in "embryonic form" with "How's My Driving" programs. Therein, Professor Strahilevitz asserts that "How's My Driving" should be extended to citizens. Although I agree that implementing this technology would most likely reduce accidents and increase accountability, I think that Prof. Strahilevitz overestimates honest reporting. He mentions the potential for abuse; however, he does not focus on the threat that such abusive behavior poses. As mentioned, racism, political views, and other malicious motivations may encourage drivers to give another driver a "thumbs-down." If this reputation system is adopted, the government would have to figure out a mechanism for differentiating bad reporting from honest reporting (one possible filtering mechanism is sketched after this list). I think that the political, racist, and other concerns are greater than Prof. Strahilevitz anticipates. Although I have yet to read his article in its entirety and my knowledge of his suggested program is limited to Prof. Zittrain's chapter, I cannot see how this system could feasibly work without punishments for false reports. One would have to take into account teenagers acting mischievously and aggressive drivers, among other individuals who would be quick to hand out a "thumbs-down" rating. I understand that internet programs that rate sellers (e.g., eBay) and friends (one of the social networks) may have positive results, but would the government want to experiment similarly with someone's driving record? As seen on sites such as dontdatehimgirl.com, people with bad intentions may compromise such reputation systems. KStanfield 01:17, 15 January 2008 (EST)
    • Are we giving too little credit to humanity? Malicious/false reporting could occur on eBay (or any other reputational system). Granted, the risk of abuse could be higher for "How's My Driving", but perhaps it's a non-issue. Cjohnson 10:12, 15 January 2008 (EST)
  • Zittrain hints that later in his book, he will address the potential (or lack thereof) to "sift out what we might judge to be bad generative results from good ones..." I'm curious what 'points of control' exist in Privacy 2.0. Cjohnson 10:12, 15 January 2008 (EST)
  • What happens when Privacy 1.0 and Privacy 2.0 collide? That is, when the government or corporations begin to utilize the "digital dossiers" compiled/produced by the masses? Zittrain's chapter seems to focus on end-users making use of the end-user generated dossiers, and the privacy concerns that result. Another implication could be an enhancement of Solove's concerns about private corporations invading privacy - web 2.0 allows them to costlessly outsource the compilation of personal data. Cjohnson 10:12, 15 January 2008 (EST)
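On KStanfield's question above about separating honest "How's My Driving" reports from malicious ones: one family of mechanisms borrowed from online reputation systems discounts reports from reporters whose own track record is poor, and acts only when credible reports come from several independent sources. A minimal sketch of such a filter, with invented names, weights, and thresholds (this is not Strahilevitz's proposal, just an illustration of one possible approach):

    // Hypothetical sketch: weight each "thumbs-down" by the reporter's own
    // track record, and only flag a driver once enough independent, credible
    // reports accumulate. Names and thresholds are illustrative.
    interface Report {
      reporterId: string;
      targetDriverId: string;
    }

    // Fraction of a reporter's past reports that were later corroborated
    // (e.g., matched another independent report of the same incident).
    type CredibilityScores = Map<string, number>;

    function shouldFlagDriver(
      reports: Report[],
      credibility: CredibilityScores,
      threshold = 3.0,
    ): boolean {
      // Count each distinct reporter once, weighted by credibility, so a
      // single grudge-holder spamming reports cannot flag a driver alone.
      const seen = new Set<string>();
      let weighted = 0;
      for (const r of reports) {
        if (seen.has(r.reporterId)) continue;
        seen.add(r.reporterId);
        weighted += credibility.get(r.reporterId) ?? 0.5; // unknown reporters get a middling weight
      }
      return weighted >= threshold;
    }

This does not eliminate abuse, but it raises the cost of the mischievous-teenager or road-rage scenario considerably, much as eBay-style systems discount feedback from brand-new accounts.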

Facebook Responds to Privacy Concerns

  • I think one of the most important points to take away from this is that default settings matter. When the default on Facebook was to have Beacon as an opt-out service, there were many stories of users who hadn't even noticed pop-up windows asking whether they wanted their purchase to be reported on Facebook. As a result, some holiday presents that users had bought for their friends appeared in those same friends' Facebook news feeds. Thus, the opt-out default turned out to cause real problems. Even though Facebook has changed many of its policies regarding Beacon, in order to entirely opt out of the service users must affirmatively change their Facebook settings and state that they never want their activity on outside websites reported to Facebook. I get the sense that if Beacon had been introduced entirely as an opt-in feature, almost no average users would have changed their privacy settings to allow outside websites to sometimes post their behavior on Facebook. Clearly, default settings matter. --NikaE 20:15, 14 January 2008 (EST)
  • Perhaps I do not understand how Beacon technology works, but it seems odd to me that all the attention/criticism is directed at Facebook. Is it not the third-party site (blockbuster.com, overstock.com, etc.) that is transmitting the "private" information? Aren't these sites the ones that are violating a customer's confidence, by sending data to FB? Cjohnson 10:19, 15 January 2008 (EST)
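On Cjohnson's question about how Beacon works, the mechanism as publicly reported at the time is roughly this: the partner site embeds a small Facebook-hosted script, and when a user completes an action (a purchase, a rental), the partner page fires a request off to a facebook.com address describing that action; because the request goes to facebook.com, the browser attaches the user's Facebook cookies, which is what lets Facebook tie the action to a particular account. So both parties are implicated: the partner decides what to report, and Facebook does the identifying. A minimal sketch of the browser-side flow, with an endpoint and parameter names that are purely illustrative rather than Facebook's actual API:

    // Hypothetical sketch of a Beacon-style report fired from a partner page.
    // The URL and parameter names are illustrative, not Facebook's real API.
    function reportActionToFacebook(action: string, item: string): void {
      const params = new URLSearchParams({
        source: "partner-store.example.com", // the partner site doing the reporting
        action,                              // e.g., "purchase"
        item,                                // e.g., the DVD or gift just bought
      });
      // An image request aimed at facebook.com: a 2007-era browser attaches the
      // user's facebook.com cookies to it, so Facebook can link the action to a
      // specific account even though the partner never learns which account.
      const img = new Image();
      img.src = "https://www.facebook.com/beacon/report?" + params.toString();
    }

    // The partner's checkout page would call something like:
    // reportActionToFacebook("purchase", "holiday gift");

This is why the criticism lands on Facebook as well as the partners: the partners transmit the activity, but only Facebook can attach a name to it.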

Privacy as Censorship

Government vs. Private Companies

  • This article makes the claim that government databases pose much greater threats to the public than private databases. However, in Prof. Zittrain's book, he points out that the Privacy Act of 1974 applied a set of fair information practices to government agencies' records, but "Congress never enacted a comparable comprehensive regulatory scheme for private databases." Even if government records pose the potential for more harm to the public, could the lack of regulation of private databases in reality pose a greater threat? --NikaE 20:19, 14 January 2008 (EST)
  • Singleton argues that government databases pose a greater threat because of the potential for abuse. He lists acts by government employees who have access to the databases and thereafter use the information to commit crimes. However, he asserts that abusive behavior by private database employees should not lead to the elimination of private databases, since punishments exist and the companies would be responsible for their employees' behavior. Is the abuse of government databases therefore unique? KStanfield 00:13, 15 January 2008 (EST)

Free Speech

  • Singleton is very concerned with free speech, and worries that private institutions' right to free speech will be stifled if they are not allowed to freely share database information with third parties. However, isn't there a very real chance that free expression among the general public could be stifled if their information is indiscriminately shared among third parties? For example, a person may be less willing to sign a petition if they worry that their contact information will be sold to telemarketers. Hasn't this person's free speech right been infringed upon? Are we willing to sacrifice this right in order to preserve the right of companies to exchange database information? --NikaE 20:25, 14 January 2008 (EST)
  • Pardon my haven't-taken-a-class-on-it-so-am-confused question: Do corporations and businesses have free speech rights? Wikipedia weighs in on Corporate Personhood. Jumpingdeeps 20:40, 14 January 2008 (EST)

Other Privacy Concerns

  • My biggest concern with this article, though, was that it didn't seem to ever really address the problems of databases that are perfectly indexed and searchable, even if they're not available to the general public. The article kind of skated over this, stating that even if "Mrs. Jones" lives in a small town, "the employees of the creator of the database usually will not live anywhere close by." If even one employee does, and is even slightly curious about Mrs. Jones, he can easily find out a lot of things about her simply by using his access to the database. Furthermore, private organizations often inadvertently make private data available (Prof. Zittrain points to several banks and credit card agencies that have done so), and once this happens, the easy searchability of the database becomes a really big problem. Ultimately, a database is more than just discreet gossip about individuals. Information can be easily picked out of it by those who have an interest in that information. For example, there has been a rumor (I'm not sure if it's substantiated or not) that Facebook employees can see which users view other users' profiles. Operating under the assumption that this is true, for the great majority of Facebook users, who do not know Facebook employees, this is not really of any consequence. However, for the few users that employees take an interest in, this could be very concerning. Furthermore, employees presumably don't have to sort through every log of every user, but are able to find what they want through a simple, automated search (see the short sketch after this list for how trivial such a lookup is). The searchability of databases, not the databases themselves, is what is most concerning from a privacy perspective, and I'm not sure that Singleton takes this argument on squarely. --NikaE 20:46, 14 January 2008 (EST)
  • The author discusses the way we collect information on each other outside of the internet, yet ignores the extent to which social norms in effect prevent the sharing of that information. Does the absence of personal relationships with customers mean that we need to add regulation to stand in the stead of social norms for e-commerce? -Lciaccio 10:57, 15 January 2008 (EST)
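To make NikaE's searchability point concrete: once customer records sit in an indexed store, singling out one person is a one-line query rather than an afternoon of rummaging through files. A minimal sketch, with an invented schema (nothing here corresponds to any real company's database):

    // Hypothetical sketch: with an indexed database, the "curious employee"
    // case is just a filter. The schema is invented for illustration.
    interface CustomerRecord {
      name: string;
      town: string;
      purchases: string[];
    }

    function lookUp(db: CustomerRecord[], name: string, town: string): CustomerRecord[] {
      // No manual sifting required: everything on file about one person
      // comes back from a single automated search.
      return db.filter((r) => r.name === name && r.town === town);
    }

    // lookUp(records, "Mrs. Jones", "Smalltown") returns everything on file about her.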

Reputation Economies

  • Prof. Citron's piece illustrates the problems both with more anonymity and with less. Some users feel the need to be somewhat anonymous online in order to protect themselves from attacks in both cyberspace and the real world. Bloggers had their home addresses and social security numbers revealed. This would certainly indicate that some databases are too accessible to wrongdoers, and that individuals, corporations, and the government all need to be more vigilant about protecting information. However, Citron also points out that "groups attack less when their members are afraid of getting caught." One of the more obvious solutions to this problem would be to require that the accounts of people who wish to post on blogs and websites be tied to valid, non-anonymous email addresses (such as those associated with a company or university). However, the framers might find this objectionable: as Prof. Zittrain points out, they "embraced anonymous speech in the political sphere." Perhaps another solution would be to, in the context of message boards like AutoAdmit, indemnify the operators of message boards if they required valid, non-anonymous (perhaps .edu?) email addresses to be associated with user accounts (the technical side of such a check is trivial; see the sketch after this list). What are some other possible solutions to these problems? --NikaE 21:04, 14 January 2008 (EST)
  • Should cyber-harassment be a new crime, or is it adequately addressed under more traditional doctrines? Making serious death threats is a criminal offense, but posting a person's phone number and social security number probably isn't even a misdemeanor. Would this be more properly addressed as speech meant to incite violence? How would you go about separating a legitimate intent to cause an individual harm from a mean-spirited joke, based solely on an online posting? Anna 22:12, 14 January 2008 (EST)
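On the .edu suggestion above: the technical check itself is simple, which is part of why the hard questions are the policy ones (who counts as "non-anonymous," and what the board operator stores and could be forced to disclose). A minimal sketch of the registration-time check, with an illustrative allow-list and an assumed follow-up confirmation email:

    // Hypothetical sketch of the ".edu only" registration check suggested above.
    // The allowed-suffix list and the confirmation step are illustrative.
    const ALLOWED_SUFFIXES = [".edu"]; // could also list specific company domains

    function isAcceptableAddress(email: string): boolean {
      const at = email.lastIndexOf("@");
      if (at < 1) return false; // must have both a local part and a domain
      const domain = email.slice(at + 1).toLowerCase();
      return ALLOWED_SUFFIXES.some((suffix) => domain.endsWith(suffix));
    }

    // Registration would then mail a confirmation link to the address, so the
    // poster has to actually control it before the account can post.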

Demos

Who's a Rat

Gawker Stalker

  • According to the ever-reliable Page Six, as re-reported in Gawker, George Clooney and/or his publicist circulated a plan to flood GawkerStalker with "fake" sightings to render the site worthless. Lk37 01:49, 15 January 2008 (EST)

RapLeaf

  • At the risk of sounding paranoid, I wonder whether simply searching for my reputation via my email address adds my email to some database that RapLeaf then proceeds to spam. Take a look at their own Privacy Policy.
"We may send information to members and nonmembers by email about our services, including special alerts, service-related announcements, offers, awards, surveys, contests, promotions and updates. Members and nonmembers will be given the option not to receive these types of communications by clicking on the unsubscribe link in the footer of any Rapleaf email." (emphasis added)
What a drag. Jumpingdeeps 21:09, 14 January 2008 (EST)