Pre-class Discussion for Jan 15

From Cyberlaw: Internet Points of Control Course Wiki
Revision as of 14:41, 15 January 2008 by Lciaccio (talk | contribs) (→‎Reputation Economies)

Questions for Chris Kelly

  • (from Question tool): With Facebook opening up to anyone (instead of being closed to certain (i.e. educational) networks), what are you doing to avoid/mitigate some of the privacy and security issues that have been encountered on MySpace?
  • (from Question tool): Do Facebook's commercial strengths/strategy lie more in providing the best features/development platform to attract users, or in aggregating/analyzing the most information about users for better advertising monetization?
  • How does FB's international expansion affect its privacy policy? Is the policy consistent across all users, or do you cater to different markets? Cjohnson 21:42, 14 January 2008 (EST)
  • How did the community reaction to the newsfeed change the approach you took to the social ads rollout? What (if anything) will you do differently in the future, based on these experiences? -Lciaccio 10:48, 15 January 2008 (EST)
  • Was the Beacon rollout really a mistake? Had Facebook implemented Beacon from the start as an opt-out system, people still would have complained and Facebook would have had to roll back the program further (possibly to an opt-in system) or risk appearing intransigent. But by "implementing first, apologizing later," Facebook gets more of what it wants (an opt-out system instead of an opt-in), it gets to appear responsive to user concerns, and (from what I've seen) its reputation only suffered a glancing blow. JoshuaFeasel 10:56, 15 January 2008 (EST)
    • I realize now that this question could be taken two ways: (1) Was the Beacon rollout better for Facebook in the end, despite the outcry over privacy? and (2) Did Facebook plan the Beacon rollout this way, in light of the advantages stated above? I'm interested in both, but let's stick with the first question, as the second question makes me look a little paranoid. JoshuaFeasel 11:01, 15 January 2008 (EST)
  • Does Facebook retain information that users have removed from their profiles? If so, how is that information stored, and what is the purpose of retaining it? Dankahn 11:29, 15 January 2008 (EST)

Firefox Workaround to Block Facebook Beacon

Here's a workaround for Firefox that I received a few days into this drama that lets you block Facebook Beacon. Jumpingdeeps 10:15, 13 January 2008 (EST)

  • I'm pretty sure that now there is a global opt-out option for this in the Facebook privacy settings, but according to one of our articles for today, even with this setting turned on, "information about your habits on these third party sites are still sent along with your e-mail address to Facebook." I'm not sure if this workaround blocks these as well? --NikaE 20:29, 14 January 2008 (EST)
    • The workaround website seems to suggest that you are preventing the partner site from sending requests to Facebook beacon, meaning Facebook actually doesn't get any information about you. Now I'm sure Facebook can get around this by changing/making opaque how these requests are sent to it...but that would be evil! (j/k) Jumpingdeeps 20:54, 14 January 2008 (EST)
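The workaround described above is essentially a client-side blocklist: before the browser sends a request from the partner page, the request's URL is matched against block patterns, and matching requests are dropped so Facebook never receives them. A minimal sketch of that matching logic (the domain paths and example URLs here are illustrative assumptions, not Facebook's actual Beacon endpoints):

```python
import re

# Illustrative block patterns; a real filter list would target the
# actual script/endpoint URLs the Beacon partner sites call.
BLOCK_PATTERNS = [
    re.compile(r"^https?://([a-z0-9-]+\.)*facebook\.com/beacon/"),
]

def is_blocked(url: str) -> bool:
    """Return True if an outgoing request URL matches a block pattern."""
    return any(p.search(url) for p in BLOCK_PATTERNS)

def filter_requests(urls):
    """Drop blocked requests; only the remainder are actually sent."""
    return [u for u in urls if not is_blocked(u)]

outgoing = [
    "http://www.facebook.com/beacon/track.php",  # hypothetical tracking call
    "http://www.overstock.com/checkout",         # normal page load
]
print(filter_requests(outgoing))
```

Because the filter runs in the user's browser, the partner site never gets to send the tracking request at all, which matches Jumpingdeeps's reading that Facebook simply receives nothing.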

Future of the Internet, Chapter 9

  • In his article, Singleton suggests that citizens who hope to escape gossip in small neighborhoods can move to large metropolitan areas where they can enjoy anonymity in the crowds. However, Prof. Zittrain points out that as facial recognition software becomes increasingly powerful and storage costs become increasingly negligible, tourists' photographs can suddenly become a way to easily discover who was where, when. As these problems increase, perhaps the only solution is to take the opposite of Singleton's advice and move back to an uncrowded rural area. Are those who crave privacy hopelessly caught between a rock and a hard place? --NikaE 21:13, 14 January 2008 (EST)
    • Professor Zittrain’s article reminds me of an argument Professor Weinreb once made in class. We were talking about the Fourth Amendment, and Professor Weinreb noted that our understanding of privacy is different in an era of ubiquitous cell phones. Our parents used phone booths, counting on a walled-off space to create a legitimate expectation of privacy while out in public. Now, people have many of the same conversations out in the open, relying instead on the notion of anonymity to preserve their privacy. The other day, someone was out in public relaying the details of a tragilarious date. Certainly, she risked a loss of privacy had I (or anyone around her) been able to personally identify her; nevertheless, the conversation was still in some sense private so long as no one could identify her. In some ways, I don’t think the Privacy 2.0 problem is qualitatively different; rather, the generativity of the internet mostly just exacerbates the degree to which it is a problem. Anonymity in public is hardly ever certain, but you can at least have a reasonable expectation of anonymity outside those places physically frequented by people you know. Mashups allow identification by people not physically present at the time a photo or video is taken, changing the likelihood of identification. I think it is this loss of anonymity, rather than the idea of judging actions in public, that bothers me more.
  • Professor Zittrain discusses the possibility of reputation systems moving from cyberspace to the real world and notes that they already exist in "embryonic form" in "How's My Driving" programs. In the article Zittrain cites, Professor Strahilevitz asserts that "How's My Driving" should be extended to ordinary citizens. Although I agree that implementing this technology would most likely reduce accidents and increase accountability, I think that Prof. Strahilevitz overestimates honest reporting. He mentions the potential for abuse; however, he does not focus on the threat that such abusive behavior poses. As mentioned, racism, political views, and other malicious motivations may encourage drivers to give another driver a "thumbs-down." If this reputation system is adopted, the government would have to devise a mechanism for differentiating bad reporting from honest reporting. I think that the political, racist, and other concerns are greater than Prof. Strahilevitz anticipates. Although I have yet to read his article in its entirety and my knowledge of his suggested program is limited to Prof. Zittrain's chapter, I cannot see how this system could feasibly work without punishments for false reports. One would have to take into account mischievous teenagers and aggressive drivers, among other individuals who would be quick to give a "thumbs-down" rating. I understand that internet programs that rate sellers (e.g., eBay) and friends (on one of the social networks) may have positive results, but would the government want to experiment similarly with someone's driving record? As seen on sites such as dontdatehimgirl.com, people with bad intentions may compromise such reputation systems. KStanfield 01:17, 15 January 2008 (EST)
    • Are we giving too little credit to humanity? Malicious/false reporting could occur on eBay (or any other reputation system). Granted, the risk of abuse could be higher for "How's My Driving", but perhaps it's a non-issue. Cjohnson 10:12, 15 January 2008 (EST)
    • My concern with something like a "How's My Driving?" system is that negative experiences would probably be over-reported. If you go to a site like tripadvisor.com, most of the reviews are either glowing or seething. Few people who have a middle-of-the-road experience feel compelled to go rate their experience. This would probably also be true of driving. Only really good or really bad experiences would make people bother to rate other drivers. But what constitutes a "really good" experience with another driver? Being allowed to merge? Something simple and customary like using a turn signal would probably not motivate other drivers to give you a thumbs up, but not using your turn signal would probably get you a thumbs down. I suppose this could be aggregated across all drivers, and what counts as a good driver would be based on having only a few thumbs down rather than a multitude of thumbs up? Khoffman 11:27, 15 January 2008 (EST)
    • I recognize that false/bad reporting can happen on eBay and other reputation systems; however, I noted that eBay overall has positive results. As for driving, I am reluctant to implement a similar system because the abuse will have a different result. Although the positive may outweigh the negative, is it justified if certain groups suffer the negative results? I do not think I'm giving humanity too little credit. It only takes a handful of bad-acting citizens to destroy the usefulness of this reputation system. The "How's My Driving" program may be a great idea, but I am not eager to implement such a system until there are mechanisms to deter bad behavior. KStanfield 13:09, 15 January 2008 (EST)
  • Zittrain hints that later in his book, he will address the potential (or lack thereof) to "sift out what we might judge to be bad generative results from good ones..." I'm curious what 'points of control' exist in Privacy 2.0 Cjohnson 10:12, 15 January 2008 (EST)
  • What happens when Privacy 1.0 and Privacy 2.0 collide? That is, when the government or corporations begin to utilize the "digital dossiers" compiled/produced by the masses? Zittrain's chapter seems to focus on end-users making use of the end-user generated dossiers, and the privacy concerns that result. Another implication could be an enhancement of Solove's concerns about private corporations invading privacy - web 2.0 allows them to costlessly outsource the compilation of personal data. Cjohnson 10:12, 15 January 2008 (EST)
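On the aggregation question raised in the "How's My Driving" thread above: one way to damp malicious or over-reported thumbs-downs is to flag a driver only when the complaint rate is both based on enough reports and well above the fleet-wide average. A hedged sketch of that idea (the thresholds and sample data are invented for illustration, not taken from Strahilevitz's proposal):

```python
from collections import namedtuple

Driver = namedtuple("Driver", "plate thumbs_up thumbs_down")

def complaint_rate(d):
    """Fraction of a driver's reports that are thumbs-down."""
    total = d.thumbs_up + d.thumbs_down
    return d.thumbs_down / total if total else 0.0

def flag_bad_drivers(drivers, min_reports=10, multiple=1.5):
    """Flag only drivers with enough reports AND a complaint rate at
    least `multiple` times the fleet-wide average, so that a handful of
    malicious thumbs-downs cannot sink an otherwise ordinary driver."""
    avg = sum(complaint_rate(d) for d in drivers) / len(drivers)
    return [
        d.plate for d in drivers
        if d.thumbs_up + d.thumbs_down >= min_reports
        and complaint_rate(d) >= multiple * avg
    ]

fleet = [
    Driver("ABC123", thumbs_up=40, thumbs_down=2),   # ordinary driver
    Driver("XYZ789", thumbs_up=3,  thumbs_down=27),  # consistently reported
    Driver("JKL456", thumbs_up=1,  thumbs_down=3),   # too few reports to judge
]
print(flag_bad_drivers(fleet))  # only XYZ789 is flagged
```

This is only a statistical damper, not a cure: as KStanfield notes, a coordinated group of bad actors could still push an innocent driver over any threshold, which is why penalties for false reports would also be needed.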

Utilitarian Take

On p. 213, Prof. Zittrain states that a utilitarian might say the high number of hits viewing the "Star Wars Kid" might evince a benefit exceeding the cost in the form of loss of privacy to one person. How do we weigh social costs and benefits in instances like this? Are we back to the "elite" exercise of judging the true benefit of entertainment separate from popularity?

Aside from the costs to the student, is it possible there are costs to society we are missing? For example, this might desensitize a viewer to privacy concerns, or even acclimate them even more to having fun at another's expense (although I'm not sure this can be done to a greater extent than high school itself does). Are they missing out on better forms of entertainment when viewing things that play to these baser instincts? -Lciaccio 13:24, 15 January 2008 (EST)

Facebook Responds to Privacy Concerns

  • I think one of the most important points to take away from this is that default settings matter. When the default on Facebook was to have Beacon as an opt-out service, there were many stories of users who hadn't even noticed pop-up windows asking whether they wanted their purchase to be reported on Facebook. As a result, some holiday presents that users had bought for their friends appeared in those same friends' Facebook news feeds. Thus, the opt-out default turned out to cause real problems. Even though Facebook has changed many of its policies regarding Beacon, in order to entirely opt out of the service one must affirmatively change one's Facebook settings and state that one never wants activity on outside websites reported to Facebook. I get the sense that if Beacon had been introduced entirely as an opt-in feature, almost no average users would have changed their privacy settings to allow outside websites to sometimes post their behavior on Facebook. Clearly, default settings matter. --NikaE 20:15, 14 January 2008 (EST)
    • I think you hit the nail on the head with this one. Perhaps more egregious was the user whose purchase of an engagement ring at overstock.com was broadcast. (Conveying at once an intent to propose and his frugal nature.) It seems a tricky calculus for businesses trying to weigh immediate profitability with sensitivities of customers/subscribers. -Lciaccio 13:29, 15 January 2008 (EST)
  • Perhaps I do not understand how Beacon technology works, but it seems odd to me that all the attention/criticism is directed at Facebook. Is it not the third-party site (blockbuster.com, overstock.com, etc.) that is transmitting the "private" information? Aren't these sites the ones that are violating a customer's confidence, by sending data to FB? Cjohnson 10:19, 15 January 2008 (EST)
    • Maybe that is evidence that consumers have just come to accept transmission of private data; it is the publication of such that goes beyond our comfort zone. On the other hand, maybe users thought that FB was 'spying' on them, and didn't realize the partner sites were complicit. -Lciaccio 13:29, 15 January 2008 (EST)

Links

Privacy as Censorship

Government vs. Private Companies

  • This article makes the claim that government databases pose much greater threats to the public than private databases. However, in Prof. Zittrain's book, he points out that the Privacy Act of 1974 applied a set of fair information practices to government agencies' records, but "Congress never enacted a comparable comprehensive regulatory scheme for private databases." Even if government records pose the potential for more harm to the public, could the lack of regulation of private databases in reality pose a greater threat? --NikaE 20:19, 14 January 2008 (EST)
  • Singleton argues that government databases pose a greater threat because of the potential for abuse. He lists acts by government employees who have access to the databases and thereafter use the information to commit crimes. However, he asserts that abusive behavior by private database employees should not lead to the elimination of private databases, since punishments exist and the companies would be responsible for their employees' behavior. Is the abuse of government databases therefore unique? KStanfield 00:13, 15 January 2008 (EST)
  • Are there any limitations on the "free speech" rights of those who assemble private databases? As far as I can tell, Singleton thinks there are not. Taken to its extreme, this seems to advocate that companies should be allowed to make these private databases publicly available. Or to share the information with any party. But wouldn't either of these actions undermine his argument that government databases need to be regulated? That is, the government could make use of private databases, because the owners of those databases can simply hand them over, directly or indirectly. Cjohnson 11:16, 15 January 2008 (EST)

Links

Free Speech

  • Singleton is very concerned with free speech, and worries that private institutions' right to free speech will be stifled if they are not allowed to freely share database information with third parties. However, isn't there a very real chance that free expression among the general public can be stifled if their information is indiscriminately shared among third parties? For example, a person may be less willing to sign a petition if they worry that their contact information will be sold to telemarketers. Hasn't this person's free speech right been infringed upon? Are we willing to sacrifice this right in order to preserve the right of companies to exchange database information? --NikaE 20:25, 14 January 2008 (EST)
  • Pardon my haven't-taken-a-class-on-it-so-am-confused question: Do corporations and businesses have free speech rights? Wikipedia weighs in on Corporate Personhood. Jumpingdeeps 20:40, 14 January 2008 (EST)
    • This is one of the weak points in this article: the free speech rights that Singleton is so concerned with would be categorized as commercial speech and would therefore be granted considerably lower protection under the 1st Amendment. The comparison that Singleton implicitly draws, between one company communicating to another that Mrs. Jones purchased a lawnmower, and a journalist "writing a story about [Mrs. Jones's] activities" (pg. 5) simply doesn't hold up. Mshacham 12:15, 15 January 2008 (EST)
  • I think this is one case in which the 'lesser' protection for commercial speech comes into play; although still under the auspices of the first amendment, this is not the paradigmatic example of what it was trying to protect.-Lciaccio 13:34, 15 January 2008 (EST)
  • Wouldn't a requirement that companies disclose how the information might be shared and used find a balance that still satisfies Singleton's concerns? Companies would still be free to share the information, but customers would likewise be free not to hand it over in the first place if they didn't like how it might be shared. This would also bring about 'free market' effects, by allowing businesses to compete with their privacy policies for the rules that customers prefer. -Lciaccio 13:34, 15 January 2008 (EST)

Other Privacy Concerns

  • My biggest concern with this article, though, was that it didn't seem to ever really address the problems of databases that are perfectly indexed and searchable, even if they're not available to the general public. The article kind of skated over this, stating that even if "Mrs. Jones" lives in a small town, "the employees of the creator of the database usually will not live anywhere close by." If even one employee does, and is even slightly curious about Mrs. Jones, he can easily find out a lot of things about her simply by using his access to the database. Furthermore, private organizations often inadvertently make private data available (Prof. Zittrain points to several banks and credit card agencies that have done so), and once this happens, the easy searchability of the database becomes a really big problem. Ultimately, a database is more than just discreet gossip about individuals. Information can be easily picked out of it by those who have an interest in that information. For example, there has been a rumor (I'm not sure if it's substantiated or not) that Facebook employees can see which users view other users' profiles. Operating under the assumption that this is true, for the great majority of Facebook users, who do not know Facebook employees, this is not really of any consequence. However, for the few users that employees take an interest in, this could be very concerning. Furthermore, employees presumably don't have to sort through every log of every user, but are able to find what they want through a simple, automated search. The searchability of databases, not the databases themselves, is more concerning from a privacy perspective, and I'm not sure that Singleton takes this argument on squarely. --NikaE 20:46, 14 January 2008 (EST)
  • The author discusses the way we collect information on each other outside of the internet, yet ignores the extent to which social norms in effect prevent the sharing of that information. Does the absence of personal relationships with customers mean that we need to add regulation to stand in the stead of social norms for e-commerce? -Lciaccio 10:57, 15 January 2008 (EST)
  • I would actually be a bit sympathetic to Singleton were the possibility of information gathering much more limited. I have two objections to private (and public) information gathering. First, there does not seem to be a limit on the comprehensiveness of a particular list. You can tell a lot about a person from the things he or she buys. I wouldn’t personally mind if Sears knew that I bought a lawn mower in the past year; however, I would find it much more problematic if Sears kept a record of every purchase I had ever made. Even if a particular store kept only a limited amount of information, there is no limit on the number of stores collecting information. Various lists could be aggregated until they provide the same degree of comprehensiveness. Singleton points to a traditional lack of regulation, but perhaps that fact was partly a result of technological limits. Free speech didn’t need to traditionally account for privacy because the possibility of mass commercial intrusion was much more limited than it is today. Data was less easily gathered, aggregated, and transmitted to others. It seems similar to the debate we had on perfect enforcement; in a world with imperfect enforcement, penalties need to be higher because of the possibility of undetected behavior. Similarly, noting privacy law as it traditionally stood is not dispositive, because the law may have taken other factors (like technological possibilities) into account. Ac 11:14, 15 January 2008 (EST)
  • Singleton repeatedly emphasizes that private databases are used for marketing / selling purposes, and are therefore not really a cause for concern. Does he give enough attention to the problems that can result when these databases are compromised? Even if companies are allowed to assemble private databases, should they be required to meet minimum security / encryption standards? This seems particularly relevant when the database contains info like credit card numbers. Cjohnson 11:16, 15 January 2008 (EST)
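NikaE's searchability point above can be made concrete: once records are indexed by name, pulling everything about one person is a single constant-time lookup no matter how large the database grows, so a curious employee needs one query rather than diligence. A toy sketch of the contrast (all records invented for illustration):

```python
# A toy "customer database": the same records, scanned vs. indexed.
records = [
    {"name": "Mrs. Jones", "town": "Smallville", "purchase": "lawn mower"},
    {"name": "Mr. Smith",  "town": "Metropolis", "purchase": "toaster"},
]

# Unindexed: finding one person means scanning every record (like
# sifting through paper files or scattered gossip).
def scan(records, name):
    return [r for r in records if r["name"] == name]

# Indexed: build a name -> records map once; afterwards, one dictionary
# lookup retrieves everything about a person, at any database size.
index = {}
for r in records:
    index.setdefault(r["name"], []).append(r)

def lookup(index, name):
    return index.get(name, [])

print(lookup(index, "Mrs. Jones"))
```

The results are identical; only the cost differs, which is exactly why the index, not the raw data, is what changes the privacy calculus.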

Reputation Economies

  • Prof. Citron's piece illustrates the problems with both more anonymity, and with less. Some users feel the need to be somewhat anonymous online in order to protect themselves from attacks in both cyberspace and the real world. Bloggers had their home addresses and social security numbers revealed. This would certainly indicate that some databases are too accessible to wrongdoers, and that individuals, corporations, and the government all need to be more vigilant about protecting information. However, Citron also points out that "groups attack less when their members are afraid of getting caught." One of the more obvious solutions to this problem would be to require that the accounts of people who wish to post on blogs and websites be tied to valid, non-anonymous email addresses (such as those associated with a company or university). However, the framers might find this objectionable: as Prof. Zittrain points out, they "embraced anonymous speech in the political sphere." Perhaps another solution would be to, in the context of message boards like AutoAdmit, indemnify the operators of message boards if they required valid, non-anonymous (perhaps .edu?) email addresses to be associated with user accounts. What are some other possible solutions to these problems? --NikaE 21:04, 14 January 2008 (EST)
    • This would seem to close the "black hole" of responsibility that is created by the intersection of Section 230 immunity and completely anonymous websites. Yet there are indeed costs to erasing anonymity; it provides a valuable service toward improving the dialogue in some areas, especially when people are posting from countries like China, where arguably valuable speech can be severely punished. I would think it would be possible to create a system whereby personally identifiable information is logged and kept for unlawful cases of harassment, copyright violation, defamation, etc. Only the courts would have the key to unlock the information and connect it to a user, so that those behaving lawfully can keep their privacy intact. Of course, it becomes difficult to draw a line that allows access by US courts and blocks it from the Chinese government, at least without making value judgments many are unprepared to make. -Lciaccio 13:41, 15 January 2008 (EST)
  • Should cyber-harassment be a new crime, or is it adequately addressed under more traditional doctrines? Making serious death threats is a criminal offense, but posting a person's phone number and social security number probably isn't even a misdemeanor. Would this be more properly addressed as speech meant to incite violence? How would you go about separating a legitimate intent to cause an individual harm from a mean-spirited joke based solely on an online posting? Anna 22:12, 14 January 2008 (EST)
    • This reminded me of sec. 230(b)(5), which refers to "vigorous enforcement of Federal criminal laws to deter and punish...harassment by means of computer." I don't know what these laws are, but they would seem to be highly relevant to the discussion. Ac 11:19, 15 January 2008 (EST)
    • Posting a phone number, social security number, license plate, etc. can all cause harm to the person whose information is revealed (recall the list of abortion providers). Tying back to Singleton's article, these are a form of "private databases" - are we truly comfortable with such databases being entirely unregulated? Cjohnson 11:31, 15 January 2008 (EST)
  • See also this page, which is a memo I wrote for another class related to this topic (based on an AutoAdmit-style hypo). -Lciaccio 11:02, 15 January 2008 (EST)

Demos

Who's a Rat

"This web site and the information contained within is definitely not an attempt to intimidate or 
harass informants or agents or to obstruct justice. This websites purpose is for defendants with 
few resources to investigate, gather and share information about a witness or law enforcement 
officer.  Freedom of speech , freedom of information act, and an individual's constitutional 
right to investigate his or her case protect this website. Some Information contained in this
website may not be 100  percent accurate and should be used for information / entertainment 
purposes only." 

-Lciaccio 11:05, 15 January 2008 (EST)

Gawker Stalker

  • According to the ever-reliable Page Six, and re-reported in Gawker, George Clooney and/or his publicist circulated a plan to flood GawkerStalker with "fake" sightings to render the site worthless. Lk37 01:49, 15 January 2008 (EST)

RapLeaf

  • At the risk of sounding paranoid, I wonder if simply searching for my reputation via my email address submits my email to some database that RapLeaf proceeds to spam? Take a look at their own Privacy Policy.
"We may send information to members and nonmembers by email about our services, including special alerts, service-related announcements, offers, awards, surveys, contests, promotions and updates. Members and nonmembers will be given the option not to receive these types of communications by clicking on the unsubscribe link in the footer of any Rapleaf email." (emphasis added)
What a drag. Jumpingdeeps 21:09, 14 January 2008 (EST)