Class 2

From Identifying Difficult Problems in Cyberlaw
'''1. The Right to Speak Anonymously'''
* It would seem that the easiest way to impose IRL ethics/morality in online spaces is to make our online identities tied more closely to our 'real' identities. But at the extreme, with everyone having a single, unique online identity tied to something like Social Security numbers, we would be sacrificing our right to speak and act anonymously online. Is there a happy medium?
**This may prevent multiple people from speaking under one name (e.g., the NYU students who collectively operated Mr. Bungle in Julian Dibbell's "A Rape in Cyberspace"). How would this work with Twitter accounts maintained by multiple people or by corporations?
** Perhaps a happy medium could exist if there were a way to impose social norms on pseudonymous users without revealing their real-space identity or requiring a unique portable identity. If users are required to use the same pseudonym on a particular site, their reputation on that site is tied to their pseudonym. If the website compiled a list of all of that user's activities and the reviews the user receives from other users, his reputation becomes tangible. Think eBay. Users on eBay do not often use their real identities, but they develop a palpable reputation through their interactions with other users. They are reviewed after each transaction. Though some have argued that eBay is unique because it is a commercial site, the reputation-building mechanism is no longer a product of just eBay. As someone posted below, Xbox Live now has its own mechanism for reputation building through user-provided feedback. Other users can access this information and decide whether they want to play with a random user name they cannot identify. If other websites and online social networks required users to have a single pseudonym and put an emphasis on reputation building on the space, the same norms could develop. ---[[User:Alissa.d.R.|Alissa.d.R.]] 00:38, 27 September 2010 (UTC)
*** A way of enforcing those norms is by putting individuals who violate community standards on a digital scaffold. Virtual worlds have attempted to curtail undesirable behavior in their communities by punishing users in a visible fashion. For example, one virtual world put users who violated its terms and conditions in a virtual jail. Other worlds, such as Second Life, have created a quasi-board of users that reviews violations and makes a determination. Complaints are publicly displayed on a related site, along with the way in which they were addressed. The mere appearance that someone on the space cares about violations, and the fact that violations do not go unnoticed, enhances compliance with norms. --[[User:Alissa.d.R.|Alissa.d.R.]] 00:39, 27 September 2010 (UTC)
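The eBay-style mechanism discussed above, in which reviews accumulate against a single per-site pseudonym, can be sketched in a few lines. This is a toy illustration of the idea, not any site's actual system; the class and method names are hypothetical.

```python
class Reputation:
    """Per-site reputation ledger keyed by pseudonym.

    A toy model of the eBay-style mechanism: users are reviewed after
    each interaction, and the running record is what gives a pseudonym
    a palpable reputation on that one site.
    """

    def __init__(self):
        self._feedback = {}  # pseudonym -> list of (rating, comment) pairs

    def leave_feedback(self, pseudonym, rating, comment=""):
        """Record a +1 (positive) or -1 (negative) review."""
        if rating not in (1, -1):
            raise ValueError("rating must be +1 or -1")
        self._feedback.setdefault(pseudonym, []).append((rating, comment))

    def score(self, pseudonym):
        """Net score: positives minus negatives (zero if never reviewed)."""
        return sum(rating for rating, _ in self._feedback.get(pseudonym, []))

    def history(self, pseudonym):
        """The full record other users could inspect before interacting."""
        return list(self._feedback.get(pseudonym, []))
```

Note that nothing here ties the pseudonym to a real-space identity: the deterrent comes entirely from the sunk cost of a reputation that cannot be ported to a fresh name.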
 
''A. Cyber-bullying and Cyber-rape?''
 
*Generally: should people be held accountable for online behavior that causes real world effects?  These effects are typically emotional harm, but victims can also be so affected that they cause physical harm to themselves.
*It's likely, with cyberbullying for example, that the kids being bullied know exactly who their antagonists are, meaning that anonymity is not at the heart of the problem. So what is? Is it a lack of consequences? Or consequences that, because they are in 'real' life, are insufficiently tied to their online behavior?
** Perhaps we should split it into two categories -- "real name" bullying and "masked" bullying. In both of these categories, the lack of real world (adult) supervision or serious consequences (so far) could be fueling the bullying.
***'''"Real name" bullying''' usually takes place on sites that encourage users to associate their online and real world identities.
****E.g. sites like Facebook and MySpace (the Meier case notwithstanding) and instant messaging systems (AIM, gchat).  Bullying on these sites is often highly concentrated among adolescents who have migrated traditional bullying onto the Internet.
****Because the real world identities of these bullies are often easily identifiable, some mode of authoritative supervision could calm this type of bullying.
****(Zara) The recent suicide of Tyler Clementi illustrates that online harassment can also occur when someone's personal information is publicized to the world without their knowledge or consent, not just through private, individually targeted chats. Public humiliation also has its real-world bullying counterpart, but the opportunity for bullies to broadcast humiliating information to millions of a victim's peers magnifies the problem. Someone pointed out that many issues in cyberlaw are centered around issues of access. Although issues of access and control are present in any discussion of the bundle of rights in real property, like issues in crowdsourcing, these issues are particularly salient in the Internet domain. It has been argued that the crimes of harassment and invasion of privacy are adequate to protect against cyber-bullying, but it seems to me that statutes specifically tailored to punish cyber-bullying help reinforce the message that this kind of conduct is morally and legally unacceptable, which in turn helps bolster educational campaigns against it. The bully's level of culpability for unintended effects is also an issue, as the public demands criminal punishment in these horrific cases. [http://www.nytimes.com/2010/10/03/weekinreview/03schwartz.html?scp=1&sq=suicide%20gay&st=cse]
*****In some cases, however, such as the highly publicized [http://en.wikipedia.org/wiki/Megan_Meier Megan Meier case], the cyberbully was unknown. In fact, in that case, the "real world" bully was unknown in large part because the profile was of a fictitious person created to mask the real identity of the bullies.
*****What if someone has an online persona that looks like it may be real-world, but is really an elaborate satire?  (e.g. Stephen Colbert’s Twitter account)  If one of those accounts was to satirically “bully” someone, and they didn’t get the joke, how would that fit in here?
****** This might fall under the traditional reasonable person standard. If a reasonable person would have known it was satire, then that would be enough. Still, one can imagine a scenario where, despite knowing that it is satirical, the "bullying" conduct still causes the harms sought to be avoided.
***On the other hand, '''"masked" bullying''' occurs when bullies are anonymous posters who usually hide behind pseudonyms not connected to their legal names.
****E.g. JuicyCampus and CollegeACB (see http://www.collegeacb.com/ and a corresponding article: http://ksusentinel.com/arts-living/students-become-source-of-anonymous-bullying/ ), or message boards like World of Warcraft and LambdaMOO.
****An ongoing case of "masked" bullying is the trial of William Melchert-Dinkel, who has been criminally charged with two counts of assisting suicide: using a female alias on a suicide-related message board, he allegedly convinced several people to kill themselves. He confessed to police but now claims he did not know that encouraging suicide was unlawful. [http://www.nytimes.com/2010/05/14/us/14suicide.html?_r=1&hp New York Times article]. Is the language Melchert-Dinkel used in online forums encouraging suicide protected by the First Amendment?
****World of Warcraft Real Names Controversy
*****[http://blogs.cisco.com/security/comments/blizzard_real_id_privacy_concerns/ Cisco article] describing Blizzard's proposed changes to its message board identification system.
*****[http://forums.worldofwarcraft.com/thread.html?topicId=25968987278&sid=1 Blizzard's response] to the controversy the proposal created. In this message board post, Blizzard gives up its plan.
*****Blizzard's proposal was tied to its [http://us.battle.net/en/realid/faq Real ID] plan, which is an interesting way to bring real-world identity to the online realm in an unobtrusive manner.
****Here, it may be harder to have any notion of supervision, because the sites themselves facilitate and '''promote anonymity and likely will invoke free speech claims''' to avoid working with regulators. -- JPaul (Jenny)
*** Can we distinguish sites that encourage anonymous posting because they want a wide variety of opinions and to promote general speech from sites like JuicyCampus, which used and assured anonymity to elicit defamatory speech from its users? The Communications Decency Act protects ISPs from suits over content posted by third parties on their sites, but if they are eliciting a certain type of speech, shouldn't they lose their immunity? --[[User:Alissa.d.R.|Alissa.d.R.]] 01:38, 27 September 2010 (UTC)
* Concerning cyber-rape, there are already video games that have been sold such as Japan's controversial [http://articles.cnn.com/2010-03-30/world/japan.video.game.rape_1_game-teenage-girl-japanese-government?_s=PM:WORLD Rape-Lay], where the object of the game is to sexually assault women in various ways. While the game does not have an online component, increasingly video games are being sold with such online interactivity. Moreover, cyber-rape has apparently [http://www.wired.com/culture/lifestyle/commentary/sexdrive/2007/05/sexdrive_0504 occurred] in online community games such as Second Life.
 
''B. Free Speech''
 
* Is the Right to Speak Anonymously (or even the Right to Freedom of Expression?) harmed by forcing online users to sign in with "real identities" (achieved by requiring either verified credit-card billing names or, in less strict cases, Facebook Connect) to leave comments on newspaper websites, rather than leaving them anonymously as was common practice in the early days of the Internet?
**Difficult to analogize to traditional 'letters to the editor' in print editions because it was much more difficult - if not impossible - to conduct a search of all the comments a given person had submitted to newspapers.  Today, however, such a search would allow anyone to quickly pull together a portfolio of comments left by a given person across multiple publications.
*** Humorous example of this: [http://www.mcsweeneys.net/links/commenter/ "Get to Know an Internet Commentator" by Kevin Collier.]
**If there is a right to speak anonymously, what are its limits? Are third parties required to reveal real identities when necessary to name a party to a lawsuit? What showing of harm to another person might be required? Under what circumstances are subpoenas of Internet service providers legal? It seems to me that the First Amendment justifications for anonymity are lacking in cases that cause harm to others, like cyberbullying, such that any right is extinguished. How absolute is this right?
 
''C. What about people who try to keep RL and Internet personas separate?''
 
*While it is true that online identities have been increasingly merging with real-world identities through the use of real names, photographs, etc. on Facebook and similar sites, this is not always the case.  Some Internet users seek to keep their real-world actions (part of their real-world identity) off the Internet.
**E.g. On WoW and XBOX Live (among others), some people seek to maintain fictitious or anonymous online identities.
*A recent Supreme Court decision, [http://www.supremecourt.gov/opinions/09pdf/09-559.pdf ''Doe v. Reed''], however, considered what happens when real-world acts are publicized, and, importantly for our purposes, publicized on the Internet. In that case, petition signers hoping to repeal a Washington state law that granted same-sex couples rights akin to marriage sought to prohibit others from gaining access to the petitions under the First Amendment. Opponents of the petitions intended to put the names and addresses on the petitions online in a searchable format. The Supreme Court held that '''public disclosure of referendum petition signers and their addresses''', on its face, '''did not present a First Amendment violation'''.
 
 
'''2. What is the effectiveness of a single, unique online identity?'''
 
*'''''Possible case study: Microsoft's XBOX Live online service'''''
**Microsoft's XBOX Live services assign users a unique online username that identifies the user across all the games and services offered by XBOX Live. This unified identity allows a user to easily maintain relationships with other users, compare past accomplishments and activities, and effectively establish an online community.  
***''Architecture'': Microsoft has built in the rating system listed above. As a closed platform, Microsoft also has the ability to ban a user from online activities. This would force the user to purchase another account and would prevent that user from associating himself or herself with past accomplishments and reputation. Given this ominous power, one should be extra careful not to do anything to warrant banning.
***''Laws'': nothing outside of normal tort laws
**Result: this requires more research and testing. However, common wisdom (note: this is from my own personal experience as someone who has played online and has read many opinions about the service) is that communication on XBOX Live is a morass of racist, sexist, and violent comments. Many individuals refuse to communicate online anymore. Despite all of Microsoft's safeguards, there is no effective deterrent to this type of behavior.
*Are entities like Microsoft hampered by the fact that these online identities, in a sense, don't matter? If I have a unique XBOX Live identity, how am I harmed outside of the XBOX Live community if I act poorly online and am banned? This won't harm my relationships outside of this online realm or ability to get jobs.
 
''A. Do we need to have more "real world" ramifications for online activity? ''
 
*For example, if law firms required me to list my XBOX Live account name, my Facebook account URL, and my Twitter name (and required me to make all of them public), that would greatly change how I act online. Is the notion of an online identity affected by OCS telling all students seeking law firm positions to make their Facebook profiles as secret as possible?
** Along the same lines, is it going to be possible for future generations to truly keep their private and professional lives separate, or has their use of the Internet from an early age necessarily made that nearly impossible? Can employers justify hiring or promotion decisions based on observations made outside of the workplace that have been gathered online? --[[User:Alissa.d.R.|Alissa.d.R.]] 01:38, 27 September 2010 (UTC)
 
''B. How does requiring a persistent identity mesh with policies behind law? ''
 
*Minors may have the opportunity to expunge or seal criminal records under the concept of learning from youthful mistakes. However, this is a system completely controlled by the government. Given the nature of the Internet, it may be impossible to offer a similar service, as websites could continue to cite the damages caused by an online user who is easily traceable to a real-world individual.
*If we could expunge a minor's record, do we want to? For security reasons, we may not want minors to be identified as such online. Therefore, people will treat an online identity that belongs to a minor as if it belonged to an adult. Similar to a minor being held to adult standards when participating in adult activities under tort law, should we hold minors to adult standards if they are perceived as adults in the online realm (which would include not allowing their records to be erased)?
**Google CEO Eric Schmidt predicts, according to an August 4, 2010 interview in the WSJ, that "every young person one day will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends' social media sites."   
**However, allowing someone upon turning 18 to disown "youthful hijinks" promotes a culture that separates consequences from actions. Instead of eliminating the past, why don't we provide it with more context?
***As a proposal, why don't we use the architecture/law prongs of the Lessig test to create a structure in which the online activities of an individual, from his first entry into the online world to his last, are stored on a server (this is extremely Big Brother-ish, but let's just play this out). The user can establish as many identities online as he likes to represent himself to other users, but all of these identities are connected to the user's real-world identity. All of the actions a user takes as a minor are branded as the actions of a minor. The way we would then use this information would be similar to a background or credit check. Employers looking to hire the user can request a report on his Internet activity. They would then receive a report that details his actions. This could be exhaustive, a general overview, or limited to actions others have complained about. The report would indicate what the user did and when in his lifetime he did it.
***Therefore, if the user did something embarrassing or bad, this report will provide more context than a mere Google search.
*Should the notion of an online identity include search results? If so, how should this information be controlled? Wikis indicate when an entry is in dispute. Could search engines provide a similar service for websites attacking individuals? Is it technically possible to screen for negative publicity and separate out these websites in a search?
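The background-check-style report proposed above can be made concrete with a small sketch. This is a toy model under stated assumptions: an age-18 adulthood threshold, a rough year calculation, and hypothetical field and function names.

```python
from dataclasses import dataclass
from datetime import date

ADULT_AGE = 18  # assumed threshold for branding an action as a minor's

@dataclass
class LogEntry:
    when: date           # when the action happened
    action: str          # what the user did
    complaints: int = 0  # how many other users complained about it

def activity_report(birthdate, log, complaints_only=False):
    """Render each logged action with its minor/adult context, oldest
    first, the way the proposed background-check-style report would."""
    lines = []
    for entry in sorted(log, key=lambda e: e.when):
        if complaints_only and entry.complaints == 0:
            continue
        age = (entry.when - birthdate).days // 365  # rough age in years
        label = "minor" if age < ADULT_AGE else "adult"
        lines.append(f"{entry.when.isoformat()} [{label}] {entry.action}")
    return lines
```

The point of the sketch is the labeling: unlike a bare Google search, every entry carries the age context in which the act occurred, and the `complaints_only` flag corresponds to the "just if others have complained" level of disclosure.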
 
''D. Reputation Bankruptcy?''
 
*In an [http://futureoftheinternet.org/reputation-bankruptcy article] on his blog, Professor Zittrain discusses the concept of reputation bankruptcy.
**I find the tie between the solution to identity issues and bankruptcy troubling. First, one of the major problems with online reputation is often not a completely toxic reputation, but a single occasion of poor discretion or the malicious agenda of a few individuals.
***E.g. In the [http://www.washingtonpost.com/wp-dyn/content/article/2007/03/06/AR2007030602705.html AutoAdmit] case, users of a law school message board maliciously attacked a Yale Law student. These negative posts were among the first page of hits in a Google search of the student's name. The student received no offers from law firms and, although no causal connection was proved, these negative posts could have been a reason why. In cases like these, individuals need selective erasure of third-party attacks, or of a past mistake, that unfairly restrict a promising future. However, Professor Zittrain suggests that reputation bankruptcy could involve wiping everything clean, "throwing out the good along with the bad." The fresh start of bankruptcy is a dramatic change and should be reserved for dire situations. For many cases, it may be a matter of using a sledgehammer to kill a fly.
**Another concern is similar to the one I raised about the idea of "disown[ing] youthful hijinks." If someone can declare reputation bankruptcy, that individual is saved, but the communities he or she was a part of are still scarred by that individual's actions. If, in the aforementioned XBOX Live case study, a user could declare reputation bankruptcy, that would do nothing to help XBOX Live's negative reputation and people's unwillingness to engage in that community. Under bankruptcy, creditors are able to receive some amount of compensation for the debts owed to them. What kind of compensation can online communities receive in reputation bankruptcy?
*How do we teach youths just entering the online world how to interact with it and maintain a praiseworthy identity? [WSJ
 
 
'''3(a). Facebook Profile Portability'''
 
Let's do more research into data portability as a privacy policy, which relates to the above. Facebook could be a good case study. What options and protections are there to port an online identity/profile, i.e. Facebook messages, friend listings, and wall postings? What can and cannot be permanently deleted?
* It has been argued that Facebook has created a "semipublic" shared space for exchange of information. [http://www.nytimes.com/2009/02/19/technology/internet/19facebook.html?_r=1 NYT] If I send a private message or make a post on my wall, such information would be owned by me. But what about posts made by other people on my wall, or pictures and video I have been tagged in?
** What happens to this shared data if I close my account?
** Should information uploaded by other people become part of my online identity? If I look at my Facebook wall I can see that only a minor part of it is made of my own contribution. Would my online identity be the same without other people's contributions?
** And what about my posts on other people's walls? Are those part of my online identity? If we own our personal information, should we also own our posts on other people's walls?
** Let's assume we have complete portability of our online identity, including material submitted by third parties: what are the privacy implications of this from a third party's perspective? Are we OK with the third party's posts and tagged pictures being transferred? Should the third party be notified? Should the third party give express consent?
* (From Erin:) I know that both [http://www.readwriteweb.com/archives/goog-fb-data.php Facebook and Google] joined [http://dataportability.org/ DataPortability.org]. I think that a lot of blogging software (LiveJournal, TypePad, WordPress, Blogger, etc.) has been at the forefront of allowing pretty easy portability between systems. So Facebook is probably the more interesting case study in terms of a "difficult problem", particularly as it relates to third-party information, but it may make sense to look at these as a start (esp. the analogy to third-party comments on your blog)...
 
 
'''3(b). General Online Identity Portability'''
 
While 3(a) focuses on Facebook, how about tackling the more general question about our online identities:
* How should online identity be regulated?
** Lessig's framework, incl. benefit analysis of potential laws, norms, market mechanisms?
*** re laws: three privacy laws passed at the state level; a federal privacy law potentially coming in the next Congress - what will (should?) it include regarding online identity?
*** re norms: self-regulation; could talk about OAuth and OpenID here and what role they could play
*** re marketplace: Google's leaked internal communication from 2008 about creating a marketplace for privacy/personal data. See: [[http://bit.ly/crHT3J]]
* What frameworks / initiatives currently exist? Who has, or should have, control - government vs. the private sector (could Facebook be ''the'' personal online ID provider)?
** NSTIC (National Strategy for Trusted Identities in Cyberspace) [[http://www.nstic.ideascale.com/]], first draft strategy document published in June 2010, available for download on homepage
*** Goal of initiative: Identify solutions ensuring (1) Privacy, (2) Security, (3) Interoperability, (4) Ease-of-Use
*** Key roles and perspectives to analyze: Individuals, Identity Providers, Attribute Providers, Relying Party
***(Source: NSTIC Presentation, Trusted Identities Panel, OTA Online Security and Cybersecurity Forum, Washington D.C., September 24, 2010) --[[User:Reinsberg|Reinsberg]] 19:33, 26 September 2010 (UTC)
***Vision proposed by NSTIC: "An individual voluntarily requests a smart identity card from her home state. The individual chooses to use the card to authenticate herself for a variety of online services, including: Credit card purchases; Online banking; Accessing electronic health care records; Securely accessing her personal laptop computer; Anonymously posting blog entries; and Logging onto Internet email services using a pseudonym."
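One way an identity provider could support the pseudonymous uses in this vision, while still vouching for a single real person, is to derive a different stable identifier for each relying party, so two services cannot correlate the same user. Below is a minimal sketch of such pairwise pseudonyms using HMAC (similar in spirit to the pairwise subject identifiers later standardized in OpenID Connect); the function name and the 16-character truncation are illustrative choices, not any standard's specification.

```python
import hashlib
import hmac

def pairwise_pseudonym(idp_secret, user_id, relying_party):
    """Derive a stable per-site pseudonym for one user.

    The same user always gets the same identifier at the same site, so
    reputation can accumulate there, but different sites receive
    unlinkable values and cannot correlate the user without the
    identity provider's secret key.
    """
    message = f"{user_id}|{relying_party}".encode()
    return hmac.new(idp_secret, message, hashlib.sha256).hexdigest()[:16]
```

The design choice matters for the policy debate above: the provider can still unmask a user in response to legal process (it holds the key), yet everyday observers, and the relying parties themselves, see only an opaque pseudonym.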
 
 
'''4. Applying Privacy Policies Worldwide'''
 
What are the challenges social networks face at the international level and in countries other than the US?
* Are privacy policies adopted by social networks enforceable everywhere?
** Consider Facebook approach: Facebook adheres to the Safe Harbor framework developed between the US and EU as regards the processing of users' personal information. [http://www.export.gov/safeharbor/eu/eg_main_018476.asp Safe Harbor] Is this enough to shield Facebook from privacy claims coming from outside the US? What about countries outside the EU?
** Should Facebook be concerned at all about its international responsibility? Consider the case of the Google executives convicted in Italy for a breach of privacy legislation. Assuming the conviction is upheld on appeal, can it ever be enforced? Where are the offices of the company? Where are the servers? Where are the data actually stored and processed?
* More generally, what types of information created by users are 'personal data' about which they have (or should have) a reasonable expectation of privacy, and which should be subject to regulation?
**The line between personal information about which people have a reasonable expectation of privacy and information that is not personal and need not carry privacy restrictions can be a difficult one to draw. For example, is information about how a driver drives a car, recorded on an in-car computer and potentially transmitted to a car rental company or the car manufacturer, 'personal' information that is (or should be) covered by data protection laws? What about information that is picked up by Google when taking images for Google Street View (e.g. IP addresses of neighbouring properties)? (See discussion in Information Commissioner’s Office (UK), Statement on Google Street View, August 2010.) The problem is that in many cases this information on its own does not identify a particular individual, but it could be used in combination with other information to identify people. Yet so much information is created when we use the Internet that not all of it can sensibly be subject to privacy regulation. See discussion of this problem in a New Zealand context in Review of the Privacy Act 1993, NZLC 17, Wellington, 2010 (Australia and the UK are considering similar issues).
*Who/what are the main sources of privacy invasion that people anticipate? Is it the same privacy invasion if it comes from a company (Google), a government looking for potential terrorist activity, or just an acquaintance who likes to Facebook-stalk others?
**Do we hold private companies to a higher standard if we know that they have the means to protect our privacy more? Should such companies be held to the same standard in every country? If not, aren't there problems with information that is accessible in some countries but not in others?
'''5. Cyber-security'''
* Cyber-space was first used by script-kiddies as a playground for web defacement and the like, then discovered by criminals as a new means to expand their activity, followed by transnational crime syndicates, then by hackers with a political agenda ("hacktivists"), until eventually governments too discovered cyber-space. Since the DDoS attacks on Estonian websites in 2007 pushed the issue into NATO circles, cyber-security has been increasingly in the headlines. A number of questions emerge from this:
* '''Real threat vs. threat inflation'''. How much of what is written in newspaper articles and books is much ado about nothing, and what can be considered a real risk? If there is a risk, is there also a threat? What determines what constitutes a threat? Richard Clarke's book "Cyber War" paints a gloomy picture. Self-interest from an author working as a cyber-security consultant, or is there more to it?
* '''Cyber-crime <-> cyber-espionage <-> cyber-hacktivism <-> cyber-terrorism <-> cyber-war''' (cyber-intrastate war/cyber-interstate war). Costs today? Costs tomorrow? Technical solutions? Policy/legal solutions? National/international level? State vs non-state actors? Public/private?
* '''Cyber-war vs. cyber-peace'''. Why does much of the literature use language such as "cyber-war" and "cyber-attack" rather than language such as "cyber-peace" or "cyber-cooperation"?
* '''Terminology'''. What is the difference between a cyber-hacktivist and a cyber-terrorist? What constitutes a "cyber-attack"? Given cyber-space's virtual borderlessness, is it appropriate to speak of defense/offense or active/passive (e.g. the Outer Space convention)? Is cyber-space a territory like the High Seas, Antarctica or Outer Space? Or a new domain after land (army), sea (navy) and air (air force)? Is cyberspace a "common heritage of mankind"? Relationship between virtual and kinetic.
* '''Civilian vs military'''. How is cyber-security changing the relationship between civilian and military? The DoD is responsible for defending .mil, DHS for defending .gov. What about the other domains? The German DoD is responsible for defending the German military network, the Ministry of the Interior for the government websites. How do civilian Ministries of the Interior, with their police forces, respond to a cyber-attack from outside the country, when an international attack would usually be the responsibility of the military branch of a democratic government? What are the lines of authority, e.g. for the planting of logic bombs or trapdoors?
**What would the authority of the military be in addressing attacks on civilian networks, if any? Does the government have a role or responsibility to address non-government networks? Structurally and legally, how would implementing this role be done? Would there be any problems of privacy protection and government over-interference?
*If the government is going to take a role in strengthening private network security, which networks should it protect first? Who should be involved in the oversight of this protection--military, civilian gov. actors, private actors?
**Would these actions fall under the Cyber Command?
** The New York Times reports that "the new commander of the military’s cyberwarfare operations is advocating the creation of a separate, secure computer network to protect civilian government agencies and critical industries like the nation’s power grid against attacks mounted over the Internet." [http://www.nytimes.com/2010/09/24/us/24cyber.html?scp=4&sq=department%20of%20defense&st=cse]
*** Are "secure zones" a viable solution to protecting critical infrastructure? Is this an oversimplified vision of secure systems that assumes cybersecurity is analogous to real space? What are the drawbacks to this approach? The article notes that the cyberwarfare commander did not demarcate the line between public and restricted government access.
* '''Role of private actors'''. How are ISPs, hardware and software companies integrated into the discussions/policy-/law-making process? How much power do they have? Allegiance to profit? Allegiance to country? Allegiance to open cyber-space? Are there public private partnerships? Do they work? What are their strengths/weaknesses?
* '''Role of hackers'''. In the early days, the battle was government vs. hacker, or state vs. hacker guided by a hacker ethic. This was before the internet expanded around the globe, and in the Western tradition of state vs. individual. After the expansion, how has this relationship changed? Is there a transnational hacker culture, or are the hackers of country X more closely aligned with the government of country X, and the hackers of country Y with the government of country Y, rather than the hackers of X and Y aligning against the governments of X and Y?
* With the attribution problem and the transition problem (virtual-physical world) '''how much security is necessary and how much generativity possible'''? What can be done to reduce the risk? What can be done to reduce the threat? International convention? Code of conduct among major companies? International confidence-building measures?
* '''Enforcement'''. What could an international regime/agency that solves the security dilemma look like? A cyber-IAEA? Or could an existing regime (such as NATO) be more effective?
**What responsibility would countries have for hackers / attacks originating from their own countries? How could one separate attacks from a private individual (that a country could disaffirm responsibility from) and from a government-sponsored initiative?
**What are the main sources of the threats that the US and other countries are anticipating? Are they state-based? How, if at all, would this affect our relationships and foreign relations with those states?
**What kind of retaliation would be appropriate, once an attack has been discovered? If necessary, would a country engage in counter-cyber attacks, or more traditional retaliation such as economic sanctions or even military action?
**Does the US have special responsibilities for a global safe and free Internet? Should it take the lead in preventing attacks in other countries that are less equipped to protect themselves?


== '''Group Two:''' ==
-first amendment rights with online comms (going through someone’s infrastructure)


'''1. Death and digital property'''
* What happens to your virtual possessions when you die? Who owns your various accounts (Facebook, Google, WoW, etc)? Do the terms of service, unread by everyone, dictate post-mortem property rights? Would the ToS be enforceable if an heir brings an action to recover a deceased's virtual possessions? Should companies be required to maintain a user's data (and for how long?) after they die until an heir comes forward to claim or discard it?
* At the moment most virtual possessions are purely sentimental in value. But we can imagine a time in the not-too-distant future in which a virtual possession has significant real-world value. For example, someone with a large and devoted Twitter following could monetize their account by licensing its usage to an advertiser. Suppose someone, let's call her Martha S., decides to use her Twitter feed as a source of revenue in this manner and develops it into a brand. Upon Martha's death, what rights do her heirs have relative to Twitter as to the use of her brand? Furthermore, if Martha had entered into long-term licensing deals regarding her Twitter brand and dies suddenly, where does that leave her licensees? Certainly these issues will only get more complicated as the Internet becomes more complex and the value of virtual possessions increases.
* The fundamental question is: who owns property that is created by a user in a private party's virtual sandbox? The user? The company? Or should there be some sort of co-ownership/co-authorship?
*[http://arstechnica.com/tech-policy/news/2010/03/death-and-social-media-what-happens-to-your-life-online.ars Ars Technica] article on how Facebook, MySpace, Twitter, and Google subsidiaries treat death.
*  Whole set of websites attempting to address this issue, and allow some users autonomy to make decisions about what happens to their accounts after they are unable to manage them anymore:
**  [http://legacylocker.com/ Legacy Locker] "is a safe, secure repository for your vital digital property that lets you grant access to online assets for friends and loved ones in the event of loss, death, or disability."
** [http://www.mywebwill.com My Webwill] "allows you to make decisions about your online life after death. You can choose to deactivate, change or transfer your accounts, like Twitter, Facebook or your blog. At the time of your death we perform your wishes."
** But given the transience and uncertain future of so many of these companies, can someone really interested in this trust that the sites will still be around?  Do you need to keep updating your arrangements for every new service you sign up for, which over the course of several years will presumably include several different sites?  Should the solution be more centralized (say, a service provided by the government, which has a feeling of more permanence) or more decentralized (say, a last will and testament that you can basically draft on your own)?
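The service these sites describe reduces to a simple data structure: per-account instructions plus a release gate that an executor can open only after verification. The sketch below is entirely hypothetical (field names, the verification step, and "Martha S." are invented), not how Legacy Locker or My Webwill actually work:

```python
from dataclasses import dataclass, field

@dataclass
class AccountBequest:
    service: str           # e.g. "Twitter", "Facebook", a blog
    action: str            # "transfer", "deactivate", or "memorialize"
    beneficiary: str = ""  # who receives access, if transferred

@dataclass
class DigitalWill:
    owner: str
    bequests: list = field(default_factory=list)
    released: bool = False

    def release(self, executor_verified: bool):
        """Hand the bequest list to the executor, but only after the
        service has verified the death (e.g. via a death certificate)."""
        if not executor_verified:
            raise PermissionError("release requires executor verification")
        self.released = True
        return self.bequests

# A hypothetical user records her wishes per account.
will = DigitalWill("Martha S.")
will.bequests.append(AccountBequest("Twitter", "transfer", "heir@example.com"))
will.bequests.append(AccountBequest("Facebook", "memorialize"))
```

Even this toy version surfaces the legal questions above: nothing in the data structure makes it binding on Twitter or Facebook, whose terms of service may simply override the user's recorded wishes.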
'''2. Other disconnected thoughts on digital property and avatars'''
* Nature of Property Rights
** Related to the above discussion of Martha S., can a user license the use/image of their account/avatar to third parties? Does the website own the avatar? Do users have a right to publicity for the use of their account or avatar? I.e. can Second Life take John Doe's avatar and use it to sell virtual Sham-Wows without Doe's consent?
**Can you have a copyright in your avatar? In many instances they could count as pictorial, graphic or sculptural works. Is/should the copyright be shared with the owner of the software that creates the virtual world? How should these rights be balanced?
** One objection to copyrighting an avatar might be that the user is simply recombining various elements which are each already part of a copyrighted work (the software) created by the company. However, at best this merely makes the avatar a derivative work. Think of a mobile made from random junk, or a scrapbook/collage created from photos and magazine clippings. Just because a user's medium and materials are limited by the confines of the software does not mean that the user's avatar has insufficient originality of expression.
** When affording property rights, intellectual or otherwise, to users, we must not forget to account for the economic incentives this will create for the companies that create the software that enables the users in the first place. If every user has copyrights and other rights relating to their accounts and avatars, there could potentially be a negative impact on the innovation and creativity of the software companies. Is it worth it to Blizzard to deal with all the legal headaches associated with such rights? Maybe. Is it worth it to some capital-poor start-up company with the Next Big Thing? Maybe not.
*** Could this all be side-stepped with ToS and EULAs that assign rights to the company? Could this assignment be later revoked by the user? Is any of it enforceable?
* Enforcement Issues
** Do or should users have some sort of recourse for avatar defamation?
** If they fall under copyright law, could avatars be seen as fundamentally useful articles and thus not protected? This would obviously require a case by case analysis ("No, Bob's character is not protectable simply because he carries an ornate giant broadsword; that sword is necessary for killing orcs").
** How do we measure the value of virtual property across multiple platforms? Are there enforcement rights for invasion of my Second Life property if that invasion takes place on Facebook?
** In a broader sense, how is virtual property tracked, and how does it evolve? At least one company has had the idea of creating some kind of central registrar: [http://digitalpropertyregistry.org/ the rather unimpressive Digital Property Registry].
'''3. Speech and Censorship'''
* ''Speech, Censorship, Statistics.'' Should we be concerned with ISPs' and website owners' ability to aggregate and control information and speech? It seems that at least Google thinks that Internet users may be concerned with this topic. Google recently announced the "Transparency Report," which (incompletely) tracks usage statistics by country, as well as Google's removal of online material at governments' request.[http://www.google.com/transparencyreport/ Google] How should corporations manage such governmental requests? What rules should they apply? How should they decide on a set of rules, and whether those rules are universal or case-specific? What benefits are realized by providing this information publicly, particularly the tracking information? How can users or other entities use this information?
**Should Google be required to censor search results when they are manipulated for harm by users?  The Superior Court of Paris thought so [http://www.telegraph.co.uk/technology/google/8027967/Google-convicted-of-defaming-French-user-by-linking-his-name-to-rape-in-searches.html in this ruling].
* When using a digital soapbox created and hosted by a private party, what First Amendment speech rights does a user have? There may be a necessity to publicize a message using privately owned infrastructure. Does the owner of this infrastructure have a right to regulate the content of the messages being sent or discriminate between users on the basis of their messages' content? If the Internet is the new town square, then shouldn't everyone have the right to yell whatever nonsense they want?


== '''Group Three:''' ==
-transparency on internet services (google: how does it work?)


'''1. Liability for Security Breaches and Flaws'''
*Software insecurity:
** Security guru Bruce Schneier has argued that imposing tort liability is desirable as a method of forcing vendors to internalize the costs of insecure software.  See [http://www.schneier.com/essay-025.html Liability Changes Everything] and [http://www.schneier.com/blog/archives/2004/11/computer_securi.html Computer Security and Liability].
***How convincing is his suggestion?  What sorts of costs would this impose on software companies?  Would such a rule drive small players out of the security market?  Would individual contributors to open source projects potentially face liability?
** Law professor Michael D. Scott makes a similar argument, and notes that Sarbanes-Oxley requires publicly traded companies to certify that their systems are secure, while imposing no obligations on the vendors who actually provide the software.  See [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1010069 Tort Liability for Vendors of Insecure Software: Has the Time Finally Come?]
*** There was a torts case, ''Kline v. 1500 Massachusetts Avenue Apartment Corp.'', which found that landlords must take reasonable security measures, such as providing working locks, to protect tenants from attacks in common areas that are primarily under their control, such as lobbies or hallways. Could this case be applied to require ISPs and companies hosting websites wherein users store their personal information to protect that information from hackers and other security breaches? --[[User:Alissa.d.R.|Alissa.d.R.]] 01:38, 27 September 2010 (UTC)
*Database insecurity:
** Summaries of a few recent cases that address database breaches: [http://www.sidley.com/files/News/97324419-8e7b-4c3b-93fa-166d4b2bafb3/Presentation/NewsAttachment/17f372d1-e3f6-4170-b914-37bb3d2d695b/PrivacyUpdate062609%25282%2529.pdf&sa=D&sntz=1&usg=AFQjCNGVbZLeoS0joQgDT7_gE5jF8w6ivg Developments in Data Breach Liability].
**Law professor Vincent R. Johnson argues that tort liability is an appropriate mechanism for creating incentives and managing risks associated with cybersecurity: [http://www.stmarytx.edu/law/pdf/Johnsoncyber.pdf Cybersecurity, Identity Theft, and the Limits of Tort Liability].  Some issues he raises:
***''Duty to protect information'': California's Security Breach Information Act imposes such a duty.  The obligations that Gramm-Leach-Bliley imposes on financial institutions arguably support liability on a theory of negligence per se.
****Can market forces adequately address insufficient database security?
***"Duty to inform of security breaches": This could be analogous to a failure to warn theory of negligence liability.
***The economic harms rule seems to impose a significant bar to recovery.  What about requiring the database-owner to pay for security monitoring?  A risk-creation theory might support this approach.
--[[Special:Contributions/98.210.154.54|98.210.154.54]] 23:13, 21 September 2010 (UTC)Davis
'''2. WikiLeaks'''
Note: It seems to me, after reading the cybersecurity entry, that the "WikiLeaks" problem could be moved under that category and used as an example of how the Internet can magnify the consequences of a data breach in the physical world. Does anyone else agree? (I'd say no.  There are other issues I've added that make it distinct.  See below. -- Austin)
*Real-world data breach: Soldier suspected of leaking classified military reports to whistleblower website WikiLeaks [http://abcnews.go.com/WN/wikileaks-case-pvt-bradley-manningss-alleged-role-leaking/story?id=11254454]
**This kind of leak is made much more likely by the growth of digital information.  Spending two minutes copying thousands of records to a CD labeled "Lady Gaga" vs. making copies of the Pentagon Papers and smuggling them out under your shirt.  Do we live in a more "leaky" age?  If Wikileaks proves capable of protecting the anonymity of its contributors, should we come to expect that any information that is sufficiently important to public discourse will eventually find its way into the wild?
***If these sorts of leaks become increasingly common, will there be a significant effect on the public's expectations as to government(s) transparency?  Will Wikileaks-like organizations become an accepted "unofficial" path to the release of information?  Or will an increased public expectation of transparency force less government secrecy (fewer documents classified, documents declassified sooner)?
***What effect will Wikileaks have on transparency in private entities with significant public impact?  Will the public or traditional press treat a leak from Monsanto the same way as a leak from the Department of Defense?
*WikiLeaks posts documents without redacting the names of Afghans who provided intelligence to the United States. [http://www.cbsnews.com/stories/2010/07/29/eveningnews/main6725935.shtml?tag=contentMain;contentBody] The Taliban said it was using the WikiLeaks site to comb for names of Afghan informants, while the traditional press/gatekeepers said they had redacted the documents they posted to avoid "jeopardizing the lives of informants." [http://thelede.blogs.nytimes.com/2010/07/30/taliban-study-wikileaks-to-hunt-informants/] It seems as if the Internet has allowed people to bypass the traditional gatekeepers -- be they the government or the press -- and that this magnified the effects of the real-world data breach.
**Wikileaks _did_ provide the Pentagon the opportunity to redact sensitive information from documents.  [http://www.newsweek.com/blogs/declassified/2010/08/20/wikileaks-lawyer-says-pentagon-has-been-given-codes-granting-access-to-unpublished-secret-documents.html]  The Pentagon refused.  To what degree should the Pentagon and other groups targeted by Wikileaks and similar organizations be willing to work with those organizations?  From the perspective of the targeted group, cooperation with such a group legitimizes it and increases its public stature.  From the perspective of the media organization, a working relationship with a targeted group may cause supporters to question the organization's independence.  Is there a balance to be struck here?  Does the formation of these relationships make "new media" organizations like Wikileaks look too similar to the "old media" organizations that depend greatly on relationships with government officials for content?
*Jurisdictional problems and prosecution: The U.S. government is able to prosecute the real-world leaker, but likely won't be able to prosecute WikiLeaks -- the organization that used the Internet to magnify the effects of the leak, etc. -- because of jurisdictional problems and because of a lack of on-point law. [http://blogs.wsj.com/law/2010/07/26/pentagon-papers-ii-on-wikileaks-and-the-first-amendment/]
--JPaul (Jenny)
*What is the role of the primary source in our media landscape?  Wikileaks hasn't quite figured this out yet.  In the Collateral Murder release, for example, Wikileaks was criticized for releasing an edited video alongside the raw footage from a helicopter attack on journalists and civilians in Iraq.  In the War Diaries release, however, Wikileaks received sharp criticism as described above.  What is the role of the "gatekeeper" in today's media?  Can primary source documents contribute effectively to discourse, or do they provide so much information that a reader cannot process it all?  Would the War Diaries release have had greater focus if the vast number of reports had been reviewed by crowdsourcing?
*How will Wikileaks shift public opinion with regard to the use of anonymous sources?  As some media critics have pointed out, traditional media sources, even while criticizing anonymous or pseudonymous bloggers, rely heavily on the statements of anonymous officials in their reporting.  [http://www.salon.com/news/opinion/glenn_greenwald/2010/07/24/anonymity]  Will there be a backlash against anonymity or will it become more widely accepted?  Is there a distinction to be made between anonymous sources and anonymous reporters?  How do trust and reputation fit into all this?  (this issue fits pretty well into Group 1's anonymity issue above)
*Can the Wikileaks model of funding (releasing a big story and then relying upon donations) be more widely applied to other forms of independent journalism?
'''Random Tangent Thought:'''
*  Can a lot of these issues be thought of as issues of access?  Cyber can be seen as basically lowering the barriers to access that exist in physical reality.
** A lot of the privacy concerns can be reframed as an issue of control over access — who has access to your facebook photos?  Your search queries?  Wikileaks documents? And for what purpose?
** But access to information is largely what makes it such a powerful communication mechanism, which ties into access on the macro instead of micro level: who (internationally, domestically, economically) has access to cyber?
** The aggregation of user information (facebook social graph, google trends, etc.) could be extremely valuable to the public, and denying anyone the ability to access it could be inhibiting the pursuit of knowledge about ourselves. 
*** But who gets to access such aggregated information?  The companies that set up the platforms through which the public expresses itself, or is it some sort of inherently-public property?
* Tying it back to Lessig, is there a way to take the ideas that we develop about access to information in the cyber world and bring them back into the physical world?
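One concrete way to frame "who gets access to aggregated information" is a minimum-group-size rule: aggregate statistics are released only when enough users contribute that no individual stands out. The threshold, data, and function below are invented for illustration; this only loosely mimics the suppression rules that real trend services apply:

```python
from collections import Counter

K = 5  # minimum group size before an aggregate is released (illustrative)

def releasable_trends(queries, k=K):
    """Return only query counts large enough that no single user is exposed.

    Rare queries (here, fewer than k occurrences) are suppressed entirely,
    trading some public knowledge for individual privacy.
    """
    counts = Counter(queries)
    return {q: n for q, n in counts.items() if n >= k}

# A toy query log: two users searched for something rare and identifying.
log = ["privacy"] * 7 + ["rare disease"] * 2 + ["cyberlaw"] * 5
trends = releasable_trends(log)  # 'rare disease' is suppressed
```

The design choice here is exactly the policy question in the bullets above: the threshold k encodes a judgment about how much aggregate value the public loses in exchange for not exposing small, identifiable groups.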


== '''Group Four:''' ==
== '''Group Four:''' ==
- '''The legal or regulatory meaning of the Net Neutrality Principle'''
* According to Lessig, there is no fixed architecture for the Internet; all architectures are "code" written by humans. The Net Neutrality principle and Prof. Zittrain's concept of a "generative Internet" are attempts to lay down fundamental values about which Internet architecture, among all possibilities, is the most desirable model that we should pursue.
* Supposing the concept of Net Neutrality were clear (which it actually is not), what practical regulatory implications should this principle bring? There are seemingly contradictory practices carried out in the name of Net Neutrality. For example, ISPs prohibit P2P file-sharing, alleging that it takes up too much bandwidth and prevents other users from using the network equally. These practical issues prompt us to rethink some fundamental problems about what neutrality means for the Internet.
* Layers of regulation that protect Net Neutrality: Are there certain applications that should be given high priority to run on the Internet, or should all applications be given the same weight? (For instance, should our policy equate the bandwidth used for email and web browsing with that used for high-resolution video or gaming?)
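The prioritization question in the last bullet can be made concrete with a toy scheduler: under a neutral (first-come, first-served) policy every application's packets leave in arrival order, while a weighted policy lets favored traffic jump the queue and makes everything else wait. The application names and priority weights here are invented for illustration; real ISP traffic management is far more complex:

```python
import heapq
from collections import deque

# Packets as (application, arrival_order) pairs.
packets = [("email", 0), ("video", 1), ("web", 2), ("video", 3)]

def fifo_schedule(pkts):
    """Neutral policy: packets leave in the order they arrived."""
    q = deque(pkts)
    return [q.popleft()[0] for _ in range(len(pkts))]

# Illustrative weights: lower number = served first.
PRIORITY = {"video": 0, "web": 1, "email": 2}

def priority_schedule(pkts):
    """Non-neutral policy: favored applications jump the queue;
    arrival order only breaks ties within a priority class."""
    heap = [(PRIORITY[app], order, app) for app, order in pkts]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(pkts))]

fifo_schedule(packets)      # ['email', 'video', 'web', 'video']
priority_schedule(packets)  # ['video', 'video', 'web', 'email']
```

The regulatory question is whether the second policy is ever legitimate (e.g. latency-sensitive video) or whether any departure from the first violates neutrality.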


-“to what extent is our judgment about tech related to the “coolness” of the tech itself?”
*** ''Profitability.'' The iPhone's closed platform provides at least two valuable and related benefits. First, it allows Apple to keep its operating environment "safe." Without unauthorized third-party applications--i.e., with all apps being Apple-approved--there is less risk of the introduction and dissemination of malware. This reduces costs for Apple, which doesn't have to respond to consumers whose phones have been destroyed by viruses. A second related benefit is branding. Because Apple can keep its system closed, it can design the environment in which it operates and market that environment as a product. This design means Apple can extract profits from third-party apps by conditioning access upon, among other things, payment. It also makes the company more profitable because Apple can advertise and promote itself as a "safe" place that operates seamlessly. Nevertheless, this raises issues about how far Apple will regulate its platform. Will it simply condition access by third-party applications, or will it go further and monitor its users? If Jobs is concerned that users will upload pornographic pictures on his phone, will the future iPhone be programmed to identify automatically and remove or block such photos? Does Jobs' vision relate to profitability, or simply personal preference? (This last question will be relevant to considering user satisfaction.)
*** ''User Satisfaction.'' For most users, the iPhone's closed platform doesn't seem to cause any immediate problems. There are plenty of cool apps that individuals can download and use. The iPhone certainly scores high on aesthetics, even if some of its features are low on performance. Users tend to love aesthetics, and have overlooked the fact that, for instance, the iPhone can run only one program at a time. The closed platform's safety also provides a benefit to users, who don't have to worry about protecting their phones from malware. So far, user satisfaction is high. The balance between user satisfaction and profitability seems to be in equipoise--for now. The question for the future is whether Apple will close off more territory, and whether its current sectioning will stifle the actions of users in the future. As to the former, Apple might meet substantial resistance from the public if it begins regulating their private behavior more explicitly. As to the latter, the future is hard to predict. If users become more adept with their phones or demand new features that the closed system stifles, Apple may have to modify just 'how' closed its system should be. Of course, it may respond by making even "cooler" design, thereby satisfying users sufficiently to distract attention from the new (or old) restrictions that remain in place. If consumers detect that Jobs' personal preferences are dictating the ways they can use their phones, their dissatisfaction may win the day.
**[''Please add another example.'']
**''Pandora Hour Limits.'' Pandora's 40-hour-per-month limit on free listening has cut into the satisfaction of avid users.
*** ''Profitability.'' A freemium revenue model works only if users have an incentive to go premium beyond the mere absence of ads; the hour cap supplies that incentive. Many users remain satisfied because of the loophole of simply creating new accounts, though that workaround is itself a process consumers dislike.
*** ''User Satisfaction.'' The limit often goes unnoticed and causes no immediate problems. There are plenty of ways users have gotten around the restriction, especially by creating a new account, which requires only an email address.
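The mechanics of such a cap are easy to sketch. Below is a minimal, hypothetical model in Python — the class name, method names, and numbers are illustrative, not Pandora's actual implementation:

```python
class FreeTierMeter:
    """Tracks a free account's listening time against a monthly cap."""

    def __init__(self, cap_hours=40.0):
        self.cap_seconds = cap_hours * 3600
        self.used_seconds = 0.0

    def record(self, seconds):
        """Log listening time; returns False once the cap is exhausted."""
        if self.used_seconds >= self.cap_seconds:
            return False  # playback blocked until upgrade or next month
        self.used_seconds += seconds
        return True

    def reset_month(self):
        """Called at the start of each billing month."""
        self.used_seconds = 0.0


meter = FreeTierMeter(cap_hours=40)
meter.record(39.5 * 3600)  # 39.5 hours of listening so far
print(meter.record(3600))  # True: this session starts while still under the cap
print(meter.record(60))    # False: cap exhausted
```

Because the meter is scoped to the account, a fresh account gets a fresh meter — which is exactly why the new-account loophole works.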
 
'''Lack of Humans in Online Transactions'''
 
The process of purchasing something online has become almost too easy for users, and it is generally irreversible, especially on a website like eBay. Amazon introduced one-click purchasing, where, with a single click, your credit card is charged and the item is shipped. There is no human contact on the receiving end of a transaction, leading to a significant amount of error and unintended expenditure. More human contact is needed, and the process needs to be slowed down to ensure privacy and accuracy.
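One way to "slow the process down" without reinstating a human at every step is a cancellation window: the charge is finalized only after a grace period during which the buyer can undo the order. A hypothetical sketch — real checkout pipelines are of course far more involved, and the 30-minute window is an assumed figure:

```python
import time

GRACE_SECONDS = 30 * 60  # hypothetical 30-minute undo window


class Order:
    def __init__(self, item, placed_at=None):
        self.item = item
        self.placed_at = time.time() if placed_at is None else placed_at
        self.cancelled = False

    def cancel(self, now=None):
        """Undoing the purchase is allowed only inside the grace window."""
        now = time.time() if now is None else now
        if not self.cancelled and now - self.placed_at <= GRACE_SECONDS:
            self.cancelled = True
            return True
        return False

    def finalize(self, now=None):
        """Charge the card only once the grace window has passed."""
        now = time.time() if now is None else now
        return not self.cancelled and now - self.placed_at > GRACE_SECONDS


order = Order("book", placed_at=0)
print(order.cancel(now=60))      # True: cancelled within the window
print(order.finalize(now=3600))  # False: nothing left to charge
```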


*online transaction speed: feature or bug?
*lack of humans in online transactions: feature or bug?
** We seem to be moving in a direction where the ability to engage with a human who works at a commercial business significantly decreases while the ability to communicate with other customers significantly increases.  This is a dramatic shift from how relationships with businesses previously existed -- one could easily find the proprietor of a business but could never interact with the customers who came before and after.  What does this mean for the way we make purchases?  Does this change the nature of "the customer" as viewed by employers?
**  There are websites dedicated to helping users get access to human support.[http://gethuman.com/]
* Computers and people gone wild! (please don’t google this)


- Should everything be open-source?
* A closed platform means that things can be innovative only within a predetermined limit; that is, we can only work within the realm of the expected (e.g., apps for the iPhone). But some of the greatest innovations have changed the paradigm for innovation completely, the obvious example being the Internet. The cost of closed platforms is that we do not even know what we're missing -- are security and cool apps worth it?
** Alternatively, if everything ''were'' open-source, would we face some variant of the tragedy of the commons? (''Tragedy of the commons'' -- In ye olde England, there was a public commons where everyone could let their cattle graze. But because it was a public space, no one took responsibility for it, so all the grass ran out and the place was a mess. Then the commons was privatized, and lo and behold, private ownership meant that the owner now had an investment and interest in the land, so the land became nice and green again. Even if the owner now charged people to let their cattle graze there. [http://en.wikipedia.org/wiki/Tragedy_of_the_commons])  Or is there something different about the ethos of the Internet, or about cyberspace as a ''space'', that makes the tragedy of the commons a non-issue?
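The parable's logic can be made concrete with a toy simulation: a shared pasture regrows at a fixed rate, each herder takes as much as possible because restraint benefits only the others, and a single owner who limits total grazing to the regrowth rate keeps the resource alive. This is a deliberately crude sketch, and all the numbers are arbitrary:

```python
def graze(grass, herders, take_per_herder, regrowth, years):
    """Each year every herder takes what it can, then the pasture regrows."""
    for _ in range(years):
        grass = max(0, grass - herders * take_per_herder)
        if grass > 0:
            grass += regrowth  # a dead pasture cannot regrow
    return grass


# Open commons: ten herders each take 15 units/year; the pasture collapses.
print(graze(grass=100, herders=10, take_per_herder=15, regrowth=50, years=5))  # 0

# Single owner caps total take at the regrowth rate; the pasture survives.
print(graze(grass=100, herders=1, take_per_herder=50, regrowth=50, years=5))   # 100
```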

Latest revision as of 15:35, 4 October 2010

Group One:

-Identity revealed beyond your comfort zone (ex. WOW message boards: forced real identity).
-Can online identity be protected as a possession? Who owns profile pages?
-Data portability as a privacy policy (who owns shared data?)(single sign-in)(facebook Connect)(OpenID)(persistent identity online).
-Cyberbullies, multiple identities online.
-How/can IRL ethics/morality be imposed in online spaces? Should they be?

1. The Right to Speak Anonymously

  • It would seem that the easiest way to impose IRL ethics/morality in online spaces is to make our online identities tied more closely to our 'real' identities. But at the extreme, with everyone having a single, unique online identity tied to something like Social Security numbers, we would be sacrificing our right to speak and act anonymously online. Is there a happy medium?
    • This may prevent multiple people from speaking under one name (i.e. NYU students combining to create Mr. Bungle in Julian Dibbell’s “A Rape in Cyberspace”). How would this work with Twitter accounts that were maintained by multiple people or corporations?
    • Perhaps a happy medium could exist if there were a way to impose social norms on pseudonymous users without revealing their real-space identity or requiring a unique portable identity. If users are required to use the same pseudonym on a particular site, their reputation on that site is tied to their pseudonym. If the website compiled a list of all of that user's activities and the reviews the user receives from other users, his reputation becomes tangible. Think eBay. Users on eBay do not often use their real identities, but they develop a palpable reputation through their interactions with other users. They are reviewed after each transaction. Though some have argued that eBay is unique because it is a commercial site, the reputation-building mechanism is no longer a product of just eBay. As someone posted below, Xbox Live now has its own mechanism for reputation building through user-provided feedback. Other users can access this information and decide whether they want to play with a random user name they cannot identify. If other websites/online social networks required users to have a single pseudonym and put an emphasis on reputation building on the space, the same norms could develop. ---Alissa.d.R. 00:38, 27 September 2010 (UTC)
      • A way of enforcing those norms is by putting individuals who violate community standards on a digital scaffold. Virtual worlds in the past have attempted to curtail undesirable behavior in their communities by punishing users in a visible fashion. For example, one virtual world put users that violated its terms and conditions in a virtual jail. Other worlds, such as Second Life, have created a quasi-board of users that review violations and make a determination. Complaints are publicly displayed on a related site, and the way in which they were addressed is also listed. The mere appearance that someone on the space cares about violations, and the fact that they do not go unnoticed, enhances compliance with norms. --Alissa.d.R. 00:39, 27 September 2010 (UTC)
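The eBay-style mechanism described above can be sketched in a few lines: feedback attaches to the pseudonym rather than to a legal name, so a site can enforce norms without de-anonymizing anyone. The names and scoring scheme below are hypothetical:

```python
from collections import defaultdict


class ReputationLedger:
    """Per-site feedback tied to a pseudonym, never to a real-world identity."""

    def __init__(self):
        self.feedback = defaultdict(list)  # pseudonym -> list of +1 / -1 votes

    def review(self, pseudonym, positive):
        self.feedback[pseudonym].append(1 if positive else -1)

    def score(self, pseudonym):
        """Publicly visible (net score, review count) other users consult."""
        votes = self.feedback[pseudonym]
        return sum(votes), len(votes)


ledger = ReputationLedger()
ledger.review("mr_bungle", positive=False)
ledger.review("mr_bungle", positive=False)
ledger.review("honest_seller", positive=True)
print(ledger.score("mr_bungle"))      # (-2, 2)
print(ledger.score("honest_seller"))  # (1, 1)
```

The key design point is that the ledger never stores a legal name: reputation is palpable within the site while real-space identity stays hidden.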

A. Cyber-bullying and Cyber-rape?

  • Generally: should people be held accountable for online behavior that causes real world effects? These effects are typically emotional harm, but victims can also be so affected that they cause physical harm to themselves.
  • It's likely that with cyberbullying, for example, that the kids being bullied know exactly who their antagonists are, meaning that anonymity is not at the heart of the problem. So what is? Is it a lack of consequences? Or consequences that, because they are in 'real' life, are insufficiently tied to their online behavior?
    • Although in some cases, such as the highly publicized Megan Meier case, it was not known who the cyberbully was. In fact, in that case, the "real world" bully was unknown in large part because the profile was of a fictitious person to mask the real identity of the bullies.
    • Perhaps we should split it into two categories -- "real name" bullying and "masked" bullying. In both of these categories, the lack of real world (adult) supervision or serious consequences (so far) could be fueling the bullying.
      • "Real name" bullying usually takes place on sites that encourage users to associate their online and real world identities.
        • E.g. sites like Facebook and MySpace (the Meier case notwithstanding) and instant messaging systems (AIM, gchat). Bullying on these sites is often highly concentrated among adolescents who have migrated traditional bullying onto the Internet.
        • Because the real world identities of these bullies are often easily identifiable, some mode of authoritative supervision could calm this type of bullying.
        • (Zara) The recent suicide of Tyler Clementi illustrates that online harassment can also occur when someone's personal information is publicized to the world without their knowledge or consent, not just through private, individually targeted chats. Public humiliation also has its real-world bullying counterpart, but the opportunity for bullies to broadcast humiliating information to millions of a victim's peers magnifies the problem. Someone pointed out that many issues in cyberlaw are centered on issues of access. Although issues of access and control are present in any discussion of the bundle of rights in real property, like issues in crowdsourcing, these issues are particularly salient in the internet domain. It has been argued that the crimes of harassment and invasion of privacy are adequate to protect against cyber-bullying, but it seems to me that statutes specifically tailored to punish cyber-bullying help reinforce the message that this kind of conduct is morally and legally unacceptable, which in turn helps bolster educational campaigns against it. The bully's level of culpability for unintended effects is also an issue, as the public demands criminal punishment in these horrific cases. [1]
          • What if someone has an online persona that looks like it may be real-world, but is really an elaborate satire? (e.g. Stephen Colbert’s Twitter account) If one of those accounts was to satirically “bully” someone, and they didn’t get the joke, how would that fit in here?
            • This might fall under the traditional reasonable person standard. If a reasonable person would have known it was satire, then that would be enough. Still, one can imagine a scenario where, despite knowing that it is satirical, the "bullying" conduct still causes the harms sought to be avoided.
      • On the other hand, "masked" bullying occurs when bullies are anonymous posters who usually hide behind pseudonyms not connected to their legal names.
        • E.g. JuicyCampus and CollegeACB (see http://www.collegeacb.com/ and a corresponding article: http://ksusentinel.com/arts-living/students-become-source-of-anonymous-bullying/ ), or message boards like World of Warcraft and LambdaMOO.
        • An ongoing case of "masked bullying" is the trial of William Melchert-Dinkel, who has been criminally charged with 2 counts of assisting suicide -- using a female alias on a suicide-related message board, he allegedly convinced several people to kill themselves. He confessed to police but now claims he did not know that encouraging suicide was unlawful. New York Times article. Is the language Melchert-Dinkel used in online forums encouraging suicide protected by the First Amendment?
        • World of Warcraft Real Names Controversy
            • Cisco article describing Blizzard's proposed changes to its message board identification system.
            • Blizzard's response to the controversy the proposal created. In this message board post, Blizzard gives up its plan.
            • Blizzard's proposal was tied to its Real ID plan, which is an interesting way to bring real world identity to the online realm in an unobtrusive manner.
        • Here, it may be harder to have any notion of supervision, because the sites themselves facilitate and promote anonymity and likely will invoke free speech claims to avoid working with regulators. -- JPaul (Jenny)
      • Can we distinguish sites that encourage anonymous posting because they want a wide variety of opinions and want to promote general speech from sites like Juicy Campus that used and assured anonymity to its users to elicit defamatory speech. The Communications Decency Act protects ISPs from suits for the content posted by third parties on their sites, but if they are eliciting a certain type of speech, shouldn't they lose their immunity? --Alissa.d.R. 01:38, 27 September 2010 (UTC)
  • Concerning cyber-rape, there are already video games that have been sold such as Japan's controversial Rape-Lay, where the object of the game is to sexually assault women in various ways. While the game does not have an online component, increasingly video games are being sold with such online interactivity. Moreover, cyber-rape has apparently occurred in online community games such as Second Life.

B. Free Speech

  • Is the Right to Speak Anonymously (or even the Right to Freedom of Expression?) harmed by forcing online users to sign in with "real identities" (achieved by requiring either verified credit-card billing names or, in less strict cases, Facebook Connect) to leave comments on newspaper websites, rather than leaving them anonymously as was common practice in the early days of the Internet?
    • Difficult to analogize to traditional 'letters to the editor' in print editions because it was much more difficult - if not impossible - to conduct a search of all the comments a given person had submitted to newspapers. Today, however, such a search would allow anyone to quickly pull together a portfolio of comments left by a given person across multiple publications.
    • If there is a right to speak anonymously, what are its limits? Are third parties required to reveal real identities when necessary to name a party to a lawsuit? What showing of harm to another person might be required? Under what circumstances are subpoenas of Internet service providers legal? It seems to me that the First Amendment justifications for anonymity are lacking in cases that cause harm to others, like cyberbullying, such that any right is extinguished. How absolute is this right?

C. What about people who try to keep RL and Internet personas separate?

  • While it is true that online identities have been increasingly merging with real-world identities through the use of real names, photographs, etc. on Facebook and similar sites, this is not always the case. Some Internet users seek to keep their real-world actions (part of their real-world identity) off the Internet.
    • E.g. On WoW and XBOX Live (among others), some people seek to maintain fictitious or anonymous online identities.
  • A recent Supreme Court decision in Doe v. Reed, however, considered when real-world acts are publicized, and importantly for our purposes publicized on the Internet. In that case, petition signers hoping to repeal a Washington state law that granted same-sex couples rights akin to marriage sought to prohibit others from gaining access to the petitions under the First Amendment. Opponents of the petitions intended to put the names and addresses on the petitions online in a searchable format. The Supreme Court held that public disclosures of referendum petition signers and their addresses, on its face, did not present a First Amendment violation.


2. What is the effectiveness of a single, unique online identity?

  • Possible case study: Microsoft's XBOX Live online service
    • Microsoft's XBOX Live services assign users a unique online username that identifies the user across all the games and services offered by XBOX Live. This unified identity allows a user to easily maintain relationships with other users, compare past accomplishments and activities, and effectively establish an online community.
    • From a Lessig framework, Microsoft has used 3 of the 4 regulators to motivate people to become attached to one identity and work hard to preserve its reputation.
      • Norms: Microsoft allows users to rate other users and assign positive and negative feedback. Other users can easily access this information and determine if this is someone they want to associate with. A user's accomplishments and stats from playing games are tied to his unique identity, making it a valuable indicator of skill and status in the online community.
      • Market: In order to acquire an XBOX Live account, a user must pay $60 for a year-long subscription. A subscription only gives a user access to one username and, therefore, one identity. If someone wanted to create another identity or if Microsoft banned a user from XBOX Live, that user would have to pay for another account.
      • Architecture: Microsoft has built in the rating system listed above. As a closed platform, Microsoft also has the ability to ban a user from online activities. This would force the user to purchase another account and would prevent that user from associating himself or herself with past accomplishments and reputation. Given this ominous power, one should be extra careful not to do anything to warrant banning.
      • Laws: nothing outside of normal tort laws
    • Result: this requires more research and testing. However, common wisdom (note: this is from my own personal experience as someone who has played online and has read many opinions about the service) is that communication on XBOX Live is a morass of racist, sexist, and violent comments. Many individuals refuse to communicate online anymore. Despite all of Microsoft's safeguards, there is no effective deterrent to this type of behavior.
  • Are entities like Microsoft hampered by the fact that these online identities, in a sense, don't matter? If I have a unique XBOX Live identity, how am I harmed outside of the XBOX Live community if I act poorly online and am banned? This won't harm my relationships outside of this online realm or ability to get jobs.

A. Do we need to have more "real world" ramifications for online activity?

  • For example, if law firms required me to list my XBOX Live account name, my Facebook account URL, and my Twitter name (and required me to make all of them public), that would greatly change how I act online. Is the notion of an online identity affected by OCS telling all students seeking law firm positions to make their Facebook profiles as secret as possible?
    • Along the same lines, will it be possible for future generations to truly keep their private and professional lives separate, or has their use of the Internet from an early age made that nearly impossible? Can employers justify hiring or promotion decisions based on observations made outside of the workplace that have been gathered online? --Alissa.d.R. 01:38, 27 September 2010 (UTC)

B. How does requiring a persistent identity mesh with policies behind law?

  • Minors may have the opportunity to expunge or seal criminal records under the concept of learning from youthful mistakes. However, this is a system completely controlled by the government. Given the nature of the Internet, it may be impossible to offer a similar service, as websites could continue to cite the damages caused by an online user who is easily traceable to a real-world individual.
  • If we could expunge a minor's record, do we want to? For security reasons, we may not want minors to be identified as such online. Therefore, people will treat an online identity that belongs to a minor as if it belonged to an adult. Similar to minor being held to adult standards when participating in adult activities under tort law, should we hold minors to adult standards if they are perceived as adults in the online realm (which would include not allowing their record to be erased)?
    • Google CEO Eric Schmidt predicts, according to an August 4, 2010 interview in the WSJ, that "every young person one day will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends' social media sites."
    • However, allowing someone upon turning 18 to disown "youthful hijinks" promotes a culture that separates consequences from actions. Instead of eliminating the past, why don't we provide it with more context?
      • As a proposal, why don't we use the architecture/law prongs of the Lessig test to create a structure in which the online activities of an individual, from his first entry into the online world to the last, are stored on a server (this is extremely big brother-ish, but let's just play this out). The user can establish as many identities online as he likes to represent himself to other users, but all of these identities are connected to the user's real world identity. All of the actions a user takes as a minor are branded as the actions of a minor. The way we would then use this information would be similar to a background or credit check. Employers looking to hire the user can request a report on his Internet activity. They would then receive a report that details his actions. This could either be exhaustive, a general overview, or limited to whether others have complained about his actions. This report will indicate what the user did and when in his lifetime he did it.
      • Therefore, if the user did something embarrassing or bad, this report will provide more context than a mere Google search.
  • Should the notion of an online identity include search results? If so, how should this information be controlled? Wikis indicate when an entry is in dispute. Could search engines provide a similar service for websites attacking individuals? Is it technically possible to screen for negative publicity and separate out these websites in a search?
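The report-generating proposal above can be made concrete: actions are logged against one real-world identity with an age flag, and the requester chooses the level of detail. The sketch below is entirely hypothetical, in keeping with the thought experiment; the names and 18-year threshold are assumptions:

```python
class ActivityLedger:
    """Logs online actions against one real-world identity, flagging minority."""

    def __init__(self, birth_year):
        self.birth_year = birth_year
        self.entries = []  # (year, pseudonym, action, was_minor)

    def log(self, year, pseudonym, action):
        was_minor = (year - self.birth_year) < 18
        self.entries.append((year, pseudonym, action, was_minor))

    def report(self, exclude_minor=False):
        """Background-check style report; granularity chosen by the requester."""
        lines = []
        for year, pseudonym, action, was_minor in self.entries:
            if exclude_minor and was_minor:
                continue
            tag = "MINOR" if was_minor else "ADULT"
            lines.append(f"{year} [{tag}] as '{pseudonym}': {action}")
        return lines


ledger = ActivityLedger(birth_year=1992)
ledger.log(2008, "xX_gamer_Xx", "flamed a forum")
ledger.log(2012, "j.smith", "published a blog post")
print(ledger.report())  # the MINOR tag supplies the context a bare search lacks
```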

D. Reputation Bankruptcy?

  • In an article on his blog, Professor Zittrain discusses the concept of reputation bankruptcy.
    • I find the tie between the solution to identity issues and bankruptcy to be troubling. First off, one of the major problems with online reputation is not a completely toxic reputation, but a singular occasion of poor discretion or the malicious agendas of a few individuals.
      • E.g. In the AutoAdmit case, users of a law school message board maliciously attacked a Yale Law student. These negative posts were among the first page of hits in a Google search of the student's name. The student received no offers from law firms and, although no causal connection was proved, these negative posts could have been a reason why she did not receive offers. In cases like these, individuals need selective erasure of third-party attacks or of a past mistake that unfairly restricts a promising future. However, Professor Zittrain suggests that reputation bankruptcy could involve wiping everything clean, "throwing out the good along with the bad." The fresh start of bankruptcy is a dramatic change and should be reserved for dire situations. In many cases, it may be a matter of using a sledgehammer to kill a fly.
    • Another concern is similar to the one I proposed to the idea of "disown[ing] youthful hijinks." If someone can declare reputation bankruptcy, that individual is saved but the communities he or she was a part of are still scarred by that individual's actions. If, in the aforementioned XBOX Live case study, a user could declare reputational bankruptcy, that would do nothing to help XBOX Live's negative reputation and people's unwillingness to engage in that community. Under bankruptcy, creditors are able to receive some amount of compensation for the debts owed to them. What kind of compensation can online communities receive in reputation bankruptcy?
  • How do we teach youths just entering the online world how to interact with it and maintain a praiseworthy identity? [WSJ]


3(a). Facebook Profile Portability

Let's do more research into data portability as a privacy policy, which relates to the above. Facebook could be a good case study. What options and protections are there to port an online identity / profile, i.e. Facebook messages, friend listings, and wall-postings? What can and cannot be permanently deleted?

  • It has been argued that Facebook has created a "semipublic" shared space for exchange of information. NYT If I send a private message or make a post on my wall, such information would be owned by me. But what about posts made by other people on my wall, or pictures and video I have been tagged in?
    • What happens to this shared data if I close my account?
    • Should information uploaded by other people become part of my online identity? If I look at my Facebook wall I can see that only a minor part of it is made of my own contribution. Would my online identity be the same without other people's contributions?
    • And what about my posts on other people's wall? Are those part of my online identity? If we own our personal information, should we own also our posts on other people's walls?
    • Let's assume we have complete portability of our online identity, including material submitted by third parties: what are the privacy implications of this from a third party's perspective? Are we ok with the third party's posts and tagged pictures being transferred? Should the third party be notified? Should the third party give express consent?
  • (From Erin:) I know that both Facebook and Google joined DataPortabillity.org. I think that a lot of blogging software (livejournal, typepad, wordpress, blogger, etc.) have been at the forefront of allowing pretty easy portability between systems. So facebook is probably the more interesting case study in terms of a "difficult problem" — particularly as it relates to third party information — but it may make sense to look at these as a start (esp the analogy to third party comments on your blog)...


3(b). General Online Identity Portability

While 3(a) focuses on Facebook, how about tackling the more general question about our online identities:

  • How should online identity be regulated?
    • Lessig's framework, incl. benefit analysis of potential laws, norms, market mechanisms?
      • re laws: 3 privacy laws passed at state level, federal privacy law potentially coming in next Congress - what will (should?) it include regarding online identity?
      • re norms: self-regulation: could talk about OAuth, OpenID here and what role they could play
      • re market place: thinking about Google's leaked internal communication from 2008 about creating a market place for privacy/personal data)? See: [[2]]
  • What frameworks / initiatives do currently exist? Who has, should have control - Government vs private sector (Could Facebook be the personal online ID provider)?
    • NSTIC (National Strategy for Trusted Identities in Cyberspace) [[3]], first draft strategy document published in June 2010, available for download on homepage
      • Goal of initiative: Identify solutions ensuring (1) Privacy, (2) Security, (3) Interoperability, (4) Ease-of-Use
      • Key roles and perspectives to analyze: Individuals, Identity Providers, Attribute Providers, Relying Party
      • (Source: NSTIC Presentation, Trusted Identities Panel, OTA Online Security and Cybersecurity Forum, Washington D.C., September 24, 2010) --Reinsberg 19:33, 26 September 2010 (UTC)
      • Vision proposed by NSTIC: "An individual voluntarily requests a smart identity card from her home state. The individual chooses to use the card to authenticate herself for a variety of online services, including: Credit card purchases; Online banking; Accessing electronic health care records; Securely accessing her personal laptop computer; Anonymously posting blog entries; and Logging onto Internet email services using a pseudonym."


4. Applying Privacy Policies Worldwide

What are the challenges social networks face at the international level and in countries other than the US?

  • Are privacy policies adopted by social networks enforceable everywhere?
    • Consider Facebook's approach: Facebook adheres to the Safe Harbor framework developed between the US and EU as regards the processing of users' personal information. Safe Harbor Is this enough to shield Facebook from privacy claims coming from outside the US? What about countries outside the EU?
    • Should Facebook be concerned at all about its international responsibility? Consider the case of the Google executives convicted in Italy for a breach of privacy legislation. Assuming the conviction is upheld in appeal, can it ever be enforced? Where are the offices of the company? Where are the servers? Where are the data actually stored and processed?
  • More generally, what types of information created by users is 'personal data' about which they have/should have a reasonable expectation of privacy and should be subject to regulation?
    • The line between personal information about which people have a reasonable expectation of privacy and information that is not personal and that need not have restrictions relating to privacy can be a difficult one to define. For example, is information about how a driver drives a car that gets recorded on an in-car computer and potentially transmitted to a car rental or the car manufacturer 'personal' information that is/should be covered by data protection laws? What about information that is picked up by Google when taking images for Google Street View (e.g. IP addresses of neighbouring properties)? (See discussion in Information Commissioner’s Office (UK), Statement on Google Street View, August 2010). The problem is that in many cases this information on its own does not identify a particular individual but that it could be used in combination with other information to identify people. Yet when we use the Internet so much information is created and it may not all be information that should be subject to privacy regulation. See discussion about this problem in a New Zealand context in Review of the Privacy Act 1993, NZLC 17, Wellington, 2010 (Australia and the UK are considering similar issues).
  • Who/what are the main sources of privacy invasion that people anticipate? Is it the same privacy invasion if it comes from a company (Google), a government looking for potential terrorist activity, or just an acquaintance who likes to Facebook-stalk others?
    • Do we hold private companies to a higher standard if we know that they have the means to protect our privacy more? Should such companies be held to the same standard in every country? If not, aren't there problems with information that is accessible in some countries but not in others?


5. Cyber-security

  • Cyber-space was first used by script-kiddies as a playground for web defacement, etc., then discovered by criminals as a new means to expand their activity, followed by transnational crime syndicates, followed by hackers with a political agenda ("hacktivists"), until eventually governments too discovered cyber-space. Since the DDoS attacks on Estonian websites in 2007 pushed the issue into NATO circles, cyber-security has been increasingly in the headlines. A number of questions emerge from this:
  • Real threat vs. threat inflation. How much of what is written in newspaper articles and books is much ado about nothing, and what can be considered a real risk? If there is a risk, is there also a threat? What determines what constitutes a threat? Richard Clarke's book "Cyber-war" paints a gloomy picture. Is this self-interest on the part of an author who works as a cyber-security consultant, or is there more to it?
  • Cyber-crime <-> cyber-espionage <-> cyber-hacktivism <-> cyber-terrorism <-> cyber-war (cyber-intrastate war/cyber-interstate war). Costs today? Costs tomorrow? Technical solutions? Policy/legal solutions? National/international level? State vs non-state actors? Public/private?
  • Cyber-war vs. cyber-peace. Why does so much of the literature use language such as "cyber-war" and "cyber-attack" rather than language such as "cyber-peace" and "cyber-cooperation"?
  • Terminology. What is the difference between a cyber-hacktivist and a cyber-terrorist? What constitutes a "cyber-attack"? Given cyber-space's virtual borderlessness, is it appropriate to speak of defense/offense or active/passive (e.g. the Outer Space convention)? Is cyber-space a territory like the High Seas, Antarctica, or Outer Space? Or a new domain after land (army), sea (navy), and air (air force)? Is cyberspace a "common heritage of mankind"? What is the relationship between the virtual and the kinetic?
  • Civilian vs. military. How is cyber-security changing the relationship between civilian and military institutions? The DoD is responsible for defending .mil, DHS for defending .gov. What about the other domains? The German DoD is responsible for defending the German military network, the Ministry of the Interior for the government websites. How do civilian Ministries of the Interior, with their police forces, relate to a cyber-attack originating outside the country, when an international attack is usually the responsibility of the military branch of a democratic government? What are the lines of authority, e.g. for the planting of logic bombs or trapdoors?
    • What would the authority of the military be in addressing attacks on civilian networks, if any? Does the government have a role or responsibility to address non-government networks? Structurally and legally, how would implementing this role be done? Would there be any problems of privacy protection and government over-interference?
  • If the government is going to take a role in strengthening private network security, which networks should it protect first? Who should be involved in the oversight of this protection--military, civilian gov. actors, private actors?
    • Would these actions fall under the Cyber Command?
    • The New York Times reports that "the new commander of the military’s cyberwarfare operations is advocating the creation of a separate, secure computer network to protect civilian government agencies and critical industries like the nation’s power grid against attacks mounted over the Internet." [4]
      • Are "secure zones" a viable solution to protecting critical infrastructure? Is this an oversimplified vision of secure systems that assumes cybersecurity is analogous to real space? What are the drawbacks to this approach? The article notes that the cyberwarfare commander did not demarcate the line between public and restricted government access.
  • Role of private actors. How are ISPs, hardware and software companies integrated into the discussions/policy-/law-making process? How much power do they have? Allegiance to profit? Allegiance to country? Allegiance to open cyber-space? Are there public private partnerships? Do they work? What are their strengths/weaknesses?
  • Role of hackers. In the early days, the battle was government vs. hacker, or state vs. hacker guided by a hacker ethic. This was before the internet expanded around the globe, and it fit the Western tradition of state vs. individual. After the expansion, how has this relationship changed? Is there a transnational hacker culture, or are hackers of country X more closely aligned with the government of country X, and hackers of country Y with the government of country Y, rather than hackers of X and Y aligned against the governments of X and Y?
  • With the attribution problem and the transition problem (virtual-physical world), how much security is necessary and how much generativity is possible? What can be done to reduce the risk? What can be done to reduce the threat? An international convention? A code of conduct among major companies? International confidence-building measures?
  • Enforcement. What would an international regime/agency that solves the security dilemma look like? A cyber-IAEA? Or could an existing regime (such as NATO) be more effective?
    • What responsibility would countries have for hackers / attacks originating from their own countries? How could one separate attacks from a private individual (that a country could disaffirm responsibility from) and from a government-sponsored initiative?
    • What are the main sources of the threats that the US and other countries are anticipating? Are they state-based? How, if at all, would this affect our relationships and foreign relations with those states?
    • What kind of retaliation would be appropriate, once an attack has been discovered? If necessary, would a country engage in counter-cyber attacks, or more traditional retaliation such as economic sanctions or even military action?
    • Does the US have special responsibilities for a global safe and free Internet? Should it take the lead in preventing attacks in other countries that are less equipped to protect themselves?

Group Two:

- property
- online things acquiring IRL value
- what happens to digital possessions after death?
- who has access to your accounts (fb, twit, gmail, etc) after death
- (TOS after death)
- first sale doctrine in software
- first amendment rights with online comms (going through someone’s infrastructure)

1. Death and digital property

  • What happens to your virtual possessions when you die? Who owns your various accounts (Facebook, Google, WoW, etc)? Do the terms of service, unread by everyone, dictate post-mortem property rights? Would the ToS be enforceable if an heir brings an action to recover a deceased's virtual possessions? Should companies be required to maintain a user's data (and for how long?) after they die until an heir comes forward to claim or discard it?
  • At the moment most virtual possessions are purely sentimental in value. But we can imagine a time in the not-too-distant future in which a virtual possession has significant real-world value. For example, someone with a large and devoted Twitter following could monetize their account by licensing its usage to an advertiser. Suppose someone, let's call her Martha S., decides to use her Twitter feed as a source of revenue in this manner and develops it into a brand. Upon Martha's death, what rights do her heirs have relative to Twitter as to the use of her brand? Furthermore, if Martha had entered into long-term licensing deals regarding her Twitter brand and dies suddenly, where does that leave her licensees? Certainly these issues will only get more complicated as the Internet becomes more complex and the value of virtual possessions increases.
  • The fundamental question is who owns property created by a user in a private party's virtual sandbox? The user? The company? Or should there be some sort of co-ownership/co-authorship?
  • Ars Technica article on how Facebook, MySpace, Twitter, and Google subsidiaries treat death.
  • Whole set of websites attempting to address this issue, and allow some users autonomy to make decisions about what happens to their accounts after they are unable to manage them anymore:
    • Legacy Locker "is a safe, secure repository for your vital digital property that lets you grant access to online assets for friends and loved ones in the event of loss, death, or disability."
    • My Webwill "allows you to make decisions about your online life after death. You can choose to deactivate, change or transfer your accounts, like Twitter, Facebook or your blog. At the time of your death we perform your wishes."
    • But given the transience and uncertain future of so many of these companies, can someone really interested in this trust that the sites will still be around? Do you need to keep updating it for every new service that you sign up for, which, over the course of several years, will presumably include several different sites? Should it be either a more centralized solution (say, a service provided by the government that has a feeling of more permanence) or a more decentralized one (say, a last will and testament that you can basically draft on your own)?

2. Other disconnected thoughts on digital property and avatars

  • Nature of Property Rights
    • Related to the above discussion of Martha S., can a user license the use/image of their account/avatar to third parties? Does the website own the avatar? Do users have a right to publicity for the use of their account or avatar? I.e. can Second Life take John Doe's avatar and use it to sell virtual Sham-Wows without Doe's consent?
    • Can you have a copyright in your avatar? In many instances they could count as pictorial, graphic or sculptural works. Is/should the copyright be shared with the owner of the software that creates the virtual world? How should these rights be balanced?
    • One objection to copyrighting an avatar might be that the user is simply recombining various elements which are each already part of a copyrighted work (the software) created by the company. However, at best this merely makes the avatar a derivative work. Think of a mobile made from random junk, or a scrapbook/collage created from photos and magazine clippings. Just because a user's medium and materials are limited by the confines of the software does not mean that the user's avatar has insufficient originality of expression.
    • When affording property rights, intellectual or otherwise, to users, we must not forget to account for the economic incentives this will create for the companies that create the software that enables the users in the first place. If every user has copyrights and other rights relating to their accounts and avatars, there could potentially be a negative impact on the innovation and creativity of the software companies. Is it worth it to Blizzard to deal with all the legal headaches associated with such rights? Maybe. Is it worth it to some capital-poor start-up company with the Next Big Thing? Maybe not.
      • Could this all be side-stepped with ToS and EULAs that assign rights to the company? Could this assignment be later revoked by the user? Is any of it enforceable?
  • Enforcement Issues
    • Do or should users have some sort of recourse for avatar defamation?
    • If they fall under copyright law, could avatars be seen as fundamentally useful articles and thus not protected? This would obviously require a case by case analysis ("No, Bob's character is not protectable simply because he carries an ornate giant broadsword; that sword is necessary for killing orcs").
    • How do we measure the value of virtual property across multiple platforms? Are there enforcement rights for invasion of my Second Life property if that invasion takes place on Facebook?
    • In a broader sense, how is virtual property tracked, and how does it evolve? At least one company has had the idea of creating some kind of central registrar: the rather unimpressive Digital Property Registry.

3. Speech and Censorship

  • Speech, Censorship, Statistics. Should we be concerned with ISPs' and website owners' ability to aggregate and control information and speech? It seems that at least Google thinks that Internet users may be concerned with this topic. Google recently announced the "Transparency Report," which (incompletely) tracks usage statistics by country, as well as Google's removal of online material at governments' request. How should corporations manage such governmental requests? What rules should they apply? How should they decide on a set of rules, and whether those rules are universal or case-specific? What benefits are realized by providing this information publicly--particularly the tracking information? How can users or other entities use this information?
    • Should Google be required to censor search results when they are manipulated for harm by users? The Superior Court of Paris thought so in this ruling.
  • When using a digital soapbox created and hosted by a private party, what First Amendment speech rights does a user have? There may be a necessity to publicize a message using privately owned infrastructure. Does the owner of this infrastructure have a right to regulate the content of the messages being sent or discriminate between users on the basis of their messages' content? If the Internet is the new town square, then shouldn't everyone have the right to yell whatever nonsense they want?

Group Three:

- liability for security breaches (negligent design/management)
- wikileaks! (jurisdictional problems, prosecution) (how does filtering affect wikileaks?)
- transparency on internet services (google: how does it work?)

1. Liability for Security Breaches and Flaws

  • Software insecurity:
    • Security guru Bruce Schneier has argued that imposing tort liability is desirable as a method of forcing vendors to internalize the costs of insecure software. See Liability Changes Everything and Computer Security and Liability.
      • How convincing is his suggestion? What sorts of costs would this impose on software companies? Would such a rule drive small players out of the security market? Would individual contributors to open source projects potentially face liability?
    • Law professor Michael D. Scott makes a similar argument, and notes that Sarbanes-Oxley requires publicly traded companies to certify that their systems are secure, while imposing no obligations on the vendors who actually provide the software. See Tort Liability for Vendors of Insecure Software: Has the Time Finally Come?
      • There was a case in torts Kline v. 1500 Mass. Ave. that found that landlords are responsible to take reasonable security measures to protect tenants from attacks in common areas that are primarily under their control, such as lobbies or hallways by providing, for example, working locks. Could this case be applied to require ISPs and companies hosting websites wherein users store their personal information to protect that information from hackers and other security breaches? --Alissa.d.R. 01:38, 27 September 2010 (UTC)
  • Database insecurity:
    • Summaries of a few recent cases that address database breaches: Developments in Data Breach Liability.
    • Law professor Vincent R. Johnson argues that tort liability is an appropriate mechanism for creating incentives and managing risks associated with cybersecurity: Cybersecurity, Identity Theft, and the Limits of Tort Liability. Some issues he raises:
      • Duty to protect information: California's Security Breach Information Act imposes such a duty. The obligations that Gramm-Leach-Bliley imposes on financial institutions arguably support liability on a theory of negligence per se.
        • Can market forces adequately address insufficient database security?
      • "Duty to inform of security breaches": This could be analogous to a failure to warn theory of negligence liability.
      • The economic harms rule seems to impose a significant bar to recovery. What about requiring the database-owner to pay for security monitoring? A risk-creation theory might support this approach.

--98.210.154.54 23:13, 21 September 2010 (UTC)Davis

2. WikiLeaks

Note: It seems to me, after reading the cybersecurity entry, that the "WikiLeaks" problem could be moved under that category and used as an example of how the Internet can magnify the consequences of a data breach in the physical world. Does anyone else agree? (I'd say no. There are other issues I've added that make it distinct. See below. -- Austin)

  • Real-world data breach: Soldier suspected of leaking classified military reports to whistleblower website WikiLeaks [5]
    • This kind of leak is made much more likely by the growth of digital information. Spending two minutes to copy thousands of records to a CD labeled "Lady Gaga" v. making copies of the Pentagon Papers and smuggling them out under your shirt. Do we live in a more "leaky" age? If Wikileaks proves capable of protecting the anonymity of its contributors, should we come to expect that any information that is sufficiently important to public discourse will eventually find its way into the wild?
      • If these sorts of leaks become increasingly common, will there be a significant effect on the public's expectations as to government(s) transparency? Will Wikileaks-like organizations become an accepted "unofficial" path to the release of information? Or will an increased public expectation of transparency force less government secrecy (fewer documents classified, documents declassified sooner)?
      • What effect will Wikileaks have on transparency in private entities with significant public impact? Will the public or traditional press treat a leak from Monsanto the same way as a leak from the Department of Defense?
  • WikiLeaks posts documents without redacting the names of Afghans who provided intelligence to the United States. [6] The Taliban said it was using the WikiLeaks site to comb for names of Afghan informants, while the traditional press/gatekeepers said they had redacted the documents they posted to avoid "jeopardizing the lives of informants." [7] It seems as if the Internet has allowed people to bypass the traditional gatekeepers -- be they the government or the press -- and that this magnified the effects of the real-world data breach.
    • Wikileaks _did_ provide the Pentagon the opportunity to redact sensitive information from documents. [8] The Pentagon refused. To what degree should the Pentagon and other groups targeted by Wikileaks and similar organizations be willing to work with those organizations? From the perspective of the targeted group, cooperation with such a group legitimizes it and increases its public stature. From the perspective of the media organization, a working relationship with a targeted group may cause supporters to question the organization's independence. Is there a balance to be struck here? Does the formation of these relationships make "new media" organizations like Wikileaks look too similar to the "old media" organizations that depend greatly on relationships with government officials for content?
  • Jurisdictional problems and prosecution: The U.S. government is able to prosecute the real-world leaker, but likely won't be able to prosecute WikiLeaks -- the organization that used the Internet to magnify the effects of the leak, etc. -- because of jurisdictional problems and because of a lack of on-point law. [9]

--JPaul (Jenny)

  • What is the role of the primary source in our media landscape? Wikileaks hasn't quite figured this out yet. In the Collateral Murder video release, for example, Wikileaks was criticized for releasing an edited video alongside the raw footage from a helicopter attack on journalists and civilians in Afghanistan. In the War Diaries release, Wikileaks again received sharp criticism, as described above. What is the role of the "gatekeeper" in today's media? Can primary source documents contribute effectively to discourse, or do they provide so much information that a reader cannot process it all? Would the War Diaries release have had greater focus if the vast number of reports had been reviewed by crowdsourcing?
  • How will Wikileaks shift public opinion with regard to the use of anonymous sources? As some media critics have pointed out, traditional media sources, even while criticizing anonymous or pseudonymous bloggers, rely heavily on the statements of anonymous officials in their reporting. [10] Will there be a backlash against anonymity or will it become more widely accepted? Is there a distinction to be made between anonymous sources and anonymous reporters? How do trust and reputation fit into all this? (this issue fits pretty well into Group 1's anonymity issue above)
  • Can the Wikileaks model of funding (releasing a big story and then relying upon donations) be more widely applied to other forms of independent journalism?

Random Tangent Thought:

  • Can a lot of these issues be thought of as issues of access? Cyber can be seen as basically lowering the barriers to access that exist in physical reality.
    • A lot of the privacy concerns can be reframed as an issue of control over access — who has access to your facebook photos? Your search queries? Wikileaks documents? And for what purpose?
    • But access to information is largely what makes it such a powerful communication mechanism, which ties into access on the macro instead of micro level: who (internationally, domestically, economically) has access to cyber?
    • The aggregation of user information (facebook social graph, google trends, etc.) could be extremely valuable to the public, and denying anyone the ability to access it could be inhibiting the pursuit of knowledge about ourselves.
      • But who gets to access such aggregated information? The companies that set up the platforms through which the public expresses itself, or is it some sort of inherently-public property?
  • Tying it back to Lessig, is there a way to take the ideas that we develop about access to information in the cyber world and bring them back into the physical world?

Group Four:

- The legal or regulatory meaning of Net Neutrality Principle

  • According to Lessig, there is no fixed architecture for the Internet - it is all "code" written by humans. The Net Neutrality principle and Prof. Zittrain's concept of a "generative Internet" are attempts to lay down fundamental values about which Internet architecture, among all possibilities, is the most desirable model that we should pursue.
  • Suppose the concept of Net Neutrality is clear (which it actually is not): what practical regulatory implications should this principle bring? There are seemingly contradictory practices in the name of Net Neutrality. For example, ISPs prohibit P2P file-sharing, alleging that it takes up too much bandwidth, so that other users are able to use the networks equally. These practical issues remind us to rethink some fundamental problems about what neutrality means for the Internet.
  • Layers in regulations that protect Net Neutrality: Are there certain applications that should be given high priority to run on the Internet, or should all applications be given the same weight? (For instance, should our policy equate the bandwidth used for emails and web browsing with that used for high-resolution video or gaming?)

-“to what extent is our judgment about tech related to the “coolness” of the tech itself?”

  • User Satisfaction versus Company Profitability. Closed platforms like the iPhone present significant benefits at a cost. It may be helpful to frame benefits and costs in terms of user satisfaction and company profitability, rather than any particular feature of the device using the platform. We can, of course, ask about particular features that create or diminish user satisfaction or company profitability, but we won't talk about the features as if they confer some independent benefit. This is just a way of conceptualizing when society will tolerate certain technological constraints.
    • The iPhone. Steve Jobs has a vision for the iPhone, and that includes regulating a large portion of what goes on and can go on the phone. Let's take a look at how the user satisfaction/company profitability model applies.
      • Profitability. The iPhone's closed platform provides at least two valuable and related benefits. First, it allows Apple to keep its operating environment "safe." Without unauthorized third-party applications--i.e., with all apps being Apple-approved--there is less risk of the introduction and dissemination of malware. This reduces costs for Apple, which doesn't have to respond to consumers whose phones have been destroyed by viruses. A second, related benefit is branding. Because Apple can keep its system closed, it can design the environment in which it operates and market that environment as a product. This design means Apple can extract profits from third-party apps by conditioning access upon, among other things, payment. It also makes the company more profitable because Apple can advertise and promote itself as a "safe" place that operates seamlessly. Nevertheless, this raises issues about how far Apple will regulate its platform. Will it simply condition access by third-party applications, or will it go further and monitor its users? If Jobs is concerned that users will upload pornographic pictures on his phone, will the future iPhone be programmed to automatically identify and remove or block such photos? Does Jobs' vision relate to profitability, or simply personal preference? (This last question will be relevant to considering user satisfaction.)
      • User Satisfaction. For most users, the iPhone's closed platform doesn't seem to cause any immediate problems. There are plenty of cool apps that individuals can download and use. The iPhone certainly scores high on aesthetics, even if some of its features are low on performance. Users tend to love aesthetics, and have overlooked the fact that, for instance, the iPhone can run only one program at a time. The closed platform's safety also provides a benefit to users, who don't have to worry about protecting their phones from malware. So far, user satisfaction is high. The balance between user satisfaction and profitability seems to be in equipoise--for now. The question for the future is whether Apple will close off more territory, and whether its current sectioning will stifle the actions of users in the future. As to the former, Apple might meet substantial resistance from the public if it begins regulating their private behavior more explicitly. As to the latter, the future is hard to predict. If users become more adept with their phones or demand new features that the closed system stifles, Apple may have to modify just 'how' closed its system should be. Of course, it may respond by making even "cooler" design, thereby satisfying users sufficiently to distract attention from the new (or old) restrictions that remain in place. If consumers detect that Jobs' personal preferences are dictating the ways they can use their phones, their dissatisfaction may win the day.
    • "Pandora Hour Limits." Pandora's 40-hour (not sure if that is the exact number, but the important part is that there is a limit) limit for free users has had an impact for avid-users, taking away from their satisfaction.
      • "Profitability." The only way to take advantage of a freemium model of revenue is to provide users with more incentive to go premium rather than the non display of ads. Users seem to be satisfied with this because of the loophole of just creating new accounts, however, this is also a process not liked by consumers.
      • "User Satisfaction." Often times unnoticed, not causing immediate problems. There are plenty ways users have gotten around this restriction, especially by just creating a new account that requires only an email address.

Lack of Humans in Online Transactions

The process of purchasing something online has almost become too easy for users and is generally irreversible, especially on a website like eBay. Amazon introduced one-click purchasing where, with a single click, your credit card is charged and the item is shipped. There is no human contact on the receiving end of a transaction, leading to a significant amount of error and unintended expenditure. More human contact is needed, and the process needs to be slowed down to ensure privacy and accuracy.

  • online transaction speed: feature or bug?
  • lack of humans in online transactions: feature or bug?
    • We seem to be moving in a direction where the ability to engage with a human who works at a commercial business has significantly decreased, but the ability to communicate with other customers has significantly increased. This is a dramatic shift from how relationships with businesses previously existed -- one would easily be able to find the proprietor of a business but would never be able to interact with the customers that came before and after. What does this mean for the way we make purchases? Does this change the nature of "the customer" as viewed by businesses?
    • There are websites dedicated to helping users get access to human support.[11]
  • Computers and people gone wild! (please don’t google this)

- Should everything be open-source?

  • A closed platform means that things can be innovative only within a predetermined limit; that is, we can only work within the realm of the expected (e.g., apps for the iPhone). But some of the greatest innovations have changed the paradigm for innovation completely, the obvious example being the Internet. The cost of closed platforms is that we do not even know what we're missing -- are security and cool apps worth it?
    • Alternatively, if everything were open-source, would we face some variant of the tragedy of the commons? (Tragedy of the commons -- In ye olde England, there was a public commons where everyone could let their cattle graze. But because it was a public space, no one took responsibility for it, so all the grass ran out and the place was a mess. Then the commons was privatized, and lo and behold, private ownership meant that the owner now had an investment and interest in the land, so the land became nice and green again. Even if the owner now charged people to let their cattle graze there. [12]) Or is there something different about the ethos of the Internet, or about cyberspace as a space, that makes the tragedy of the commons a non-issue?