Class 2
Group One:
- Identity revealed beyond your comfort zone (ex. WoW message boards: forced real identity).
- Can online identity be protected as a possession? Who owns profile pages?
- Data portability as a privacy policy (who owns shared data?) (single sign-in) (Facebook Connect) (OpenID) (persistent identity online).
- Cyberbullies, multiple identities online.
- How/can IRL ethics/morality be imposed in online spaces? Should they be?
1. The Right to Speak Anonymously
- It would seem that the easiest way to impose IRL ethics/morality in online spaces is to make our online identities tied more closely to our 'real' identities. But at the extreme, with everyone having a single, unique online identity tied to something like Social Security numbers, we would be sacrificing our right to speak and act anonymously online. Is there a happy medium?
- Then again, with cyberbullying, for example, it's likely that the kids being bullied know exactly who their antagonists are, meaning that anonymity is not at the heart of the problem. So what is? Is it a lack of consequences? Or consequences that, because they are in 'real' life, are insufficiently tied to their online behavior?
- In some cases, however, such as the highly publicized Megan Meier case, it was not known who the cyberbully was. In fact, in that case, the "real world" bully was unknown in large part because the profile was of a fictitious person created to mask the real identity of the bullies.
- Is the Right to Speak Anonymously (or even the Right to Freedom of Expression) harmed by forcing online users to sign in with "real identities" (achieved by requiring verified credit-card billing names or, in less strict cases, Facebook Connect) in order to leave comments on newspaper websites, rather than leaving them anonymously as was common practice in the early days of the internet?
- The analogy to traditional 'letters to the editor' in print editions does not hold: it was much more difficult, if not impossible, to conduct a search of all the comments a given person had submitted to newspapers, whereas today such a search would allow anyone to quickly pull together a portfolio of comments left by a given person across multiple publications.
- Another case to consider with anonymity and the internet is the situation where people seek to keep their real-world actions (part of their real-world identity) off the internet. We have discussed that online identities have been increasingly merging with real-world identities through the use of real names, photographs, etc. on Facebook and similar sites. We have also mentioned, in the context of WoW/XBOX among others, some people seeking to maintain fictitious or anonymous online identities. A recent Supreme Court decision, Doe v. Reed, however, considered what happens when real-world acts are publicized, and, importantly for our purposes, publicized on the internet. In that case, petition signers hoping to repeal a Washington state law which granted same-sex couples rights akin to marriage sought to prohibit others from gaining access to the petitions under the First Amendment. Opponents of the petitions intended to put the names and addresses on the petitions online in a searchable format. The Supreme Court held that public disclosure of referendum petition signers and their addresses did not, on its face, present a First Amendment violation.
- What is the effectiveness though of a single, unique online identity?
- Possible case study: Microsoft's XBOX Live online service
- Microsoft's XBOX Live service assigns each user a unique online username that identifies the user across all the games and services offered by XBOX Live. This unified identity allows a user to easily maintain relationships with other users, compare past accomplishments and activities, and effectively establish an online community.
- From a Lessig framework, Microsoft has used 3 of the 4 regulators to motivate people to become attached to one identity and work hard to preserve its reputation.
- Norms: Microsoft allows users to rate other users and assign positive and negative feedback. Other users can easily access this information and determine if this is someone they want to associate with. A user's accomplishments and stats from playing games are tied to his unique identity, making it a valuable indicator of skill and status in the online community.
- Market: In order to acquire an XBOX Live account, a user must pay $60 for a year-long subscription. A subscription only gives a user access to one username and, therefore, one identity. If someone wanted to create another identity or if Microsoft banned a user from XBOX Live, that user would have to pay for another account.
- Architecture: Microsoft has built in the rating system listed above. As a closed platform, Microsoft also has the ability to ban a user from online activities. This would force the user to purchase another account and would prevent that user from associating himself or herself with past accomplishments and reputation. Given this ominous power, one should be extra careful not to do anything to warrant banning.
- Laws: nothing outside of normal tort laws
- Result: this requires more research and testing. However, common wisdom (note: this is from my own personal experience as someone who has played online and has read many opinions about the service) is that communication on XBOX Live is a morass of racist, sexist, and violent comments. Many individuals refuse to communicate online anymore. Despite all of Microsoft's safeguards, there is not an effective deterrent to this type of behavior.
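- To make the norms/market/architecture prongs concrete, here is a minimal, hypothetical Python sketch of the kind of rating-and-ban mechanism described above. It is not Microsoft's actual implementation; the class names, the subscription fee handling, the 0.2 ban threshold, and the way reputation is computed are all assumptions for illustration.

```python
from dataclasses import dataclass, field

SUBSCRIPTION_FEE = 60  # assumed yearly fee (USD); one fee buys exactly one identity

@dataclass
class GamertagProfile:
    """A single persistent identity: reputation and accomplishments are tied to it."""
    gamertag: str
    positive_feedback: int = 0
    negative_feedback: int = 0
    achievements: list = field(default_factory=list)
    banned: bool = False

    def rate(self, positive: bool) -> None:
        # Norms: other users rate this identity; the score is publicly visible.
        if positive:
            self.positive_feedback += 1
        else:
            self.negative_feedback += 1

    def reputation(self) -> float:
        total = self.positive_feedback + self.negative_feedback
        return self.positive_feedback / total if total else 1.0

class PlatformOperator:
    """Architecture: the closed platform alone decides who stays on the service."""
    def __init__(self, ban_threshold: float = 0.2):
        self.ban_threshold = ban_threshold  # purely illustrative cutoff

    def review(self, profile: GamertagProfile) -> None:
        # Market: a ban forces the user to buy a new subscription and start
        # over with no reputation and no past accomplishments attached.
        if profile.reputation() < self.ban_threshold:
            profile.banned = True

# Usage: a heavily downvoted identity loses everything tied to it.
player = GamertagProfile("ExampleTag42", achievements=["Campaign cleared"])
for _ in range(8):
    player.rate(positive=False)
player.rate(positive=True)
PlatformOperator().review(player)
print(player.banned, round(player.reputation(), 2))  # True 0.11
```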
- Are entities like Microsoft hampered by the fact that these online identities, in a sense, don't matter? If I have a unique XBOX Live identity, how am I harmed outside of the XBOX Live community if I act poorly online and am banned? This won't harm my relationships outside of this online realm or ability to get jobs.
- Do we need to make more "real world" ramifications? For example, if law firms required me to list my XBOX Live account name, my Facebook account URL, and my Twitter name (and required me to make all of them public), that would greatly change how I act online. Is the notion of an online identity affected by OCS telling all students seeking law firm positions to make their Facebook profiles as secret as possible?
- How does requiring a persistent identity mesh with the policies behind the law? Minors may have the opportunity to expunge or seal criminal records under the concept of learning from youthful mistakes. However, that is a system completely controlled by the government. Given the nature of the Internet, it may be impossible to offer a similar service, as websites could continue to cite the damage caused by an online user who is easily traceable to a real-world individual. If we could expunge a minor's record, do we want to? For security reasons, we may not want minors to be identified as such online. Therefore, people will treat an online identity that belongs to a minor as if it belonged to an adult. Similar to minors being held to adult standards when participating in adult activities under tort law, should we hold minors to adult standards if they are perceived as adults in the online realm (which would include not allowing their records to be erased)?
- Google CEO Eric Schmidt predicts, according to an August 4, 2010 interview in the WSJ, that "every young person one day will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends' social media sites."
- However, allowing someone upon turning 18 to disown "youthful hijinks" promotes a culture that separates consequences from actions. Instead of eliminating the past, why don't we provide it with more context? As a proposal, why don't we use the architecture/law prongs of the Lessig test to create a structure in which the online activities of an individual, from his first entry into the online world to the last, are stored on a server (this is extremely big brother-ish, but let's just play this out). The user can establish as many identities online as he wants to represent himself to other users, but all of these identities are connected to the user's real-world identity. All of the actions a user takes as a minor are branded as the actions of a minor. The way we would then use this information would be similar to a background or credit check. Employers looking to hire the user can request a report on his Internet activity. They would then receive a report that details his actions. This could either be exhaustive, a general overview, or just whether others have complained about his actions. This report will indicate what the user did and when in his lifetime he did it. Therefore, if the user did something embarrassing or bad, this report will provide more context than a mere Google search.
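- A rough sketch of the "context report" proposal above, assuming (purely for illustration) a central log of a user's actions keyed to a real-world identity and date of birth. No such system exists; every name, field, and detail level here is hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class LoggedAction:
    pseudonym: str          # one of the user's many online identities
    description: str
    when: date
    complaints: int = 0     # how many others flagged the action

@dataclass
class CitizenRecord:
    real_name: str
    date_of_birth: date
    actions: list

    def context_report(self, detail: str = "overview") -> list:
        """Return the user's history with each entry branded 'minor' or 'adult'.

        detail: 'exhaustive' lists everything, 'overview' omits descriptions,
        'complaints' lists only flagged actions. (Levels taken from the
        proposal above; the exact cut between them is an assumption.)
        """
        report = []
        for a in sorted(self.actions, key=lambda x: x.when):
            age = (a.when - self.date_of_birth).days // 365
            status = "minor" if age < 18 else "adult"
            if detail == "complaints" and a.complaints == 0:
                continue
            entry = {"date": a.when.isoformat(), "acted_as": status}
            if detail != "overview":
                entry["description"] = a.description
                entry["complaints"] = a.complaints
            report.append(entry)
        return report

# Usage: an employer's check would show *when in life* something happened.
record = CitizenRecord(
    real_name="Jane Doe",
    date_of_birth=date(1990, 5, 1),
    actions=[
        LoggedAction("xXJaneXx", "flame-war post on a forum", date(2005, 3, 2), complaints=3),
        LoggedAction("j.doe", "published an open-source patch", date(2010, 8, 15)),
    ],
)
print(record.context_report("complaints"))
# e.g. [{'date': '2005-03-02', 'acted_as': 'minor',
#        'description': 'flame-war post on a forum', 'complaints': 3}]
```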
- How do we teach youths just entering the online world how to interact with it and maintain a praiseworthy identity? (WSJ)
2. Facebook Profile Portability
- Let's do more research into data portability as a privacy policy, which relates to the above. Facebook could be a good case study. What options and protections are there to port an online identity / profile, i.e. Facebook messages, friend listings, and wall postings? What can and cannot be permanently deleted?
- It has been argued that Facebook has created a "semipublic" shared space for the exchange of information (NYT). If I send a private message or make a post on my wall, such information would be owned by me. But what about posts made by other people on my wall, or pictures and videos I have been tagged in?
- What happens to these shared data if I close my account?
- Should information uploaded by other people become part of my online identity? If I look at my Facebook wall, I can see that only a minor part of it consists of my own contributions. Would my online identity be the same without other people's contributions?
- And what about my posts on other people's walls? Are those part of my online identity? If we own our personal information, should we also own our posts on other people's walls?
- Let's assume we have complete portability of our online identity, including material submitted by third parties: what are the privacy implications of this from a third party's perspective? Are we OK with the third party's posts and tagged pictures being transferred? Should the third party be notified? Should the third party give express consent?
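- As a thought experiment, here is a minimal sketch of what a portable profile export could look like if it separated a user's own contributions from third-party material and honored express consent. The structure and field names are assumptions for discussion, not Facebook's actual export format or API.

```python
import json

def build_export(profile_owner: str, items: list) -> dict:
    """Split a profile into owner-authored content and third-party content,
    and only include third-party items whose authors have expressly consented."""
    own, third_party, withheld = [], [], []
    for item in items:
        if item["author"] == profile_owner:
            own.append(item)
        elif item.get("author_consented"):
            third_party.append(item)
        else:
            # Third-party material without consent stays behind: is this the
            # right default, or should notification be enough? (Open question.)
            withheld.append({"author": item["author"], "type": item["type"]})
    return {"owner": profile_owner, "own_content": own,
            "third_party_content": third_party, "withheld": withheld}

# Usage: a wall with one of my posts, a friend's wall post, and a tagged photo.
wall = [
    {"type": "status", "author": "me", "text": "Off to class."},
    {"type": "wall_post", "author": "friend_a", "text": "Happy birthday!",
     "author_consented": True},
    {"type": "tagged_photo", "author": "friend_b", "caption": "Group dinner"},
]
print(json.dumps(build_export("me", wall), indent=2))
```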
3. Applying Privacy Policies Worldwide
- What are the challenges social networks face at the international level and in countries other than the US?
- Are privacy policies adopted by social networks enforceable everywhere?
- Consider Facebook's approach: Facebook adheres to the Safe Harbor framework developed between the US and the EU as regards the processing of users' personal information (Safe Harbor). Is this enough to shield Facebook from privacy claims coming from outside the US? What about countries outside the EU?
- Should Facebook be concerned at all about its international responsibility? Consider the case of the Google executives convicted in Italy for a breach of privacy legislation. Assuming the conviction is upheld on appeal, can it ever be enforced? Where are the offices of the company? Where are the servers? Where are the data actually stored and processed?
- More generally, what types of information created by users are 'personal data' about which they have/should have a reasonable expectation of privacy, and which should be subject to regulation?
- The line between personal information about which people have a reasonable expectation of privacy and information that is not personal and need not carry privacy-related restrictions can be a difficult one to define. For example, is information about how a driver drives a car, recorded on an in-car computer and potentially transmitted to a car rental company or the car manufacturer, 'personal' information that is/should be covered by data protection laws? What about information that is picked up by Google when taking images for Google Street View (e.g. IP addresses of neighbouring properties)? (See discussion in Information Commissioner's Office (UK), Statement on Google Street View, August 2010.) The problem is that in many cases this information on its own does not identify a particular individual, but it could be used in combination with other information to identify people. Yet when we use the internet, so much information is created that it may not all be information that should be subject to privacy regulation. See discussion of this problem in a New Zealand context in Review of the Privacy Act 1993, NZLC 17, Wellington, 2010 (Australia and the UK are considering similar issues).
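- A toy illustration of the 'combination' problem described above: neither dataset below identifies anyone on its own, but joining them does. All of the data, IP addresses, and field names are made up.

```python
# Hypothetical server log: IP addresses plus coarse behaviour, no names.
server_log = [
    {"ip": "203.0.113.7", "pages": ["mortgage-calculator", "job-ads"]},
    {"ip": "198.51.100.4", "pages": ["garden-tips"]},
]

# Hypothetical auxiliary dataset (e.g. leaked or publicly scraped):
# maps the same IPs to a street address or account holder.
subscriber_records = {
    "203.0.113.7": "occupant of 12 Example Street",
    "198.51.100.4": "occupant of 34 Sample Road",
}

# The join: "non-personal" browsing data suddenly describes an identifiable person.
for entry in server_log:
    who = subscriber_records.get(entry["ip"])
    if who:
        print(f"{who} viewed {entry['pages']}")
```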
4. Cyber-security
- Cyber-space was first used by script-kiddies as a playground for web defacement and the like, then discovered by criminals as a new means to expand their activity, followed by transnational crime syndicates, followed by hackers with a political agenda ("hacktivists"), until eventually governments also discovered cyber-space. Since the DDoS attacks on Estonian websites in 2007 pushed the issue into NATO circles, cyber-security has been increasingly in the headlines. A number of questions emerge from this:
- Real threat vs. threat inflation. How much of what is written in newspaper articles and books is much ado about nothing, and what can be considered a real risk? If there is a risk, is there also a threat? What determines what constitutes a threat? Richard Clarke's book "Cyber War" paints a gloomy picture. Self-interest by an author working as a cyber-security consultant, or is there more to it?
- Cyber-crime <-> cyber-espionage <-> cyber-hacktivism <-> cyber-terrorism <-> cyber-war (cyber-intrastate war/cyber-interstate war). Costs today? Costs tomorrow? Technical solutions? Policy/legal solutions? National/international level? State vs non-state actors? Public/private?
- Cyber-war vs. cyber-peace. Why does much of the literature use language such as "cyber-war", "cyber-attack", etc. and not language such as "cyber-peace" or "cyber-cooperation"?
- Terminology. What is the difference between a cyber-hacktivist and a cyber-terrorist? What constitutes a "cyber-attack"? Given cyber-space's virtual borderlessness, is it appropriate to speak of defense/offense or active/passive (e.g. the Outer Space convention)? Is cyber-space a territory like the High Seas, Antarctica, or Outer Space? Or a new domain after land (army), sea (navy), and air (air force)? Is cyberspace a "cultural heritage of mankind"? What is the relationship between the virtual and the kinetic?
- Civilian vs. military. How is cyber-security changing the relationship between the civilian and the military? The DoD is responsible for defending .mil, while DHS is responsible for defending .gov. What about the other domains? The German DoD is responsible for defending the German military network, while the Ministry of the Interior is responsible for government websites. How do civilian Ministries of the Interior, with their police forces, respond to a cyber-attack originating outside the country, when an international attack would usually be the responsibility of the military branch of a democratic government? What are the lines of authority, e.g. for the planting of logic bombs or trapdoors?
- Role of private actors. How are ISPs, hardware and software companies integrated into the discussions/policy-/law-making process? How much power do they have? Allegiance to profit? Allegiance to country? Allegiance to open cyber-space? Are there public private partnerships? Do they work? What are their strengths/weaknesses?
- Role of hackers. In the early days, the battle was government vs. hacker, or state vs. hacker guided by a hacker ethic. This was before the internet expanded around the globe, and it fit the Western tradition of state vs. individual. After the expansion, how has this relationship changed? Is there a transnational hacker culture, or are hackers of country X more closely aligned with the government of country X and hackers of country Y with the government of country Y, rather than hackers of X and Y being aligned against the governments of X and Y?
- Given the attribution problem and the transition problem (virtual vs. physical world), how much security is necessary and how much generativity is possible? What can be done to reduce the risk? What can be done to reduce the threat? An international convention? A code of conduct among major companies? International confidence-building measures?
- Enforcement. What could an international regime or agency that solves the security dilemma look like? A cyber-IAEA?
Group Two:
- property
- online things acquiring IRL value
- what happens to digital possessions after death?
- who has access to your accounts (fb, twit, gmail, etc) after death
- (TOS after death)
- first sale doctrine in software
- first amendment rights with online comms (going through someone’s infrastructure)
- Speech, Censorship, Statistics. Should we be concerned with ISPs' and website owners' ability to aggregate and control information and speech? It seems that at least Google thinks that Internet users may be concerned with this topic. Google recently announced the "Transparency Report," which (incompletely) tracks usage statistics by country, as well as Google's removal of online material at governments' request (Google). How should corporations manage such governmental requests? What rules should they apply? How should they decide on a set of rules, and whether those rules are catholic or case-specific? What benefits are realized by providing this information publicly, particularly the tracking information? How can users or other entities use this information?
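- One concrete answer to "how can users use this information" is simple aggregation and comparison across countries or over time. The sketch below assumes a hypothetical CSV of removal requests with country, period, and requests columns; it does not reflect the actual schema or any real figures published by Google.

```python
import csv
from collections import defaultdict

def removal_requests_by_country(path: str) -> dict:
    """Sum government removal requests per country from a hypothetical CSV
    with columns: country, period, requests."""
    totals = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["country"]] += int(row["requests"])
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

# Usage (with a made-up file): which governments ask for the most removals?
# print(removal_requests_by_country("removal_requests.csv"))
```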
Group Three:
- liability for security breaches (negligent design/management)
- wikileaks! (jurisdictional problems, prosecution) (how does filtering affect wikileaks?)
- transparency on internet services (google: how does it work?)
1. Liability for Security Breaches and Flaws
- Software insecurity:
- Security guru Bruce Schneier has argued that imposing tort liability is desirable as a method of forcing vendors to internalize the costs of insecure software. See Liability Changes Everything and Computer Security and Liability.
- How convincing is his suggestion? What sorts of costs would this impose on software companies? Would such a rule drive small players out of the security market? Would individual contributors to open source projects potentially face liability?
- Law professor Michael D. Scott makes a similar argument, and notes that Sarbanes-Oxley requires publicly traded companies to certify that their systems are secure, while imposing no obligations on the vendors who actually provide the software. See Tort Liability for Vendors of Insecure Software: Has the Time Finally Come?
- Database insecurity:
- Summaries of a few recent cases that address database breaches: Developments in Data Breach Liability.
- Law professor Vincent R. Johnson argues that tort liability is an appropriate mechanism for creating incentives and managing risks associated with cybersecurity: Cybersecurity, Identity Theft, and the Limits of Tort Liability. Some issues he raises:
- Duty to protect information: California's Security Breach Information Act imposes such a duty. The obligations that Gramm-Leach-Bliley imposes on financial institutions arguably support liability on a theory of negligence per se.
- Can market forces adequately address insufficient database security?
- "Duty to inform of security breaches": This could be analogous to a failure to warn theory of negligence liability.
- The economic loss rule seems to impose a significant bar to recovery. What about requiring the database owner to pay for security monitoring? A risk-creation theory might support this approach.
--98.210.154.54 23:13, 21 September 2010 (UTC)Davis
Group Four:
- “to what extent is our judgment about tech related to the ‘coolness’ of the tech itself?”
- User Satisfaction versus Company Profitability. Closed platforms like the iPhone present significant benefits at a cost. It may be helpful to frame benefits and costs in terms of user satisfaction and company profitability, rather than any particular feature of the device using the platform. We can, of course, ask about particular features that create or diminish user satisfaction or company profitability, but we won't talk about the features as if they confer some independent benefit. This is just a way of conceptualizing when society will tolerate certain technological constraints.
- The iPhone. Steve Jobs has a vision for the iPhone, and that includes regulating a large portion of what goes on and can go on the phone. Let's take a look at how the user satisfaction/company profitability model applies.
- Profitability. The iPhone's closed platform provides at least two valuable and related benefits. First, it allows Apple to keep its operating environment "safe." Without unauthorized third-party applications, i.e., with all apps being Apple-approved, there is less risk of the introduction and dissemination of malware. This reduces costs for Apple, which doesn't have to respond to consumers whose phones have been destroyed by viruses. A second, related benefit is branding. Because Apple can keep its system closed, it can design the environment in which it operates and market that environment as a product. This design means Apple can extract profits from third-party apps by conditioning access upon, among other things, payment. It also makes the company more profitable because Apple can advertise and promote itself as a "safe" place that operates seamlessly. Nevertheless, this raises issues about how far Apple will regulate its platform. Will it simply condition access by third-party applications, or will it go further and monitor its users? If Jobs is concerned that users will upload pornographic pictures on his phone, will the future iPhone be programmed to automatically identify and remove or block such photos? Does Jobs' vision relate to profitability, or simply to personal preference? (This last question will be relevant to considering user satisfaction.)
- User Satisfaction. For most users, the iPhone's closed platform doesn't seem to cause any immediate problems. There are plenty of cool apps that individuals can download and use. The iPhone certainly scores high on aesthetics, even if some of its features are low on performance. Users tend to love aesthetics and have overlooked the fact that, for instance, the iPhone can run only one program at a time. The closed platform's safety also provides a benefit to users, who don't have to worry about protecting their phones from malware. So far, user satisfaction is high. The balance between user satisfaction and profitability seems to be in equipoise, for now. The question for the future is whether Apple will close off more territory, and whether its current sectioning will stifle the actions of users in the future. As to the former, Apple might meet substantial resistance from the public if it begins regulating their private behavior more explicitly. As to the latter, the future is hard to predict. If users become more adept with their phones or demand new features that the closed system stifles, Apple may have to modify just how closed its system should be. Of course, it may respond with an even "cooler" design, thereby satisfying users sufficiently to distract attention from the new (or old) restrictions that remain in place. If consumers detect that Jobs' personal preferences are dictating the ways they can use their phones, their dissatisfaction may win the day.
- [Please add another example.]
- online transaction speed: feature or bug?
- lack of humans in online transactions: feature or bug?
- Computers and people gone wild! (please don’t google this)
- Should everything be open-source?
- A closed platform means that things can be innovative only within a predetermined limit; that is, we can only work within the realm of the expected (e.g., apps for the iPhone). But some of the greatest innovations have changed the paradigm for innovation completely, the obvious example being the Internet. The cost of closed platforms is that we do not even know what we're missing -- are security and cool apps worth it?
- Alternatively, if everything were open-source, would we face some variant of the tragedy of the commons? (Tragedy of the commons -- In ye olde England, there was a public commons where everyone could let their cattle graze. But because it was a public space, no one took responsibility for it, so all the grass ran out and the place was a mess. Then the commons was privatized, and lo and behold, private ownership meant that the owner now had an investment and interest in the land, so the land became nice and green again. Even if the owner now charged people to let their cattle graze there. [1]) Or is there something different about the ethos of the Internet, or about cyberspace as a space, that makes the tragedy of the commons a non-issue?