Class 2
Group One:
-Identity revealed beyond your comfort zone (ex. WOW message boards: forced real identity). -Can online identity be protected as a possession? Who owns profile pages? -Data portability as a privacy policy (who owns shared data?)(single sign-in)(facebook Connect)(OpenID)(persistent identity online). -Cyberbullies, multiple identities online. -How/can IRL ethics/morality be imposed in online spaces? Should they be?
1. The Right to Speak Anonymously
- It would seem that the easiest way to impose IRL ethics/morality in online spaces is to make our online identities tied more closely to our 'real' identities. But at the extreme, with everyone having a single, unique online identity tied to something like Social Security numbers, we would be sacrificing our right to speak and act anonymously online. Is there a happy medium?
- Then again, it's likely that with cyberbullying, for example, the kids being bullied know exactly who their antagonists are, meaning that anonymity is not at the heart of the problem. So what is? Is it a lack of consequences? Or consequences that, because they are in 'real' life, are insufficiently tied to online behavior?
- In some cases, though, such as the highly publicized Megan Meier case, it was not known who the cyberbully was. In fact, in that case, the "real world" bully remained unknown in large part because the profile was of a fictitious person, created to mask the real identities of the bullies.
- Perhaps we should split it into two categories -- "real name" bullying and "masked" bullying. "Real name" bullying usually takes place on sites like Facebook and MySpace (the Meier case notwithstanding) and over IM, and may involve a group of middle- or high-schoolers who have taken to picking on one kid online. The total lack of adult or authoritative supervision (i.e., no threat of a teacher walking around the hallway corner to see the taunting) could be what fuels this. Thus, some mode of authoritative supervision could calm this type of bullying. On the other hand, "masked" bullying takes place in larger arenas, such as JuicyCampus, CollegeACB (see http://www.collegeacb.com/ and a corresponding article: http://ksusentinel.com/arts-living/students-become-source-of-anonymous-bullying/ ), or perhaps even message boards like World of Warcraft, where the bullies are anonymous posters or use pseudonyms not connected to their legal names. Here, it may be harder to have any notion of supervision, because the sites themselves facilitate and promote anonymity and likely will invoke free speech claims to avoid working with regulators. -- JPaul (Jenny)
- Is the Right to Speak Anonymously (or even the Right to Freedom of Expression) harmed by forcing online users to sign in with "real identities" (achieved by requiring verified credit-card billing names or, in less strict cases, Facebook Connect) to leave comments on newspaper websites, rather than leaving them anonymously as was common practice in the early days of the internet?
- The analogy to traditional 'letters to the editor' in print editions does not hold: it was much more difficult, if not impossible, to search all the comments a given person had submitted to newspapers. Today, such a search would allow anyone to quickly pull together a portfolio of comments left by a given person across multiple publications.
- Humorous example of this: "Get to Know an Internet Commentator" by Kevin Collier.
- Another case to consider with anonymity and the internet is the situation where people seek to keep their real-world actions (part of their real-world identity) off the internet. We have discussed that online identities have been increasingly merging with real-world identities through the use of real names, photographs, etc. on Facebook and similar sites. We have also mentioned, in the context of WoW/XBOX among others, some people seeking to maintain fictitious or anonymous online identities. A recent Supreme Court decision in Doe v. Reed, however, considered what happens when real-world acts are publicized, and, importantly for our purposes, publicized on the internet. In that case, petition signers hoping to repeal a Washington state law which granted same-sex couples rights akin to marriage sought to prohibit others from gaining access to the petitions under the First Amendment. Opponents of the petitions intended to put the names and addresses on the petitions online in a searchable format. The Supreme Court held that public disclosure of referendum petition signers and their addresses, on its face, did not present a First Amendment violation.
- If there is a right to speak anonymously, what are its limits? Are third parties required to reveal real identities when necessary to name a party to a lawsuit? What showing of harm to another person might be required? Under what circumstances are subpoenas of internet service providers legal? It seems to me that the First Amendment justifications for anonymity are lacking in cases that cause harm to others, like cyberbullying, such that any right is extinguished. How absolute is this right?
- What is the effectiveness though of a single, unique online identity?
- Possible case study: Microsoft's XBOX Live online service
- Microsoft's XBOX Live services assign users a unique online username that identifies the user across all the games and services offered by XBOX Live. This unified identity allows a user to easily maintain relationships with other users, compare past accomplishments and activities, and effectively establish an online community.
- From a Lessig framework, Microsoft has used 3 of the 4 regulators to motivate people to become attached to one identity and work hard to preserve its reputation.
- Norms: Microsoft allows users to rate other users and assign positive and negative feedback. Other users can easily access this information and determine if this is someone they want to associate with. A user's accomplishments and stats from playing games are tied to his unique identity, making it a valuable indicator of skill and status in the online community.
- Market: In order to acquire an XBOX Live account, a user must pay $60 for a year-long subscription. A subscription only gives a user access to one username and, therefore, one identity. If someone wanted to create another identity or if Microsoft banned a user from XBOX Live, that user would have to pay for another account.
- Architecture: Microsoft has built in the rating system listed above. As a closed platform, Microsoft also has the ability to ban a user from online activities. This would force the user to purchase another account and would prevent that user from associating himself or herself with past accomplishments and reputation. Given this ominous power, one should be extra careful not to do anything to warrant banning.
- Laws: nothing outside of normal tort laws
- Result: this requires more research and testing. However, common wisdom (note: this is from my own personal experience as someone who has played online and has read many opinions about the service) is that communication on XBOX Live is a morass of racist, sexist, and violent comments. Many individuals refuse to communicate online anymore. Despite all of Microsoft's safeguards, there is no effective deterrent to this type of behavior.
- Are entities like Microsoft hampered by the fact that these online identities, in a sense, don't matter? If I have a unique XBOX Live identity, how am I harmed outside of the XBOX Live community if I act poorly online and am banned? This won't harm my relationships outside of this online realm or ability to get jobs.
- Do we need to make more "real world" ramifications? For example, if law firms required me to list my XBOX Live account name, my Facebook account URL, and my Twitter name (and required me to make all of them public), that would greatly change how I act online. Is the notion of an online identity affected by OCS telling all students seeking law firm positions to make their Facebook profiles as secret as possible?
- How does requiring a persistent identity mesh with the policies behind law? Minors may have the opportunity to expunge or seal criminal records under the concept of learning from youthful mistakes. However, this is a system completely controlled by the government. Given the nature of the Internet, it may be impossible to offer a similar service, as websites could continue to cite the damages caused by an online user who is easily traceable to a real-world individual. If we could expunge a minor's record, do we want to? For security reasons, we may not want minors to be identified as such online. Therefore, people will treat an online identity that belongs to a minor as if it belonged to an adult. Similar to a minor being held to adult standards when participating in adult activities under tort law, should we hold minors to adult standards if they are perceived as adults in the online realm (which would include not allowing their records to be erased)?
- Google CEO Eric Schmidt predicts, according to an August 4, 2010 interview in the WSJ, that "every young person one day will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends' social media sites."
- However, allowing someone upon turning 18 to disown "youthful hijinks" promotes a culture that separates consequences from actions. Instead of eliminating the past, why don't we provide it with more context? As a proposal, why don't we use the architecture/law prongs of the Lessig test to create a structure in which the online activities of an individual, from his first entry into the online world to the last, are stored on a server (this is extremely big-brother-ish, but let's just play this out). The user can establish as many identities online as he wants to represent himself to other users, but all of these identities are connected to the user's real-world identity. All of the actions a user takes as a minor are branded as the actions of a minor. We would then use this information in a way similar to a background or credit check. Employers looking to hire the user can request a report on his Internet activity. They would then receive a report that details his actions. This could be exhaustive, a general overview, or just a record of whether others have complained about his actions. The report would indicate what the user did and when in his lifetime he did it. Therefore, if the user did something embarrassing or bad, this report would provide more context than a mere Google search.
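- The lifetime-log proposal above can be sketched in miniature. This is a deliberately big-brother-ish toy, not a real system: the class name, the record fields, the age-18 cutoff, and the report format are all illustrative assumptions.

```python
from datetime import date

class ActivityLog:
    """Toy lifetime log: actions are tied to a real-world identity,
    actions taken as a minor are branded as such."""

    def __init__(self, birth_date):
        self.birth_date = birth_date
        self.entries = []

    def record(self, when, action, complaints=0):
        # Compute age at the time of the action to brand minor activity.
        age = when.year - self.birth_date.year - (
            (when.month, when.day) < (self.birth_date.month, self.birth_date.day))
        self.entries.append({"date": when, "action": action,
                             "as_minor": age < 18, "complaints": complaints})

    def report(self, complaints_only=False):
        """Background-check-style report: full overview, or only the
        entries that drew complaints."""
        entries = [e for e in self.entries
                   if not complaints_only or e["complaints"] > 0]
        return [f"{e['date']} {'[minor] ' if e['as_minor'] else ''}{e['action']}"
                for e in entries]

log = ActivityLog(birth_date=date(1995, 6, 1))
log.record(date(2010, 3, 1), "posted embarrassing forum rant", complaints=2)
log.record(date(2015, 3, 1), "published professional blog post")
print(log.report(complaints_only=True))
```

The point of the sketch is the context the report carries: the same Google-visible incident shows up here with a date and a "[minor]" brand rather than as an undated search hit.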
- Should the notion of an online identity include search results? If so, how should this information be controlled? Wikis indicate when an entry is in dispute. Could search engines provide a similar service for websites attacking individuals? Is it technically possible to screen for negative publicity and separate out these websites in a search?
- In an article on his blog, Professor Zittrain discusses the concept of reputation bankruptcy.
- I find the tie between the solution to identity issues and bankruptcy to be troubling. First, one of the major problems with online reputation is not a completely toxic reputation, but a singular occasion of poor discretion or the malicious agendas of a few individuals. In the AutoAdmit case, users of a law school message board maliciously attacked a Yale Law student. These negative posts were among the first page of hits in a Google search of the student's name. The student received no offers from law firms, and although no causal connection was proved, these negative posts could have been a reason why she did not receive offers. In cases like these, individuals need selective erasure of third-party attacks, or of a past mistake that unfairly restricts a promising future. However, Professor Zittrain suggests that reputation bankruptcy could involve wiping everything clean, "throwing out the good along with the bad." The fresh start of bankruptcy is a dramatic change and should be reserved for dire situations. For many cases, it may be a matter of using a sledgehammer to kill a fly.
- Another concern is similar to the one I proposed to the idea of "disown[ing] youthful hijinks." If someone can declare reputation bankruptcy, that individual is saved but the communities he or she was a part of are still scarred by that individual's actions. If, in the aforementioned XBOX Live case study, a user could declare reputational bankruptcy, that would do nothing to help XBOX Live's negative reputation and people's unwillingness to engage in that community. Under bankruptcy, creditors are able to receive some amount of compensation for the debts owed to them. What kind of compensation can online communities receive in reputation bankruptcy?
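- The contrast drawn above between full reputation bankruptcy and selective erasure can be made concrete. A minimal sketch, assuming a record is simply flagged as a third-party attack or not (the record format and flag name are invented for illustration):

```python
# Toy sketch: Zittrain-style reputation bankruptcy wipes everything,
# while selective erasure removes only the entries that unfairly
# restrict the person (e.g. AutoAdmit-style third-party attacks).

def declare_bankruptcy(records):
    """Full wipe: throws out the good along with the bad."""
    return []

def selective_erasure(records, is_unfair):
    """Remove only the records a policy deems unfair attacks."""
    return [r for r in records if not is_unfair(r)]

records = [
    {"text": "won moot court", "third_party_attack": False},
    {"text": "malicious message-board post about her", "third_party_attack": True},
    {"text": "published student note", "third_party_attack": False},
]

after_bankruptcy = declare_bankruptcy(records)
after_erasure = selective_erasure(records, lambda r: r["third_party_attack"])
print(len(after_bankruptcy), len(after_erasure))  # 0 2
```

The sledgehammer point falls out directly: bankruptcy destroys the two legitimate accomplishments along with the attack, while selective erasure preserves them. The hard part, of course, is who decides what `is_unfair` means.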
- How do we teach youths just entering the online world how to interact with it and maintain a praiseworthy identity? (WSJ)
2(a). Facebook Profile Portability
Let's do more research into data portability as a privacy policy, which relates to the above. Facebook could be a good case study. What options and protections are there to port an online identity / profile, i.e. Facebook messages, friend listings, and wall postings? What can and cannot be permanently deleted?
- It has been argued that Facebook has created a "semipublic" shared space for the exchange of information. (NYT) If I send a private message or make a post on my wall, such information would be owned by me. But what about posts made by other people on my wall, or pictures and videos I have been tagged in?
- What happens to these shared data if I close my account?
- Should information uploaded by other people become part of my online identity? If I look at my facebook wall I can see that only a minor part of it is made of my own contribution. Would my online identity be the same without other people's contributions?
- And what about my posts on other people's wall? Are those part of my online identity? If we own our personal information, should we own also our posts on other people's walls?
- Let's assume we have complete portability of our online identity, including material submitted by third parties: what are the privacy implications of this from a third party's perspective? Are we OK with the third party's posts and tagged pictures being transferred? Should the third party be notified? Should the third party give express consent?
2(b). General Online Identity Portability
While 2(a) focuses on Facebook, how about tackling the more general question about our online identities:
- How should online identity be regulated?
- Lessig's framework, incl. benefit analysis of potential laws, norms, market mechanisms?
- re laws: 3 privacy laws passed at state level, federal privacy law potentially coming in next Congress - what will (should?) it include regarding online identity?
- re norms: self-regulation: could talk about OAuth, OpenID here and what role they could play
- re market place: thinking about Google's leaked internal communication from 2008 about creating a marketplace for privacy/personal data? See: [[1]]
- What frameworks / initiatives currently exist? Who has, or should have, control: government vs. the private sector (could Facebook be the personal online ID provider)?
- NSTIC (National Strategy for Trusted Identities in Cyberspace) [[2]], first draft strategy document published in June 2010, available for download on homepage
- Goal of initiative: Identify solutions ensuring (1) Privacy, (2) Security, (3) Interoperability, (4) Ease-of-Use
- Key roles and perspectives to analyze: Individuals, Identity Providers, Attribute Providers, Relying Party
- (Source: NSTIC Presentation, Trusted Identities Panel, OTA Online Security and Cybersecurity Forum, Washington D.C., September 24, 2010) --Reinsberg 19:33, 26 September 2010 (UTC)
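- The division of labor among the NSTIC roles above, and the delegation idea behind OAuth/OpenID-style self-regulation, can be sketched in a few lines. This is an illustrative toy, not the actual OpenID or OAuth protocol: the shared-key HMAC scheme, the function names, and the assertion format are all assumptions.

```python
import hashlib
import hmac
import json

# Toy federated-identity sketch: an identity provider (IdP) vouches for a
# user and her attributes to a relying party (RP), so the RP never sees
# the user's credentials. Key distribution is assumed out-of-band.
IDP_SECRET = b"idp-signing-key"

def idp_issue_assertion(user_id, attributes):
    """Identity/attribute provider signs a claim about the user."""
    payload = json.dumps({"sub": user_id, "attrs": attributes}, sort_keys=True)
    sig = hmac.new(IDP_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def rp_verify_assertion(assertion):
    """Relying party checks the signature before trusting the claims."""
    expected = hmac.new(IDP_SECRET, assertion["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, assertion["sig"]):
        return None  # reject forged or tampered assertions
    return json.loads(assertion["payload"])

token = idp_issue_assertion("alice", {"over_18": True})
claims = rp_verify_assertion(token)
print(claims["sub"], claims["attrs"]["over_18"])  # alice True
```

Note how the NSTIC goals map onto the sketch: privacy (the RP learns only asserted attributes like `over_18`, not the underlying documents), security (tampered assertions verify to nothing), and interoperability (any RP that trusts the IdP can consume the same assertion).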
3. Applying Privacy Policies Worldwide
What are the challenges social networks face at the international level and in countries other than the US?
- Are privacy policies adopted by social networks enforceable everywhere?
- Consider Facebook's approach: Facebook adheres to the Safe Harbor framework developed between the US and the EU as regards the processing of users' personal information. (Safe Harbor) Is this enough to shield Facebook from privacy claims coming from outside the US? What about countries outside the EU?
- Should Facebook be concerned at all about its international responsibility? Consider the case of the Google executives convicted in Italy for a breach of privacy legislation. Assuming the conviction is upheld on appeal, can it ever be enforced? Where are the offices of the company? Where are the servers? Where are the data actually stored and processed?
- More generally, what types of information created by users are 'personal data' about which they have/should have a reasonable expectation of privacy and which should be subject to regulation?
- The line between personal information about which people have a reasonable expectation of privacy and information that is not personal and need not have restrictions relating to privacy can be a difficult one to draw. For example, is information about how a driver drives a car, recorded on an in-car computer and potentially transmitted to a car rental company or the car manufacturer, 'personal' information that is/should be covered by data protection laws? What about information that is picked up by Google when taking images for Google Street View (e.g. IP addresses of neighbouring properties)? (See discussion in Information Commissioner's Office (UK), Statement on Google Street View, August 2010.) The problem is that in many cases this information on its own does not identify a particular individual, but it could be used in combination with other information to identify people. Yet when we use the internet so much information is created, and it may not all be information that should be subject to privacy regulation. See discussion of this problem in a New Zealand context in Review of the Privacy Act 1993, NZLC 17, Wellington, 2010 (Australia and the UK are considering similar issues).
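- The combination problem described above is easy to demonstrate. A minimal sketch of a linkage attack: records that identify no one on their own become personal data the moment they share a few quasi-identifiers with a named dataset. Both datasets and all field names here are invented for illustration.

```python
# "Non-personal" driving records: no names, just a zip code and birth year.
anonymous_records = [
    {"zip": "02138", "birth_year": 1980, "driving": "frequent hard braking"},
    {"zip": "02139", "birth_year": 1975, "driving": "mostly night driving"},
]

# A public directory that does carry names alongside the same attributes.
public_directory = [
    {"name": "A. Smith", "zip": "02138", "birth_year": 1980},
    {"name": "B. Jones", "zip": "02140", "birth_year": 1975},
]

def link(anon, directory):
    """Join the two datasets on shared quasi-identifiers,
    re-identifying the 'anonymous' records."""
    matches = []
    for a in anon:
        for p in directory:
            if (a["zip"], a["birth_year"]) == (p["zip"], p["birth_year"]):
                matches.append((p["name"], a["driving"]))
    return matches

print(link(anonymous_records, public_directory))
# [('A. Smith', 'frequent hard braking')]
```

Neither dataset is sensitive in isolation; the join is what produces the privacy harm, which is why regulating "information that identifies a person" is harder than it sounds.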
- Who/what are the main sources of privacy invasion that people anticipate? Is it the same privacy invasion if it comes from a company (Google), a government looking for potential terrorist activity, or just an acquaintance who likes to facebook-stalk others?
- Do we hold private companies to a higher standard if we know that they have the means to protect our privacy more? Should such companies be held to the same standard in every country? If not, aren't there problems with information that is accessible in some countries but not in others?
4. Cyber-security
- Cyber-space was first used by script kiddies as a playground for web defacement and the like, then discovered by criminals as a new means to expand their activity, followed by transnational crime syndicates, then by hackers with a political agenda ("hacktivists"), until eventually governments also discovered cyber-space. Since the DDoS attacks on Estonian websites in 2007 pushed the issue into NATO circles, cyber-security has been increasingly in the headlines. A number of questions emerge from this:
- Real threat vs. threat inflation. How much of what is written in newspaper articles and books is much ado about nothing, and what can be considered a real risk? If there is a risk, is there also a threat? What determines what constitutes a threat? Richard Clarke's book "Cyber War" paints a gloomy picture. Is this self-interest by an author working as a cyber-security consultant, or is there more to it?
- Cyber-crime <-> cyber-espionage <-> cyber-hacktivism <-> cyber-terrorism <-> cyber-war (cyber-intrastate war/cyber-interstate war). Costs today? Costs tomorrow? Technical solutions? Policy/legal solutions? National/international level? State vs non-state actors? Public/private?
- Cyber-war vs. cyber-peace. Why does much of the literature use language such as "cyber-war," "cyber-attack," etc., and not language such as "cyber-peace" or "cyber-cooperation"?
- Terminology. What is the difference between a cyber-hacktivist and a cyber-terrorist? What constitutes a "cyber-attack"? Given cyber-space's virtual borderlessness, is it appropriate to speak of defense/offense or active/passive (e.g. the Outer Space convention)? Is cyber-space a territory like the High Seas, Antarctica, or Outer Space? Or a new domain after land (army), sea (navy), and air (air force)? Is cyberspace a "common heritage of mankind"? What is the relationship between the virtual and the kinetic?
- Civilian vs. military. How is cyber-security changing the relationship between the civilian and the military? DoD is responsible for defending .mil, DHS for defending .gov. What about the other domains? The German DoD is responsible for defending the German military network, the Ministry of the Interior for the government websites. How do civilian Ministries of the Interior, with their police forces, respond to a cyber-attack from outside the country, when an international attack is usually the responsibility of the military branch of a democratic government? What are the lines of authority, e.g. for the planting of logic bombs or trapdoors?
- What would the authority of the military be in addressing attacks on civilian networks, if any? Does the government have a role or responsibility to address non-government networks? Structurally and legally, how would implementing this role be done? Would there be any problems of privacy protection and government over-interference?
- If the government is going to take a role in strengthening private network security, which networks should it protect first? Who should be involved in the oversight of this protection--military, civilian gov. actors, private actors?
- Would these actions fall under the Cyber Command?
- The New York Times reports that "the new commander of the military’s cyberwarfare operations is advocating the creation of a separate, secure computer network to protect civilian government agencies and critical industries like the nation’s power grid against attacks mounted over the Internet." [3]
- Are "secure zones" a viable solution to protecting critical infrastructure? Is this an oversimplified vision of secure systems that assumes cybersecurity is analogous to real space? What are the drawbacks to this approach? The article notes that the cyberwarfare commander did not demarcate the line between public and restricted government access.
- Role of private actors. How are ISPs, hardware and software companies integrated into the discussions/policy-/law-making process? How much power do they have? Allegiance to profit? Allegiance to country? Allegiance to open cyber-space? Are there public private partnerships? Do they work? What are their strengths/weaknesses?
- Role of hackers. In the early days, the battle was government vs. hacker, or state vs. hacker guided by a hacker ethic. This was before the internet expanded around the globe, and in the Western tradition of state vs. individual. After the expansion, how has this relationship changed? Is there a transnational hacker culture, or are hackers of country X more closely aligned with the government of country X, and hackers of country Y with the government of country Y, rather than hackers of X and Y aligned against the governments of X and Y?
- With the attribution problem and the transition problem (virtual-physical world) how much security is necessary and how much generativity possible? What can be done to reduce the risk? What can be done to reduce the threat? International convention? Code of conduct among major companies? International confidence-building measures?
- Enforcement. What could an international regime/agency that solves the security dilemma look like? A cyber-IAEA? Or could an existing regime (such as NATO) be more effective?
- What responsibility would countries have for hackers / attacks originating within their borders? How could one separate attacks by a private individual (which a country could disclaim responsibility for) from a government-sponsored initiative?
- What are the main sources of the threats that the US and other countries are anticipating? Are they state-based? How, if at all, would this affect our relationships and foreign relations with those states?
- What kind of retaliation would be appropriate, once an attack has been discovered? If necessary, would a country engage in counter-cyber attacks, or more traditional retaliation such as economic sanctions or even military action?
- Does the US have special responsibilities for a global safe and free internet? Should it take the lead in preventing attacks in other countries that are less equipped to protect themselves?
Group Two:
-property -online things acquiring IRL value -what happens to digital possessions after death? -who has access to your accounts (fb, twit, gmail, etc) after death -(TOS after death) -first sale doctrine in software -first amendment rights with online comms (going through someone’s infrastructure)
1. Death and digital accounts
- Ars Technica article on how Facebook, MySpace, Twitter, and Google subsidiaries treat death.
- Whole set of websites attempting to address this issue, and allow some users autonomy to make decisions about what happens to their accounts after they are unable to manage them anymore:
- Legacy Locker "is a safe, secure repository for your vital digital property that lets you grant access to online assets for friends and loved ones in the event of loss, death, or disability."
- My Webwill "allows you to make decisions about your online life after death. You can choose to deactivate, change or transfer your accounts, like Twitter, Facebook or your blog. At the time of your death we perform your wishes."
- But given the transience and uncertain future of so many of these companies, can someone really interested in this trust that the sites will still be around? Do you need to keep updating your arrangements for every new service you sign up for, which, over the course of several years, will presumably include several different sites? Should the solution be more centralized (say, a service provided by the government, which has a feeling of more permanence) or more decentralized (say, a last will and testament that you can basically draft on your own)?
2. Speech and Censorship
- Speech, Censorship, Statistics. Should we be concerned about ISPs' and website owners' ability to aggregate and control information and speech? It seems that at least Google thinks Internet users may be concerned with this topic. Google recently announced the "Transparency Report," which (incompletely) tracks usage statistics by country, as well as Google's removal of online material at governments' request. (Google) How should corporations manage such governmental requests? What rules should they apply? How should they decide on a set of rules, and whether those rules are universal or case-specific? What benefits are realized by providing this information publicly--particularly the tracking information? How can users or other entities use this information?
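- One answer to how users might use such data: once requests are published in a structured form, anyone can aggregate them. A minimal sketch with invented data (the record format and counts are assumptions, not figures from the actual Transparency Report):

```python
from collections import Counter

# Toy sketch: tally government removal requests per country from a
# structured log like the one a transparency report makes public.
requests = [
    {"country": "US", "kind": "removal"},
    {"country": "DE", "kind": "removal"},
    {"country": "US", "kind": "user-data"},
    {"country": "US", "kind": "removal"},
]

removals_by_country = Counter(
    r["country"] for r in requests if r["kind"] == "removal")
print(removals_by_country.most_common())  # [('US', 2), ('DE', 1)]
```

Trivial as it is, this is the leverage transparency provides: third parties can independently rank countries by censorship pressure, track trends over time, and audit whether the publishing company's case-by-case decisions are in fact consistent.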
Group Three:
-liability for security breaches (negligent design/management) -wikileaks! (jurisdictional problems, prosecution) (how does filtering affect wikileaks?) -transparency on internet services (google: how does it work?)
1. Liability for Security Breaches and Flaws
- Software insecurity:
- Security guru Bruce Schneier has argued that imposing tort liability is desirable as a method of forcing vendors to internalize the costs of insecure software. See Liability Changes Everything and Computer Security and Liability.
- How convincing is his suggestion? What sorts of costs would this impose on software companies? Would such a rule drive small players out of the security market? Would individual contributors to open source projects potentially face liability?
- Law professor Michael D. Scott makes a similar argument, and notes that Sarbanes-Oxley requires publicly traded companies to certify that their systems are secure, while imposing no obligations on the vendors who actually provide the software. See Tort Liability for Vendors of Insecure Software: Has the Time Finally Come?
- Database insecurity:
- Summaries of a few recent cases that address database breaches: Developments in Data Breach Liability.
- Law professor Vincent R. Johnson argues that tort liability is an appropriate mechanism for creating incentives and managing risks associated with cybersecurity: Cybersecurity, Identity Theft, and the Limits of Tort Liability. Some issues he raises:
- Duty to protect information: California's Security Breach Information Act imposes such a duty. The obligations that Gramm-Leach-Bliley imposes on financial institutions arguably support liability on a theory of negligence per se.
- Can market forces adequately address insufficient database security?
- "Duty to inform of security breaches": This could be analogous to a failure to warn theory of negligence liability.
- The economic harms rule seems to impose a significant bar to recovery. What about requiring the database-owner to pay for security monitoring? A risk-creation theory might support this approach.
--98.210.154.54 23:13, 21 September 2010 (UTC)Davis
2. WikiLeaks
Note: It seems to me, after reading the cybersecurity entry, that the "WikiLeaks" problem could be moved under that category and used as an example of how the Internet can magnify the consequences of a data breach in the physical world. Does anyone else agree? (I'd say no. There are other issues I've added that make it distinct. See below. -- Austin)
- Real-world data breach: Soldier suspected of leaking classified military reports to whistleblower website WikiLeaks [4]
- This kind of leak is made much more likely by the growth of digital information. Spending two minutes to copy thousands of records to a CD labeled "Lady Gaga" v. making copies of the Pentagon Papers and smuggling them out under your shirt. Do we live in a more "leaky" age? If Wikileaks proves capable of protecting the anonymity of its contributors, should we come to expect that any information that is sufficiently important to public discourse will eventually find its way into the wild?
- If these sorts of leaks become increasingly common, will there be a significant effect on the public's expectations as to government(s) transparency? Will Wikileaks-like organizations become an accepted "unofficial" path to the release of information? Or will an increased public expectation of transparency force less government secrecy (fewer documents classified, documents declassified sooner)?
- What effect will Wikileaks have on transparency in private entities with significant public impact? Will the public or traditional press treat a leak from Monsanto the same way as a leak from the Department of Defense?
- WikiLeaks posts documents without redacting the names of Afghans who provided intelligence to the United States. [5] The Taliban said it was using the WikiLeaks site to comb for names of Afghan informants, while the traditional press/gatekeepers said they had redacted the documents they posted to avoid "jeopardizing the lives of informants." [6] It seems as if the Internet has allowed people to bypass the traditional gatekeepers -- be they the government or the press -- and that this magnified the effects of the real-world data breach.
- Wikileaks _did_ provide the Pentagon the opportunity to redact sensitive information from documents. [7] The Pentagon refused. To what degree should the Pentagon and other groups targeted by Wikileaks and similar organizations be willing to work with those organizations? From the perspective of the targeted group, cooperation with such a group legitimizes it and increases its public stature. From the perspective of the media organization, a working relationship with a targeted group may cause supporters to question the organization's independence. Is there a balance to be struck here? Does the formation of these relationships make "new media" organizations like Wikileaks look too similar to the "old media" organizations that depend greatly on relationships with government officials for content?
- Jurisdictional problems and prosecution: The U.S. government is able to prosecute the real-world leaker, but likely won't be able to prosecute WikiLeaks -- the organization that used the Internet to magnify the effects of the leak, etc. -- because of jurisdictional problems and because of a lack of on-point law. [8]
--JPaul (Jenny)
- What is the role of the primary source in our media landscape? Wikileaks hasn't quite figured this out yet. In the Collateral Murder video release, for example, Wikileaks was criticized for releasing an edited video alongside the raw footage from a helicopter attack on journalists and civilians in Iraq. In the War Diaries release, by contrast, Wikileaks released raw documents and received sharp criticism as described above. What is the role of the "gatekeeper" in today's media? Can primary source documents contribute effectively to discourse, or do they provide so much information that a reader cannot process it all? Would the War Diaries release have had greater focus if the vast number of reports had been reviewed by crowdsourcing?
- How will Wikileaks shift public opinion with regard to the use of anonymous sources? As some media critics have pointed out, traditional media sources, even while criticizing anonymous or pseudonymous bloggers, rely heavily on the statements of anonymous officials in their reporting. [9] Will there be a backlash against anonymity or will it become more widely accepted? Is there a distinction to be made between anonymous sources and anonymous reporters? How do trust and reputation fit into all this? (this issue fits pretty well into Group 1's anonymity issue above)
- Can the Wikileaks model of funding (releasing a big story and then relying upon donations) be more widely applied to other forms of independent journalism?
Group Four:
- The legal or regulatory meaning of Net Neutrality Principle
- According to Lessig, there is no fixed architecture for the Internet -- it is all "code" written by humans. The Net Neutrality principle and Prof. Zittrain's concept of a "generative Internet" are attempts to lay down some fundamental values about which Internet architecture, among all the possibilities, is the most desirable model for us to pursue.
- Suppose the concept of Net Neutrality were clear (which it actually is not): what practical regulatory implications should this principle bring? There are seemingly contradictory practices carried out in the name of Net Neutrality. For example, ISPs prohibit P2P file-sharing, alleging that it takes up too much bandwidth and prevents other users from using the network equally. These practical issues prompt us to rethink some fundamental questions about what neutrality means for the Internet.
- Layers in regulations that protect Net Neutrality: Are there certain applications that should be given high priority to run on the Internet, or should all applications be given the same weight? (For instance, should our policy equate the bandwidth used for email and web browsing with that used for high-resolution video or gaming?)
- “To what extent is our judgment about tech related to the ‘coolness’ of the tech itself?”
- User Satisfaction versus Company Profitability. Closed platforms like the iPhone present significant benefits at a cost. It may be helpful to frame benefits and costs in terms of user satisfaction and company profitability, rather than any particular feature of the device using the platform. We can, of course, ask about particular features that create or diminish user satisfaction or company profitability, but we won't talk about the features as if they confer some independent benefit. This is just a way of conceptualizing when society will tolerate certain technological constraints.
- The iPhone. Steve Jobs has a vision for the iPhone, and that includes regulating a large portion of what goes on and can go on the phone. Let's take a look at how the user satisfaction/company profitability model applies.
- Profitability. The iPhone's closed platform provides at least two valuable and related benefits. First, it allows Apple to keep its operating environment "safe." Without unauthorized third-party applications--i.e., with all apps being Apple-approved--there is less risk of the introduction and dissemination of malware. This reduces costs for Apple, which doesn't have to respond to consumers whose phones have been destroyed by viruses. A second related benefit is branding. Because Apple can keep its system closed, it can design the environment in which it operates and market that environment as a product. This design means Apple can extract profits from third-party apps by conditioning access upon, among other things, payment. It also makes the company more profitable because Apple can advertise and promote itself as a "safe" place that operates seamlessly. Nevertheless, this raises issues about how far Apple will regulate its platform. Will it simply condition access by third-party applications, or will it go further and monitor its users? If Jobs is concerned that users will upload pornographic pictures on his phone, will the future iPhone be programmed to identify automatically and remove or block such photos? Does Jobs' vision relate to profitability, or simply personal preference? (This last question will be relevant to considering user satisfaction.)
- User Satisfaction. For most users, the iPhone's closed platform doesn't seem to cause any immediate problems. There are plenty of cool apps that individuals can download and use. The iPhone certainly scores high on aesthetics, even if some of its features are low on performance. Users tend to love aesthetics, and have overlooked the fact that, for instance, the iPhone can run only one program at a time. The closed platform's safety also provides a benefit to users, who don't have to worry about protecting their phones from malware. So far, user satisfaction is high. The balance between user satisfaction and profitability seems to be in equipoise--for now. The question for the future is whether Apple will close off more territory, and whether its current sectioning will stifle the actions of users in the future. As to the former, Apple might meet substantial resistance from the public if it begins regulating their private behavior more explicitly. As to the latter, the future is hard to predict. If users become more adept with their phones or demand new features that the closed system stifles, Apple may have to modify just 'how' closed its system should be. Of course, it may respond with even "cooler" designs, thereby satisfying users sufficiently to distract attention from the new (or old) restrictions that remain in place. If consumers detect that Jobs' personal preferences are dictating the ways they can use their phones, their dissatisfaction may win the day.
- "Pandora Hour Limits." Pandora's 40-hour limit (not sure if that is the exact number, but the important part is that there is a limit) for free users has had an impact on avid users, taking away from their satisfaction.
- "Profitability." To take advantage of a freemium revenue model, Pandora must give users more incentive to go premium than merely the removal of ads; the hour limit does this. Users seem tolerant of the limit because of the loophole of simply creating new accounts; however, that workaround is itself a process consumers dislike.
- "User Satisfaction." The limit often goes unnoticed and causes no immediate problems. There are plenty of ways users have gotten around the restriction, especially by just creating a new account, which requires only an email address.
Lack of Humans in Online Transactions
The process of purchasing something online has become almost too easy for users, and it is generally irreversible, especially on a website like eBay. Amazon introduced one-click purchasing where, with a single click, your credit card is charged and the item is shipped. There is no human contact on the receiving end of a transaction, leading to a significant amount of error and unintended expenditure. More human contact is needed, and the process needs to be slowed down to ensure privacy and accuracy.
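The friction being argued for here can be illustrated with a minimal sketch (all function and parameter names are hypothetical, not any real retailer's API): a one-click flow charges immediately, while inserting a confirmation step gives the buyer a chance to catch an unintended order before money moves.

```python
# Hypothetical sketch of two checkout flows; names are illustrative only,
# not any real retailer's API.

def one_click_purchase(cart, charge_card, ship):
    """Charge and ship immediately -- no human check, so errors go through."""
    charge_card(cart.total)
    ship(cart.items)
    return "charged"

def confirmed_purchase(cart, charge_card, ship, confirm):
    """Add a confirmation step so unintended orders can still be cancelled."""
    if not confirm(cart):  # e.g., "You are about to pay $X. Proceed?"
        return "cancelled"
    charge_card(cart.total)
    ship(cart.items)
    return "charged"
```

The design point is simply that the confirmation callback is the only difference between the two flows: a single extra step reintroduces the pause that a human clerk once provided.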
-online transaction speed: feature or bug? -lack of humans in online transactions: feature or bug? - Computers and people gone wild! (please don’t google this)
- Should everything be open-source?
- A closed platform means that things can be innovative only within a predetermined limit; that is, we can only work within the realm of the expected (e.g., apps for the iPhone). But some of the greatest innovations have changed the paradigm for innovation completely, the obvious example being the Internet. The cost of closed platforms is that we do not even know what we're missing -- are security and cool apps worth it?
- Alternatively, if everything were open-source, would we face some variant of the tragedy of the commons? (Tragedy of the commons -- In ye olde England, there was a public commons where everyone could let their cattle graze. But because it was a public space, no one took responsibility for it, so all the grass ran out and the place was a mess. Then the commons was privatized, and lo and behold, private ownership meant that the owner now had an investment and interest in the land, so the land became nice and green again. Even if the owner now charged people to let their cattle graze there. [10]) Or is there something different about the ethos of the Internet, or about cyberspace as a space, that makes the tragedy of the commons a non-issue?