GROUP TWO

Issues of online identity and reputation bankruptcy deserve to be called a difficult problem: Eric Schmidt's comment in a Wall Street Journal interview that soon “every young person…will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends’ social media sites” [1] stirred a heated debate [2] (even though, when interviewed for this project, he asserted it was only a "joke"). The controversy in this area provided enough motivation for our team to take a closer look at the problem.

We start by providing some evidence of the importance of online identity and reputation, with a particular focus on the future impact of internet reputation on coming generations. We then dive into the recent scholarship and other proposed solutions, organizing the field using Lessig's framework of Law, Code, Market, and Norms [3], before presenting our proposed solution: a law, supported by code and norms, which we call the Minor's Online Identity Protection Act. We are aware of the potential implications and concerns and present some of them in a later section.

Our solution will ultimately benefit both children and adults. We seek to provide better protection for children, when still minors, from any damaging content that either they or others post about them. Looking forward, we seek to protect adults from earlier "youthful indiscretions" or "hijinks" and other damaging records from when they were minors, analogous to juvenile record sealing and expungement.

Importance of Online Identity and Reputation

What can already be identified as a difficult problem now might become a cyber-apocalypse a few years down the road, as a generation that has grown up entirely in the digital age, with every moment digitally captured in perpetuity, reaches adulthood:

  • "If there are no private pictures of you online, anything made up will seem made up. Treasure your privacy. The future money making on the web will not be social networking but information search and destroy." ("As Bullies Go Digital, Parents Play Catch-Up", New York Times [4])
  • In “The End of Forgetting,” Jeffrey Rosen observes that the web takes away second chances so that "the worst thing you've done is often the first thing everyone knows about you" [5]
  • With quickly improving facial-recognition technology, there will be a stark challenge to "our expectation of anonymity in public" [6]
  • The viral dynamics of the internet may amplify reputation damage (e.g., through search engine rankings)
  • COPPA protection ends at age 13 and has proven ineffective: 37% of children aged 10-12 have Facebook accounts, and there are 4.4 million Facebook users under age 13 in the US, despite Facebook's COPPA-compliant policy (see video [7])
  • "75% of US recruiters and human-resource professionals report that their companies require them to do online research about candidates, and many use a range of sites when scrutinizing applicants", recent Microsoft study [8]

Recent Scholarship and Other Proposed Solutions

Law-Based

The primary existing law to protect children on the internet is the Children's Online Privacy Protection Act (COPPA). COPPA is directed at the operators of websites and dictates: "If you operate a commercial Web site or an online service directed to children under 13 that collects personal information from children or if you operate a general audience Web site and have actual knowledge that you are collecting personal information from children, you must comply with the Children's Online Privacy Protection Act." This law, however, is quite narrow and does little in practice to protect children on the internet. A child under the age of thirteen can simply lie about their date of birth when creating an account, and there are no repercussions for content hosts and intermediaries who make no effort to prevent even such blatant tactics, as long as they do not have actual knowledge. It should be noted, though, that the FTC has successfully pursued some financial penalties and settlements. [9]

Other related laws to protect children in the online realm have also been passed by Congress, with differing fates. One such law, the Children's Internet Protection Act (CIPA), requires that schools and public libraries employ certain content filters to shield children from harmful or obscene content in order to receive federal funding. The constitutionality of this law was upheld by the Supreme Court in United States v. American Library Association. [10] On the other hand, the Child Online Protection Act (COPA), which was designed to restrict children's access to any material on the internet deemed harmful to them, did not survive its legal challenges. In upholding a lower court's preliminary injunction, the Supreme Court found the law likely to be insufficiently narrowly tailored to pass First Amendment scrutiny. [11]

In state law, there are some existing tort remedies for privacy violations, not necessarily geared towards children. For example, thirty-six states already recognize a tort of "public disclosure of private fact." Essentially, this tort bars dissemination of "non-newsworthy" personal information that a reasonable person would find highly offensive. Some state laws are more specific: for example, criminal laws forbidding the publication of the names of rape victims (for a discussion in opposition to such laws, see Eugene Volokh [12]). Anupam Chander has argued in a forthcoming article, "Youthful Indiscretion in an Internet Age" (forthcoming in "The Offensive Internet," 2011 [13]), that the tort for public disclosure of private fact should be strengthened.

Chander also recognizes two important legal hurdles to overcome in strengthening the public disclosure of private fact tort. The first is that the indiscretion at issue may be legitimately newsworthy, which raises serious First Amendment concerns. The second is that intermediaries are often protected from liability under the Communications Decency Act (CDA). Despite these obstacles, Chander persuasively argues that such protection is needed, particularly in the context of nude images, because society's fascination with embarrassing content will not abate. Moreover, he observes that an individual's humiliation "does not turn on whether some activity is out of the ordinary or freakish"; rather, common behavior can still cause significant personal damage.

Other legal proposals to curb problems of online identity and reputation have also been put forward. One set of proposals is geared specifically toward employers. For instance, Paul Ohm has supported a law which would bar employers from firing current employees, or declining to hire potential new ones, based on their legal, off-duty conduct found on social networking profiles. [14] [15] Germany is also currently considering a law that would ban employers from mining information on job applicants from social networking sites such as Facebook, in order to protect people's privacy. The law would potentially impose significant fines on employers who violate it. German government officials have noted, however, that the law could be difficult to enforce because violations would be difficult to prove. [16]

Meanwhile, Dan Solove has considered a legal proposal which would give individuals a right to sue Facebook friends for certain breaches of confidence that violate one's privacy settings. [17][18] He finds it problematic that the law provides no protection when others wrongfully spread your secrets, and believes that the United States should adopt a regime that would better protect people from such transgressions. [19] In some other countries, such as England, the law does provide broader protection where friends and ex-lovers have breached a duty of confidentiality. [20]

A more drastic idea, mentioned by Peter Taylor, would be to create a constitutional right to privacy or “oblivion” to allow for more anonymity. [21] Less radical, but still significant in its own right, Cass Sunstein has proposed "a general right to demand retraction after a clear demonstration that a statement is both false and damaging." [22][23] This bears some resemblance to defamation and libel laws, but where those laws would normally only require that the speaker or publisher pay adequate reputation damages, Sunstein's approach is specifically geared towards the removal of the material. In fact, his proposal is largely based on the existing Digital Millennium Copyright Act (DMCA) notice-and-takedown system for unauthorized uses of copyrighted material. [24]

Legal solutions to these issues which would restrict in any way the free flow of information have been criticized by some, including Eugene Volokh. [25] Among his concerns are that there are highly troubling "possible unintended consequences of various justifications for information privacy speech restrictions." [26] In the context of children on the internet, Volokh briefly observed that children as internet consumers are not capable of making contracts and thus any assent on their part may be invalid. He did not, however, discuss the topic in any detail.

Code-Based

Several code-based solutions to the problem of reputation online have been proposed. Jonathan Zittrain has discussed the possibility of expanding online rating systems, such as those on eBay, to cover behavior beyond simple buying and selling of goods to potentially include more general and expansive rating of people. The reputation system that Zittrain describes would allow users to declare reputation bankruptcy, akin to financial bankruptcy, in order "to de-emphasize if not entirely delete older information that has been generated about them by and through various systems." [27]
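
To make the idea concrete, here is a minimal sketch of how a rating system might support reputation bankruptcy, assuming timestamped ratings and an exponential time decay. The class names, the half-life parameter, and the extra pre-bankruptcy discount are all our own illustration, not Zittrain's specification.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Rating:
    score: int          # e.g. -1, 0, or +1
    created: datetime

@dataclass
class ReputationProfile:
    ratings: List[Rating] = field(default_factory=list)
    bankruptcy_declared: Optional[datetime] = None

    def declare_bankruptcy(self) -> None:
        # Zittrain's idea: de-emphasize, not necessarily delete, older data.
        self.bankruptcy_declared = datetime.utcnow()

    def current_score(self, half_life_days: float = 365.0) -> float:
        """Aggregate score with exponential decay of older ratings."""
        now = datetime.utcnow()
        total = 0.0
        for r in self.ratings:
            weight = 0.5 ** ((now - r.created).days / half_life_days)
            if self.bankruptcy_declared and r.created < self.bankruptcy_declared:
                weight *= 0.1   # pre-bankruptcy history is heavily discounted
            total += weight * r.score
        return total
```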

Alternative code-based proposals would allow for the deletion of content stored online via expiration dates, in an attempt to graft the natural process of human forgetting onto the internet. A major proponent of this approach is Viktor Mayer-Schönberger, who wrote of such digital forgetting through expiration dates in his book “Delete: The Virtue of Forgetting in the Digital Age” [28]. Similarly, University of Washington researchers have developed a new technology called "Vanish" which encrypts electronic messages so that they essentially self-destruct after a given time period. [29]
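
The actual Vanish system splits an encryption key into shares scattered across a peer-to-peer DHT whose natural churn destroys them over time. The toy sketch below substitutes a simple TTL key store for the DHT purely to convey the core idea: once the key is gone, the ciphertext becomes permanently unreadable. The `ExpiringKeyStore` class is our own illustrative stand-in, not the Vanish API.

```python
import time
from cryptography.fernet import Fernet  # pip install cryptography

class ExpiringKeyStore:
    """Toy stand-in for Vanish's DHT: keys become unrecoverable after a TTL."""
    def __init__(self):
        self._keys = {}  # message_id -> (key, expiry timestamp)

    def put(self, message_id, key, ttl_seconds):
        self._keys[message_id] = (key, time.time() + ttl_seconds)

    def get(self, message_id):
        key, expires_at = self._keys[message_id]
        if time.time() > expires_at:
            del self._keys[message_id]  # simulate the key shares expiring
            raise KeyError("key expired; ciphertext is now permanently unreadable")
        return key

store = ExpiringKeyStore()

def send_self_destructing(message_id, plaintext, ttl_seconds):
    key = Fernet.generate_key()
    store.put(message_id, key, ttl_seconds)
    return Fernet(key).encrypt(plaintext)   # only ciphertext is ever shared

def read_message(message_id, ciphertext):
    return Fernet(store.get(message_id)).decrypt(ciphertext)

token = send_self_destructing("msg-1", b"delete me eventually", ttl_seconds=3600)
print(read_message("msg-1", token))  # works until the TTL elapses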

A less extreme code-based option would be a "soft paternalistic" approach proposed by Alessandro Acquisti, where individuals would be given a "privacy nudge" when sharing potentially sensitive information about themselves online. [30] The nudge would be a built-in feature for social networking sites, for example, and could either give helpful privacy information to users when posting such content or contain a privacy default for all such sensitive information that must be manually switched by the user. Such a "privacy nudge" has been analogized to Gmail's "Mail Goggles," which is an optional paternalistic feature designed to prevent the user from sending email messages when drunk that he or she will later regret. [31]
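
Such a nudge could be as simple as a pre-post check. The sketch below uses a keyword/pattern list and a console prompt purely for illustration; a production system would presumably use a trained classifier and the platform's own UI. The pattern list and function name are hypothetical.

```python
import re

# Illustrative patterns only; a real deployment would use a trained
# classifier rather than a keyword list.
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",            # looks like a US Social Security number
    r"\b(drunk|wasted|hungover)\b",      # possibly regrettable party talk
    r"\b(home address|phone number)\b",  # contact details
]

def privacy_nudge(post_text: str) -> bool:
    """Return True if posting should proceed; nudge the user otherwise."""
    if not any(re.search(p, post_text, re.IGNORECASE) for p in SENSITIVE_PATTERNS):
        return True
    print("Heads up: this post may contain sensitive personal information.")
    return input("Post anyway? [y/N] ").strip().lower() == "y"
```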

Market-Based

Some believe that reputation problems on the internet are best solved by allowing market forces to determine the outcome, uninhibited by other regulations. [32] It is true that, in the wake of concern over online reputation, a number of private companies have emerged to defend reputation. One such company is Reputation Defender, created in 2006 to help "businesses and consumers control their online lives." [33] For a monthly or yearly fee, Reputation Defender claims it will protect users' privacy, promote users online, or suppress negative search results about them. [34] Companies like Reputation Defender do offer the average individual some measure of protection. There are several problems with such market solutions, however. The first is cost. Some features on Reputation Defender carry price tags as high as $10,000, far outside what an ordinary individual can afford to spend sanitizing their search results. Even less expensive services appear to cost between $5-10/month, which may be more than many individuals can afford, especially for long-term protection. Another serious issue with this type of market solution is effectiveness. The exact tactics such companies use are unclear, but for the most part they can offer only short-term fixes, which will face increasing challenges as advances in technology, such as facial recognition, make it easier to find people online and harder to protect their identities. [35]

Norm-Based

It is also possible that reputational issues could be solved by the development of norms. David Ardia, for example, has argued for a multi-faceted approach, where the focus would be on ensuring the reliability of reputational information rather than on imposing liability. Moreover, he advocates for the assistance of the community - including the online community - in resolving reputation disputes through enforcing societal norms. [36]

Norms could also be as simple as asking others "Please don't tweet this" when discussing sensitive or private topics, and expecting that norms will compel others to respect such requests. [37] As the public grows to better understand and become comfortable with new technological advances, such norms could help to curb the problem of personal information being leaked online.

Another norm-based solution would be simply to educate the public about reputational and privacy concerns in the context of the internet. This has the potential to be especially effective with children and youth, who will grow up with advances in technology but may not understand the full repercussions of their actions. One example of a norm-based solution is the European Union's youth-focused education campaign, "Think before you post!", designed to "empower[] young people to manage their online identity in a responsible way." [38] After years of pressure, a number of content intermediaries have voluntarily adopted self-regulatory initiatives with the goal of improving minors' safety on social networking sites in Europe. [39] Recent follow-up reports, however, have demonstrated that the program's success has been less than resounding. Specifically, despite their promises to do so, a majority of the involved companies failed to implement some of the important changes, such as "to ensure the default setting for online profiles and contact lists is 'private' for users under 18." [40] Moreover, despite many adding an avenue for youth to report harassment, apparently few of the companies ever respond to such complaints. [41] Thus, while educational and norm-based proposals are a step in the right direction, they may lack sufficient force to bring about the desired change.

Proposed Solution

Overview

We are proposing the creation of a personal legal right to control content depicting or identifying oneself as a minor, supported by best-practice norms and code for Content Intermediaries to watermark content depicting minors:

  • Minor's Online Identity Protection Act (MOIPA):

MOIPA will take the form of a notice-and-takedown system. The requester will have to prove that his or her request falls within the scope of MOIPA. We envisage wide adoption by the leading Content Intermediaries of watermarking content depicting minors, as described further below. A digital watermark will serve as prima facie evidence. Absent a watermark or other metadata, the requester will have to provide other evidence proving the age and identity of the minor depicted or identified in the content in question (via government-issued ID / notarized notice). Simple standardized notice forms would then be available to fill out and send online.
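
The intended flow might look roughly like the following sketch, assuming the platform can attempt to read a MOIPA watermark from stored content. The `extract_minor_watermark` stub, the request fields, and the decision strings are all our own illustration, not statutory language.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TakedownRequest:
    content_id: str
    id_document_verified: bool  # e.g. government-issued ID checked by a notary

def extract_minor_watermark(content_id: str) -> Optional[dict]:
    """Hypothetical stub: read the MOIPA watermark/metadata embedded at upload.

    Returns a dict of identity/age metadata, or None if no watermark exists."""
    return None  # placeholder; a real platform would decode the stored content

def evaluate_request(req: TakedownRequest) -> str:
    watermark = extract_minor_watermark(req.content_id)
    if watermark is not None:
        # The digital watermark serves as prima facie evidence of scope.
        return "accept: watermark present"
    if req.id_document_verified:
        # Absent a watermark, other proof of age and identity suffices.
        return "accept: age and identity proven by external evidence"
    return "reject: request lacks both a watermark and supporting evidence"
```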

It is important to highlight that MOIPA requires affirmative action on the part of the individual. Nothing would be removed or deleted automatically (cf. juvenile record sealing and expungement).

  • Supporting Norms and Code:

MOIPA goes hand in hand with a set of best-practice norms and code that will significantly simplify the notice-and-takedown process. Recent advances in technology make it possible to digitally tag and augment all forms of content, even text, to provide additional information. [42] The additional information would comprise the identities of the Individual Depicted, the Content Creator, and the Content Sharer, along with whatever date/time information is relevant and available. We believe content could either be tagged automatically for accounts identified as belonging to minors, or require an affirmative check box when content depicts or identifies minors (similar to “I have accepted the terms of service” or “I have the right to post this content” boxes), or both.

[Image: Facebook example of such a check box]
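
As a rough illustration of what such tagging could look like, here is a minimal sketch assuming a JSON metadata record attached by the platform at upload time. The `moipa_tag` helper and its field names are hypothetical; no published standard is implied.

```python
import json
from datetime import datetime, timezone

def moipa_tag(depicted_ids, creator_id, sharer_id, depicts_minor):
    """Build the metadata record a platform might attach at upload time."""
    return json.dumps({
        "individuals_depicted": depicted_ids,   # Individual(s) Depicted
        "content_creator": creator_id,          # Content Creator
        "content_sharer": sharer_id,            # Content Sharer
        "depicts_minor": depicts_minor,         # from account age or the check box
        "uploaded_at": datetime.now(timezone.utc).isoformat(),
    })

# Example: one user uploads a photo of a minor taken by a third user.
record = moipa_tag(["user:1001"], "user:2002", "user:3003", depicts_minor=True)
```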
  • Possible objective limitation similar to that seen in many torts, including the public disclosure of private fact tort, which provides a cause of action where the disclosure would be “highly offensive to a reasonable person."
  • Consider simple or streamlined ways to implement such a standard to avoid prolonged legal battles. One potential option would be crowd-sourcing the question to a number of others (possibly a "jury" of 12) for their opinion on whether the content is objectively embarrassing or unfavorable, or, alternatively, a determination simply of whether it is "reasonable" for the individual to request that the identifying or depicting content be taken down. A simple majority finding that the content is embarrassing or unfavorable could suffice (see the sketch below). Such participants would not be able to download the content and would be subject to an agreement not to share or disclose the content themselves.
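
A minimal sketch of the majority-vote determination referenced above, assuming each juror submits a yes/no vote; the function name and jury-size parameter are illustrative.

```python
def jury_finds_removable(votes, jury_size=12):
    """Simple-majority finding that content is objectively embarrassing
    or unfavorable (or that a takedown request is 'reasonable')."""
    if len(votes) != jury_size:
        raise ValueError("need a vote from every empaneled juror")
    return sum(votes) > jury_size / 2   # e.g. 7 of 12 suffices

# Example: 8 of 12 jurors find the content embarrassing.
assert jury_finds_removable([True] * 8 + [False] * 4)
```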

Involved Parties

We have identified the following parties to be of importance when discussing our proposed solution.

  • Individual Identified or Depicted
  • Content Creator (person who creates content, e.g., takes a picture)
  • Content Sharer (person who uploads content to a content storage/distribution platform)
  • Content Intermediaries (e.g., a personal blog vs. Facebook)
  • Search Engines

Note: some of these parties could be the same person in a given scenario.

Scope

In our opinion, the scope of MOIPA has to cover all pieces of content that depict or identify minors. In order to satisfy First Amendment concerns, minors who are of "legitimate public interest," e.g., celebrities, performers/actors, and children of famous public figures, will be excluded from the scope of our proposal.

  • Identity-related content

Within the scope of our proposal, identity-related content can take various formats, ranging from pictures to videos to text.

While a general discussion of the Star Wars Kid [43] is not necessarily bad, pieces of content linking the Star Wars Kid's real name to the footage are potentially harmful to the person and, moreover, add very little value. While this content should not be removed or deleted automatically, the person should have the right to have the identifying information removed.

  • Minors

In our opinion, minors present a particularly compelling case. The age of 18 is a cutoff already recognized by the government and drawn in numerous instances. As a reference point, one could mention the provisions to wipe juvenile criminal records (e.g., expungement) [44].

With respect to the age barrier, four scenarios are possible, as illustrated in the chart below.

(1) Content about oneself and at the time of posting still a minor (younger than 18)

(2) Content about another minor and at the time of posting the poster is a minor (younger than 18)

(3) Content about a minor and poster is an adult at time of posting

(4) Content about oneself and at the time of posting the poster is an adult


[Figure: MOIPA scope — the four age-barrier scenarios]


While scenarios (1) to (3) are clearly covered by our proposal, (4) should be discussed further. While it might be theoretically less compelling (see the section below), it is practically easier to implement and more efficient, in our opinion.
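
The four scenarios can be expressed as a small decision routine. The sketch below assumes birthdates and the posting date are known; the function and constant names are our own.

```python
from datetime import date

ADULT_AGE = 18

def age_on(birthdate: date, when: date) -> int:
    """Whole years of age on a given date."""
    years = when.year - birthdate.year
    if (when.month, when.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

def moipa_scenario(subject_birthdate: date, poster_birthdate: date,
                   posted_on: date, self_post: bool) -> int:
    """Classify a posting into scenarios (1)-(4) from the chart above."""
    subject_minor = age_on(subject_birthdate, posted_on) < ADULT_AGE
    poster_minor = age_on(poster_birthdate, posted_on) < ADULT_AGE
    if self_post:                        # subject and poster are the same person
        return 1 if poster_minor else 4  # (4) merits further discussion
    if subject_minor:
        return 2 if poster_minor else 3
    raise ValueError("subject was an adult when posted: outside MOIPA's scope")
```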

There are certain other scenarios in which MOIPA would not be available, for example due to contracts entered into regarding the copyright of the content in question. The chart below provides a framework for what is and is not covered under MOIPA. It is designed from the perspective of a person looking for ways to remove content about him- or herself.


[Figure: MOIPA scope — coverage framework]


Is a Legal Measure Necessary?

It is possible that the burgeoning problems of online identity and reputation will dissipate on their own in a natural fashion. Thus, it might not ultimately be necessary to respond with a law or other significant change, so we do not recommend the immediate adoption of our proposal. Instead, it may be worthwhile to observe the trends in hopes that less drastic yet effective solutions appear.

For example, some believe that society will adapt to the ramifications of the internet and either adjust its (online) behavior accordingly, or such transgressions will become so commonplace that any negative impact will be lost. Much of the information available now, however, suggests that this will not be the case. The internet has been a household staple for many for well over a decade, but no change in behavior has occurred. If anything, young and older users alike have grown more comfortable with the internet and therefore more willing than ever to share their private information, personal photos, and more online. Our sensibilities to transgressions, however, do not appear to have similarly evolved. Employers continue to conduct internet and social network site research on prospective (and current) employees, and then make hiring and firing decisions based on that information. The public has not grown to accept these circumstances as normal over time: embarrassing situations continue to be embarrassing and can carry severe consequences, seemingly no matter how universal they are or how many times similar conduct occurs (see, e.g., Chander’s discussion of whether “society will become inured to the problem”). Ideally, society would evolve and adapt to the changes naturally, but so far in this area there has been no significant change. Moreover, children and adolescents are probably the least likely to pick up on these social cues and to understand the full ramifications of their online behavior. Our primarily legal-based solution is thus a possible avenue for government to protect the online identity and reputation of minors absent any other meaningful changes.

Before adopting such a law, which carries some significant ramifications of its own, some time should be taken to see if other solutions will appear. But at the current time such solutions seem unlikely. For example, at present there is no real pressure on content-hosting platforms or search engines to adopt any code-based or norm-based solutions, and there is no reason to suspect that they will spend time, money, and other resources to voluntarily adopt any protective measures that are not required. To the extent that companies such as Reputation Defender exist to solve these problems, they are problematic in that they are (i) limited to those who can afford them, and (ii) unable to offer full protection because they lack any real power or authority over the sources. Thus, absent any unexpected and meaningful changes, the MOIPA proposal is an effort to adequately protect the most vulnerable segment of the population.

Implications and Potential Concerns

First Amendment Concerns

  • Could include an exception for minors who are of "legitimate public interest," e.g., celebrities and performers/actors (open question: children of famous public figures?)
  • On the other hand, protection of minors is a category for which the Supreme Court has recognized a legitimate (and even compelling) state interest (see, e.g., NY v. Ferber; US v. American Library Association; FCC v. Pacifica Foundation)
  • Also, only the identifying or depicting information or content would be removed (so, for example, one need only redact the minor's name from a discussion, or blur, crop, or cut the minor from a photo or video). Other content not related specifically to the minor's identity would remain untouched by MOIPA

Technical Limitations

  • Easy to remove metadata?
  • Abuse/overuse of the digital watermark/metadata
  • International standards?

Notice and Takedown

  • Notice-and-takedown system should not require payment -- the right to control should be available to all
  • Both code-based solutions are ex post remedies. What about preventing the harm, or making it more difficult, in the first place? Perhaps we could require settings to be invisible-by-default for minor users?

Entanglement

  • Who "owns" what information about a person, and thus what can be managed/deleted, e.g., reposts of images, comments, wall posts?

Advantage for Content Intermediaries

  • Public goodwill (parents might feel more comfortable with children using the site)
  • Gain additional insights from enhanced user/content meta data
  • Potential new market for content management systems: e.g., search, track, and delete material across the Internet

Disadvantage for Content Intermediaries

  • Penalties for violating MOIPA?
  • Could databases and archives include references to deleted data, e.g., write "removed"?
  • What if aggregators include copies of photos that lack metadata?
  • Differentiate liability of main intermediaries and third-party app makers?
  • Who is responsible for what? Does an intermediary like Facebook have the same degree of liability as its third-party partners who create apps and make use of Facebook data?

Effects on Behavior of Minors

To ensure that MOIPA has a net social benefit, it will be important to educate minors about best practices for posting and removing content now and in years to come. One may assume that if minors are well-educated about MOIPA, they will be less prone to posting "risky" material -- especially about other minors -- out of concern that they may eventually be subject to legal action and need to remove it years or even decades later. Yet it's also possible that new protections afforded to minors' content may have the opposite, unintended consequence: lowering inhibitions and promoting even more risky behavior online. Given their ability to retract photos and text later, teenagers may assume they have a temporary "free pass" to post whatever they wish. Perhaps the deciding factor will be the eventual ease or difficulty of managing material about minors. Youths may be least inhibited if a commercial content management service (yet to be invented) allows posters to easily search dates and friends' names and remove hundreds of offending images with a few clicks.

Implications for Adults

A challenging situation is raised by adults who post material about themselves from when they were minors. Should this material be protected? Theoretically, it's more difficult to justify protecting this material on moral grounds (as adults are more responsible for their actions), but on a practical level it may be necessary to extend protections to any material about minors -- regardless of when it is posted.

Given the potential liabilities of posting material about minors, it's possible that this law would have a chilling effect: discouraging adults and institutions from posting any materials about minors. For example, school systems may be reluctant to post class photos -- out of concern that alumni may issue notice-and-takedown requests to be removed from group pictures, forcing the school to either remove or digitally alter the photos. For a given group picture, one can imagine a steady stream of such requests over years or decades, so that faces are progressively removed until only a few children remain. Alternatively, it would be far easier (and less expensive) to prohibit all photography, which could arguably have a detrimental effect on the students.

Another implication for adults is that parents and guardians might seek to invoke the law on behalf of their children. For example, in the school photo scenario above, one can imagine a mother who does not like the depiction of her 5-year-old son and is empowered to take action against the school. Or, more broadly, an over-protective parent might ask that photos from birthday parties be removed from a friend's Facebook page. The involvement of parents in controlling material about their children might alter social norms.

Potential for Abuse

  • Is the remedy a civil lawsuit if the poster refuses to take the content down? E.g., the girlfriend-breaks-up-with-boyfriend scenario carries potential for abuse; imposing costs on takedown requests might be helpful here

Economics

  • New market for material about minors?

Relevant Sources for Further Reading

  • Rosen: the web takes away "second chances": "the worst thing you've done is often the first thing everyone knows about you"
  • Chander: supports a strengthening of the public disclosure tort; recognizes two principal legal hurdles: (1) the youth’s indiscretion may itself be of legitimate interest to the public (newsworthy) and (2) intermediaries can escape demands to withdraw info posted by others because of special statutory immunity
  • Zittrain: suggests that intermediaries who demand an individual's identity on the internet "ought to consider making available a form of reputation bankruptcy."
  • Ardia: argues that the focus should be on ensuring the reliability of reputational information rather than on imposing liability, and advocates community governance
  • Volokh: scope: restrictions on communication; finds the "possible unintended consequences of various justifications for information privacy speech restrictions [...] sufficiently troubling"