Issues of online identity and reputation bankruptcy deserve to be called difficult problems. Eric Schmidt's comment in a Wall Street Journal interview that soon "every young person…will be entitled automatically to change his or her name on reaching adulthood in order to disown youthful hijinks stored on their friends' social media sites" stirred a heated debate (even though, when interviewed for this project, he asserted it was only a "joke"). The controversy in this area provided ample motivation for our team to take a closer look at the problem.
We start by providing evidence of the importance of online identity and reputation, with a particular focus on the impact of online reputation on future generations. We then survey the recent scholarship and other proposed solutions, organizing the field using Lessig's framework of Law, Code, Market, and Norms, before presenting our proposed solution: a law, supported by code and norms, which we call the Minor's Online Identity Protection Act. We are aware of the potential implications and concerns and present some of them in a later section.
Our solution would ultimately benefit both children and adults. We seek to provide better protection for children, while they are still minors, from damaging content that either they or others post about them. Looking forward, we seek to protect adults from earlier "youthful indiscretions," "hijinks," and other damaging records from when they were minors, analogous to juvenile record sealing and expungement.
Importance of Online Identity and Reputation
As the first generation to grow up entirely in the digital age reaches adulthood, what has already been identified as a difficult problem could soon lead to disastrous results. Vast amounts of personal content have already been digitally captured in perpetuity, and more content is added every day. As one New York Times article aptly observed: "The future money making on the web will not be social networking but information search and destroy." Others have noted that particular aspects of the Internet effectively take away second chances, so that "the worst thing you've done is often the first thing everyone knows about you." For instance, the viral dynamics of the Internet and of search engines -- which rank information based in part on popularity -- can amplify reputation damage. Moreover, as facial-recognition technology continues to advance, it will pose a stark challenge to "our expectation of anonymity in public."
Minors are particularly vulnerable to these risks because they are the least able to fully comprehend the ramifications of online behavior, and they are unlikely to exercise the maturity and restraint that we expect of adults (even if such restraint is not always exercised in practice). Congress has already recognized the importance of protecting children in this regard by passing several pieces of legislation, the most relevant of which is the Children's Online Privacy Protection Act (COPPA). One failing of COPPA, however, is that its protection ends at age thirteen. While young children may be the least cognitively advanced, it is older children and adolescents who are more likely to have sensitive and damaging content posted about them on the Internet (e.g. photos of the youth drinking or smoking with friends, or sexually charged material). More importantly, COPPA has proven ineffective even at shielding the age group it is designed to protect: studies show that as many as 37% of children aged 10-12 have Facebook accounts, and that there are approximately 4.4 million Facebook users under age 13 in the United States, even though Facebook's stated policy complies with COPPA.
The lack of effective protection for minors, coupled with the staggering number of minors who post private content on the Internet, demands a solution. This youthful web-based content also has serious implications for these individuals' lives going forward: a recent Microsoft study reports that 75% of US recruiters and human-resource professionals say their companies require them to research candidates online, and many use a range of sites when scrutinizing applicants. While similar reputational problems exist for those who posted content as adults, the problem of online identity and reputation is particularly acute in the case of minors.
The Recent Scholarship and Other Proposed Solutions
The primary existing law protecting children on the Internet is the Children's Online Privacy Protection Act (COPPA). COPPA is directed at the operators of websites and dictates: "If you operate a commercial Web site or an online service directed to children under 13 that collects personal information from children or if you operate a general audience Web site and have actual knowledge that you are collecting personal information from children, you must comply with the Children's Online Privacy Protection Act." This law, however, is quite narrow and does little in practice to protect children on the Internet. Children under the age of thirteen can simply lie about their dates of birth when creating accounts, and as long as content hosts and intermediaries lack actual knowledge, there are no repercussions for failing to prevent even such blatant tactics. It should be noted, though, that the FTC has successfully pursued some financial penalties and settlements.
Congress has passed additional laws to protect children in the online realm, with differing results. One such law, the Children's Internet Protection Act (CIPA), requires that schools and public libraries employ certain content filters to shield children from harmful or obscene content in order to receive federal funding. The constitutionality of this law was upheld by the Supreme Court in United States v. American Library Association. On the other hand, the Child Online Protection Act (COPA), which was designed to restrict children's access to any material on the Internet deemed harmful to them, did not survive its legal challenges. In upholding a lower court's preliminary injunction, the Supreme Court found the law likely insufficiently narrowly tailored to pass First Amendment scrutiny.
In state law, there are some existing tort remedies for privacy violations, though not necessarily geared towards children. For example, thirty-six states already recognize a tort for "public disclosure of private fact." Essentially, this tort bars dissemination of "non-newsworthy" personal information that a reasonable person would find highly offensive. Some state laws are more specific: for example, criminal laws forbidding the publication of the names of rape victims (for an argument against such laws, see Eugene Volokh). Anupam Chander has argued, in a forthcoming article entitled "Youthful Indiscretion in an Internet Age" (forthcoming in "The Offensive Internet," 2011), that the tort for public disclosure of private fact should be strengthened.
Chander also recognizes two important legal hurdles to overcome in strengthening the public disclosure of private fact tort. The first is that the indiscretion at issue may be legitimately newsworthy, which raises serious First Amendment concerns. The second is that intermediaries are often protected from liability under the Communications Decency Act (CDA). Despite these obstacles, Chander persuasively argues that such protection is needed, particularly in the context of nude images, because society's fascination with embarrassing content will not abate. Moreover, he observes that an individual's humiliation "does not turn on whether some activity is out of the ordinary or freakish"; rather, common behavior can still cause significant personal damage.
Other legal proposals have also been put forward to curb the problems of online identity and reputation. One set of proposals is geared toward employers. For instance, Paul Ohm has supported a law that would bar employers from firing current employees, or declining to hire applicants, based on legal, off-duty conduct found on social networking profiles. Germany is also considering a privacy law that would ban employers from mining information about job applicants from social networking sites such as Facebook; the law would potentially impose significant fines on employers who violate it. German government officials have noted, however, that the law could be difficult to enforce because violations would be difficult to prove.
Meanwhile, Dan Solove has considered a legal proposal that would give individuals a right to sue Facebook friends for certain breaches of confidence that violate one's privacy settings. He finds it problematic that the law provides no protection when others wrongfully spread your secrets, and believes that the United States should adopt a regime that better protects people from such transgressions. In some other countries, such as England, the law does provide broader protection where friends and ex-lovers have breached a duty of confidentiality.
A more drastic idea, mentioned by Peter Taylor, would be to create a constitutional right to privacy or "oblivion" to allow for more anonymity. Less radically, Cass Sunstein has proposed "a general right to demand retraction after a clear demonstration that a statement is both false and damaging." This bears some resemblance to defamation and libel laws, but where those laws would normally only require that the speaker or publisher pay adequate reputation damages, Sunstein's approach is specifically geared towards removal of the material. In fact, his proposal is largely modeled on the existing Digital Millennium Copyright Act (DMCA) notice-and-takedown system for unauthorized uses of copyrighted material.
Some have criticized legal solutions that would restrict the free flow of information in any way. Eugene Volokh, for example, has expressed concern about highly troubling "possible unintended consequences of various justifications for information privacy speech restrictions." Volokh also observed that children, as Internet consumers, are not capable of making contracts, and thus any assent on their part may be invalid; he did not, however, discuss the topic in any detail.
Legal scholars have proposed several code-based solutions to the problem of online reputation. Jonathan Zittrain has discussed the possibility of expanding online rating systems, such as those on eBay, to cover behavior -- beyond the simple buying and selling of goods -- to potentially include more general and expansive ratings of people. The reputation system that Zittrain describes would allow users to declare reputation bankruptcy, akin to financial bankruptcy, in order "to de-emphasize if not entirely delete older information that has been generated about them by and through various systems." 
Alternative code-based proposals would allow for the deletion of content stored online via expiration dates, in an attempt to graft the natural process of human forgetting onto the Internet. A major proponent of this approach is Viktor Mayer-Schönberger, who wrote about such digital forgetting through expiration dates in his book "Delete: The Virtue of Forgetting in the Digital Age." Similarly, University of Washington researchers have developed a technology called "Vanish," which encrypts electronic messages so that they effectively self-destruct after a designated time period.
A less extreme code-based option is the "soft paternalistic" approach proposed by Alessandro Acquisti, in which individuals would be given a "privacy nudge" when sharing potentially sensitive information about themselves online. This nudge would be a built-in feature of, for example, social networking sites, and could either present helpful privacy information to users when they post such content, or impose a privacy default on all such sensitive information that users must manually override. Such a "privacy nudge" has been analogized to Gmail's "Mail Goggles," an optional paternalistic feature designed to prevent drunken users from sending email messages that they might later regret.
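As a purely illustrative sketch of how such a nudge might be triggered (the keyword list, function name, and matching logic are our own assumptions, and are far simpler than anything Acquisti proposes):

```python
# Hypothetical sensitive-term list; a real system would use far richer signals.
SENSITIVE_TERMS = {"drunk", "party", "hungover", "fired", "naked"}

def privacy_nudge(post_text: str):
    """Return a nudge message if the draft post looks sensitive, else None."""
    words = {w.strip(".,!?").lower() for w in post_text.split()}
    hits = words & SENSITIVE_TERMS
    if hits:
        return ("This post mentions " + ", ".join(sorted(hits)) +
                ". Share with everyone anyway?")
    return None
```

In practice the nudge would appear in the posting interface itself, before the content leaves the user's control.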
Hal Abelson recently provided a useful framework to think about technology that could support information accountability . In particular, the capability to allow users to "manipulate information via policy-aware interfaces that can enforce policies and/or signal non-compliant uses" seems relevant in the context of our proposal.
Some believe that reputation problems on the Internet are best solved by allowing market forces to determine the outcome, uninhibited by other regulation. In the wake of concern over online reputation, a number of private companies have emerged to defend it. One such company is Reputation Defender, created in 2006 to help "businesses and consumers control their online lives." For a monthly or yearly fee, Reputation Defender claims it will protect a user's privacy, promote the user online, or suppress negative search results about the user. Companies like Reputation Defender do offer the average individual some measure of protection, but such market solutions face several problems. The first is cost: some features of Reputation Defender carry price tags as high as $10,000, far beyond what an ordinary individual can afford to spend sanitizing his or her search results, and even the less expensive services appear to cost $5-10 per month, which may be more than many individuals can afford, especially for long-term protection. Another serious issue is effectiveness. It is unclear exactly which tactics such companies use, but for the most part they can offer only short-term fixes -- and these will face increasing challenges as advances such as facial-recognition technology make it easier to find people online and harder to protect their identities.
It is also possible that reputational issues could be solved by the development of norms. David Ardia, for example, has argued for a multi-faceted approach, where the focus would be on ensuring the reliability of reputational information rather than on imposing liability. Moreover, he advocates for the assistance of the community - including the online community - in resolving reputation disputes through enforcing societal norms. 
Norms could also be as simple as asking others "Please don't tweet this" when discussing sensitive or private topics, and expecting that norms will compel others to respect such requests.  As the public grows to better understand and become comfortable with technological advances, such norms could help to curb the problem of personal information being leaked online.
Another norm-based solution would be simply to educate the public about reputational and privacy concerns in the context of the Internet. This has the potential to be especially effective with children and youth, who will grow up with advances in technology but may not understand the full repercussions of their actions. One example of a norm-based solution is the European Union's youth-focused education campaign, "Think before you post!", designed to "empower young people to manage their online identity in a responsible way." After years of pressure, a number of content intermediaries have voluntarily adopted self-regulatory initiatives with the goal of improving minors' safety on social networking sites in Europe. Recent follow-up reports, however, have demonstrated that the success of the program has been less than resounding. Specifically, despite their promises to do so, a majority of the involved companies failed to implement some of the important changes, such as "to ensure the default setting for online profiles and contact lists is 'private' for users under 18." Moreover, although many companies added an avenue for youth to report harassment, apparently few of them ever respond to such complaints. Thus, while educational and norm-based proposals are a step in the right direction, they may lack sufficient force to bring about the desired changes.
The "Respect My Privacy" (RMP) proposal by Ted Kang and Lalana Kagal, a code-based solution discussed further below, also has norm-based features. The authors note that "an accountable system cannot be adequately implemented on social networks without assistance from the social network itself." In other words, norms and best practices are required in addition to purely code-based solutions to provide a successful, comprehensive solution.
We propose the creation of a personal legal right to control content depicting or identifying oneself as a minor, limited by an objective standard and supported by best-practice code and norms for Content Intermediaries to annotate content depicting minors. The specific proposal and its scope are outlined below.
Minor's Online Identity Protection Act (MOIPA):
MOIPA will take the form of a Notice-and-takedown system. The requester will have to prove that his or her request falls within the scope of MOIPA. We envisage wide adoption by the leading Content Intermediaries of watermarking for content depicting minors, as described further below; a digital watermark will serve as prima facie evidence. Absent a watermark or other metadata, the requester will have to provide other evidence of the age and identity of the minor depicted or identified in the content in question (e.g. via a government-issued ID or notarized statement). Standardized, simple notice forms would be available to fill out and send online.
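As a purely illustrative sketch, the triage of a MOIPA notice might look like the following. All names and the specific evidence fields are our own hypothetical choices, not part of any statute:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TakedownRequest:
    content_id: str
    requester_name: str
    has_digital_watermark: bool          # watermark embedded by the intermediary
    id_document: Optional[str] = None    # e.g. scanned government-issued ID
    notarized_statement: Optional[str] = None

def evaluate_request(req: TakedownRequest) -> str:
    """Triage a MOIPA notice under the rules sketched above.

    A digital watermark serves as prima facie evidence; absent one,
    the requester must supply other proof of age and identity.
    """
    if req.has_digital_watermark:
        return "accepted: watermark is prima facie evidence"
    if req.id_document or req.notarized_statement:
        return "accepted: age and identity documented"
    return "rejected: no evidence of age and identity"
```

An accepted notice would then proceed to the objective-limitation check described below; a rejected one would prompt the requester for further documentation.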
It is important to highlight that MOIPA requires affirmative action on the part of the individual. Nothing would be removed or deleted automatically (cf. Juvenile record sealing and expungement).
Supporting Code and Norms:
MOIPA goes hand in hand with a set of best-practice code and norms that will help to simplify the Notice-and-takedown process. Recent advances in technology, particularly semantic web technologies, make it possible to digitally tag and augment all forms of content, even text, with additional information -- such as the identity of the Individual Depicted, the Content Creator, and the Content Sharer, as well as any relevant date/time information. One of the solutions presented in the section above, RMP by Ted Kang and Lalana Kagal, used semantic technologies, in particular a custom-made ontology, to annotate content. As in their proposal, we believe a mix of code and norms is necessary to provide a successful complement to MOIPA.
We believe content could either be tagged automatically when posted from accounts identified as belonging to minors, or require an affirmative check box (or both) if the content depicts or identifies minors (similar to the "I have accepted the terms of service" or "I have the right to post this content" boxes).
Ideally, every piece of content submitted to a Content Intermediary would have such annotations; this is particularly important in the context of the role of Search Engines, as explained below.
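A minimal sketch of how an intermediary might attach such annotations at upload time follows; the field names and the age computation are our own illustrative choices, not a prescribed schema:

```python
from datetime import date, datetime, timezone

def annotate_upload(content_id: str, sharer_id: str,
                    sharer_birthdate: date,
                    depicts_minor_checkbox: bool) -> dict:
    """Attach MOIPA-relevant metadata to a newly uploaded item.

    Content is flagged either automatically (the sharer's account
    belongs to a minor) or via the affirmative check box.
    """
    today = date.today()
    # Standard age calculation: subtract one year if the birthday
    # has not yet occurred this year.
    age = today.year - sharer_birthdate.year - (
        (today.month, today.day) <
        (sharer_birthdate.month, sharer_birthdate.day))
    return {
        "content_id": content_id,
        "sharer_id": sharer_id,
        "uploaded_at": datetime.now(timezone.utc).isoformat(),
        "depicts_minor": age < 18 or depicts_minor_checkbox,
    }
```

The resulting metadata would travel with the content (e.g. as RDFa annotations or embedded in a watermark), so a later takedown request can be verified without fresh evidence.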
MOIPA would contain an objective limitation to make sure that only reasonable and legitimate requests are honored and to limit any potential abuses of the law. The limitation would also help ensure that MOIPA would pass First Amendment scrutiny if challenged. The limitation would be similar to that seen in many torts, including the public disclosure of private fact tort, which provides a cause of action where the disclosure would be “highly offensive to a reasonable person."
It is important, however, that the law remain relatively simple to implement and not produce prolonged legal battles that would waste time and resources. Instead, simple or streamlined ways of enforcing the objective limitation would be preferable. One option that would also co-opt technology would be to crowd-source the determination to a number of others (possibly a "jury" of 12) for their opinion on whether the content is objectively embarrassing or unfavorable. More broadly, these outsiders could simply determine whether it is objectively "reasonable" for the individual to request that the identifying or depicting content be taken down. A simple majority finding that the content is embarrassing or unfavorable could suffice. Such participants would not be able to download the content and would be subject to an agreement not to share or disclose it themselves.
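The simple-majority rule described above can be sketched in a few lines; the function name and the encoding of votes as booleans are hypothetical:

```python
def jury_determination(votes: list) -> bool:
    """Crowd-sourced objective-limitation check.

    `votes` holds one boolean per online "juror" (e.g. 12 reviewers):
    True if the reviewer finds the content objectively embarrassing
    or unfavorable. A simple majority suffices; a tie fails.
    """
    return sum(votes) > len(votes) / 2
```

With a 12-person jury, 7 "embarrassing" votes would clear the objective limitation, while a 6-6 split would not.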
We have identified the following parties to be of importance when discussing our proposed solution.
- Individual Identified or Depicted
The Individual Identified or Depicted is at the center of our proposal and our attention. MOIPA is a personal right designed to be available to an individual with respect to conflicting content about him or her as a minor.
- Content Creator
The Content Creator is the person who creates the content (e.g. takes a picture) and typically holds the copyright over it.
- Content Sharer
The Content Sharer is the person who uploads content to Content Intermediaries. In most cases, but not always, the Content Sharer and the Content Creator will be the same person.
- Content Intermediaries
Content Intermediaries span personal blogging platforms such as WordPress and Blogger as well as social networking sites such as Facebook and MySpace. Facebook already uses semantic technologies (without going into too much detail, Facebook has been using RDFa, a semantic web markup language, to provide additional metadata about its content) and could easily expand the provided metadata to cover privacy and minor-related information as suggested above. The same is true for popular blogging services such as WordPress and Blogger.
- Search Engines
Search Engines, such as Google or Microsoft Bing, could play an important part in the successful implementation of our proposal. Google already uses RDFa to augment search results. Similarly, Google could implement a MOIPA best-practice directive that would, for example, exclude certain content related to minors that is annotated in a particular way. Search engines, because of their vast power over what content people actually find and see on the Internet, are an ideal target for a law like MOIPA. A focus solely on search engines, although not our current proposal, is a possible alternative to consider in the future, especially since it might be easier to implement than the suggested Notice-and-takedown system.
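How a search engine might honor such a directive at result-assembly time can be sketched as follows; the `moipa_protected` flag is our own hypothetical annotation, standing in for whatever RDFa-derived signal the index would actually store:

```python
def filter_results(results: list) -> list:
    """Drop search results whose annotations mark MOIPA-protected content.

    `results` is a list of dicts; each may carry a `moipa_protected`
    flag derived from RDFa-style annotations on the indexed page.
    Unannotated results are kept unchanged.
    """
    return [r for r in results if not r.get("moipa_protected", False)]
```

Such filtering would not remove the underlying content, but it would sharply reduce its discoverability, which is the main channel of reputational harm.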
Note: some of these parties could be the same person in a given scenario
In our opinion, the scope of MOIPA has to cover all pieces of content that depict or identify minors. In order to satisfy First Amendment concerns, minors who are of "legitimate public interest" (i.e. celebrities, performers/actors, and children of famous public figures) will be excluded from the scope of our proposal.
- Identity-related content
Within the scope of our proposal, identity-related content can take various formats, ranging from pictures, to videos, to text.
While a general discussion of Star Wars Kid  is not necessarily harmful to the boy, content that links Star Wars Kid's real name to the footage is potentially harmful to him and moreover adds very little public value. While this content should not be removed or deleted automatically, he should have the right to have the identifying information removed.
In our opinion, minors present a particularly compelling case for MOIPA. The age of 18 is a cutoff the government already recognizes and draws in numerous instances. As a reference point, one could cite the provisions for sealing or expunging juvenile criminal records.
With respect to the age barrier, four scenarios are possible, as illustrated in the chart below.
(1) Content about oneself and at the time of posting still a minor (younger than 18)
(2) Content about another minor and at the time of posting the poster is a minor (younger than 18)
(3) Content about a minor and poster is an adult at time of posting
(4) Content about oneself and at the time of posting the poster is an adult
While our proposal clearly covers scenarios (1), (2), and (3), we believe (4) deserves further discussion. While it might be theoretically less compelling (see the section below), it is, in our opinion, practically easier to implement and more efficient.
There are certain other scenarios in which MOIPA would not be available, for example due to contracts entered into regarding the copyright of the content in question. The chart below provides a framework for what is and is not covered under MOIPA. The chart is designed from the view of a person looking for ways to remove content about him- or herself.
Is a Legal Measure Necessary?
It is possible that the burgeoning problems of online identity and reputation will dissipate on their own. If so, it might not ultimately be necessary to respond with a law or other significant change, and so we do not recommend the immediate adoption of our proposal. Instead, it may be worthwhile to observe the trends in the hope that less drastic yet effective solutions appear.
For example, some believe that society will adapt to the ramifications of the Internet and either adjust its (online) behavior accordingly, or that such transgressions will become so commonplace that any negative impact will be lost. Much of the information available now, however, suggests that this will not be the case. Since the birth of the Web in the early 1990s, the Internet has rapidly become indispensable to a broad cross-section of society -- yet we observe few behavioral changes to protect personal content. If anything, users of all ages have grown more willing than ever to share their private information, personal photos, and more, while our sensibilities to social and cultural "transgressions" have remained largely unchanged. Employers continue to conduct Internet and social network research on prospective (and current) employees, and then make hiring and firing decisions based on that information. The public has not grown to accept these circumstances as normal: embarrassing situations continue to be embarrassing and can carry severe consequences, seemingly no matter how universal they are or how many times similar conduct occurs (see, e.g., Chander's discussion of whether "society will become inured to the problem"). Ideally, society would evolve and adapt to the changes naturally, but so far there has been no significant change in this area. Moreover, children and adolescents are probably the least likely to pick up on these social cues and to understand the full ramifications of their online behavior. Our primarily legal-based solution is thus a possible avenue for government to protect the online identity and reputation of minors absent any other meaningful changes.
Before adopting such a law, which carries some significant ramifications of its own, some time should be taken to see whether other solutions appear. At the current time, however, such solutions seem unlikely. For example, there is at present no real pressure on content-hosting platforms or search engines to adopt code-based or norm-based solutions, and no reason to suspect that they will spend time, money, and other resources to voluntarily adopt protective measures that are not required. To the extent that companies such as Reputation Defender exist to solve these problems, they are problematic in that they (i) are limited to those who can afford them, and (ii) cannot offer full protection because they lack any real power or authority over the sources. Thus, absent any unexpected and meaningful changes, the MOIPA proposal is an effort to adequately protect the most vulnerable segment of the population.
Implications and Potential Concerns
The First Amendment
- Could include exception for minors who are of "legitimate public interest" i.e. celebrities, performers/actors, and children of famous public figures
- On the other hand, protection of minors is a category for which the Supreme Court has recognized a legitimate (and even compelling) state interest (see, e.g., NY v. Ferber; US v. American Library Association; FCC v. Pacifica Foundation)
- Also, only the identifying or depicting information or content would be removed (so, for example, one need only redact the minor's name from a discussion, or blur, crop, or cut the minor from a photo or video). Other content not related specifically to the minor's identity would remain untouched by MOIPA
In theory, the use of semantic web technologies to annotate content would be sufficient to provide the necessary identification. However, people might scrape content or repost pictures, so a combination of semantic web annotation and digital watermarks might offer better protection. As with all technical solutions, loopholes will persist (for example, someone photographing a computer screen with a camera and posting that picture).
There is also a question whether the proposal would result in abuse or overuse of the digital watermarks and metadata. We do not see strong evidence for that, given the well-defined scope of the proposal.
Notice and Takedown
We believe that the Notice-and-takedown system should not require payment -- the right to control should be available to all. Otherwise, MOIPA could result in a social imbalance where the most well-off have the most "clean" records, or are more likely to engage in "youthful indiscretions" knowing they can make corrections later.
Note that the code-based solutions described above operate ex post. What about preventing harmful content, or making it harder to post, in the first place? Perhaps we could require profiles of minor users to be invisible by default.
Impact for Content Intermediaries
MOIPA would have a significant impact on content intermediaries, such as search engines and social networking sites, which are significant aggregators of content posted by or about youths. A central question is who is responsible for what: does an intermediary like Facebook have the same degree of liability as its third-party partners who create apps and make use of Facebook data? There will be advantages and disadvantages, including:
- Public goodwill (parents might feel more comfortable with children using the site)
- Gain additional insights from enhanced user/content meta data
- Potential new market for content management systems: i.e. search, track, and delete material across the Internet
- Penalties for violating MOIPA?
- Could databases and archives include references to deleted data? i.e. write "removed"?
- What if aggregators include copies of photos that lack metadata?
- Differentiate liability of main intermediaries and third-party app makers?
As we consider the effects on intermediaries, metadata tags for material about minors might have an unintended consequence: encouraging new marketplaces for such material. For example, it may be relatively easy to design an aggregator that compiles photographs of minors and then sells this material to third parties for whatever purpose. One application would be a company that compiles photos, then contacts the people featured and requests payment for scrubbing the photos from the Internet (issuing and managing Notice-and-takedown requests on their behalf). A more ethically dubious version would be a service that spots embarrassing photos (perhaps via a crowd-sourced GWAP-style game enlisting taggers of the most humiliating material) and requests payment for scrubbing them. These situations are purely hypothetical, yet such unintended consequences should be factored into the design and implementation of MOIPA.
Effects on Behavior of Minors
To ensure that MOIPA has a net social benefit, it will be important to educate minors about best practices for posting and removing content now and in years to come. One may assume that if minors are well-educated about MOIPA, they will be less prone to posting "risky" material -- especially about other minors -- out of concern that they may eventually be subject to legal action and need to remove it years or even decades later. Yet it is also possible that the new protections afforded to minors' content may have the opposite, unintended consequence: lowering inhibitions and promoting even more risky behavior online. Given their ability to retract photos and text later, teenagers may assume they have a temporary "free pass" to post whatever they wish. Perhaps the deciding factor will be the eventual ease or difficulty of managing material about minors. Youths may be least inhibited if a commercial content management service (yet to be invented) allows posters to easily search dates and friends' names and remove hundreds of offending images with a few clicks.
Implications for Adults
A challenging situation is raised by adults who post material about themselves from when they were minors. Should this material be protected? Theoretically, it's more difficult to justify protecting this material on moral grounds (as society holds adults more responsible than youths for their actions), but on a practical level it may be necessary to extend protections to any material about minors -- regardless of when it is posted.
Given the potential liabilities of posting material about minors, it is possible that this law would have a chilling effect, discouraging adults and institutions from posting any materials about minors. For example, school systems may be reluctant to post class photos out of concern that alumni may issue notice-and-takedown requests to remove themselves from group pictures, forcing the school either to remove the photos or to digitally alter them. For a single group picture, one can imagine a steady stream of such requests over years and decades, so that the school is forced to progressively remove faces until only a few children remain. Given hundreds of class photos, the task could become daunting. Alternatively, it would be far easier (and less expensive) for the school to prohibit posting photographs altogether, which could arguably have a detrimental effect on the students and on the historical record.
Another implication for adults is that parents and guardians might seek to invoke the law on behalf of their children. In the school-photo scenario above, for example, one can imagine a mother who dislikes the depiction of her 5-year-old son and is empowered to take action against the school. More broadly, an over-protective parent might ask that photos from a birthday party be removed from a friend's Facebook page. The involvement of parents in controlling material about their children could itself alter social norms.
Potential for Abuse
MOIPA may also be vulnerable to abuse as a legal lever for personal grievances. For example, if violating MOIPA were grounds for a civil lawsuit, one can envision a scenario in which a girlfriend breaks up with a boyfriend and then threatens a lawsuit unless he scrubs all records of her from his social networking sites. Likewise, if content intermediaries shared liability, lawsuits against Facebook could become commonplace. As a result, MOIPA may alter social norms (e.g., making users of social networks far more cautious about posting material about romantic partners). Such lawsuits could also place an undue burden on the justice system.
Relevant Sources for Further Reading
- Web takes away "second chances": "the worst thing you've done is often the first thing everyone knows about you"
  - Supports a strengthening of the public disclosure tort
  - Recognizes two principal legal hurdles: (1) the youth's indiscretion may itself be of legitimate interest to the public (newsworthy) and (2) intermediaries can escape demands to withdraw information posted by others because of special statutory immunity
- "Reputation Bankruptcy" blog post, 9/7/2010, on The Future of the Internet and How to Stop It, Jonathan Zittrain
  - Suggests that intermediaries who demand an individual's identity on the Internet "ought to consider making available a form of reputation bankruptcy."
- Argues that the focus should be on ensuring the reliability of reputational information rather than on imposing liability, and advocates community governance
- "Freedom of Speech and Information Privacy: The Troubling Implications of a Right to Stop People From Speaking About You", Eugene Volokh
  - Scope: restrictions on communication
  - Finds the "possible unintended consequences of various justifications for information privacy speech restrictions [...] sufficiently troubling"