Cyberlaw discussion/Day 3

From Cyberlaw: Internet Points of Control Course Wiki

Title 47, Communications Decency Act (Section 230)

  • Part (c)(2)(A) seems to allow "interactive computer service providers" to block any content, regardless of whether it is constitutionally protected. Part (d) states that the provider must also inform users about options to filter on their own. Why not the converse? Shouldn't the ICSP have an obligation to tell its users that certain content is being blocked? Cjohnson 09:12, 4 January 2008 (EST)
    • But the purpose of the legislation seems to be to avoid the incentive system of the CompuServe/Prodigy decisions, where good-faith filtering was punished with increased burdens. An obligation to inform users of blocked information, especially when the ICSP makes frequent minor edits or has no effective means of contacting its users (consider a site that simply never required an e-mail address at registration), could make the filtering that Congress is clearly trying to encourage so burdensome that it would be abandoned. Kratville 11:01, 4 January 2008 (EST)
      • Fair point. How about a compromise - require the ICSP to broadly inform users (perhaps when they first sign up) that the site operator filters content? I.e., that they will not necessarily be receiving a free flow of speech and communication. My concern is that site operators could restrict content in a way that creates a biased world view, without informing users that this is going on... Cjohnson 11:14, 4 January 2008 (EST)
        • Wouldn't users whose content is removed complain of the secret censorship, perhaps on another website? Rlumpau 11:41, 4 January 2008 (EST)
      • Alternatively, whenever a bulletin-board-type posting is removed, require the ICSP to put a [*content removed*] filler in its place. That is, don't hold the ICSP accountable for directly contacting end users; simply require that end users see *something* where the filtered-out content would have existed (a minimal sketch of this appears below). I don't think this would conflict with the legislative intent to make editing less burdensome: the requirement would not create broad liability for the ICSP to edit everything, since it is localized to the editing that the ICSP has already chosen to undertake. Cjohnson 11:40, 4 January 2008 (EST)
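
To make the proposal concrete, here is a minimal sketch of how a board might render removed posts as placeholders. Everything in it (the Post class, render_thread) is a hypothetical illustration, not any real forum's API.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    body: str
    removed: bool = False  # set True when the ICSP filters the post

def render_thread(posts):
    """Render each post, substituting a filler where content was filtered."""
    lines = []
    for post in posts:
        if post.removed:
            # End users see *something* where the content would have been,
            # without the ICSP having to contact anyone directly.
            lines.append("[*content removed*]")
        else:
            lines.append(f"{post.author}: {post.body}")
    return "\n".join(lines)

print(render_thread([Post("alice", "First!"),
                     Post("bob", "something defamatory", removed=True)]))
```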

Jonathan Zittrain, A History of Online Gatekeeping, pp. 257-263

While most of the concern in this reading is whether a site that chooses to edit any postings should be required to act as a gatekeeper for all content of a certain type, defamatory or otherwise, it also seems to raise the opposite question: to what extent do sites have an obligation, constitutional or otherwise, not to remove information? For example, the article says that if an OSP acts as a bouncer and removes information in response to complaints, then "those merely offended, but not truly defamed, would be tempted to put OSPs on false notice that the offending material was defamatory and potentially lead the bouncer to overenthusiastically perform his job." I am curious about the extent to which a provider of an interactive computer service has, or should have, an obligation not to remove or edit content. Perhaps this depends in part, as discussed in the section on Title 47 of the Communications Decency Act (Section 230) above, on what a particular site or service communicates about its policy on editing and deleting posts.--Mvogel 12:26, 4 January 2008 (EST)

Doe v. MySpace, 474 F.Supp.2d 843 (W.D. Tex. 2007), pp. 1-10

This case is interesting in light of a recent news story in which Megan Meier, a 13-year-old girl, committed suicide after a boy she had been communicating with on MySpace began saying cruel things to her. The profile of "Josh Evans" was actually set up and maintained by a neighbor (the mother of a former friend of Megan's) in an attempt to see what Megan was saying about the neighbor's daughter online. The prosecutor in the case has announced that there are no statutes he can use to prosecute the neighbor, but there did not seem to be much, or any, discussion of whether anyone would sue MySpace over the incident. This is likely because of the result in this case, where the plaintiffs argued that "if MySpace had not published communications...the sexual assault would never have occurred." Similarly, presumably if MySpace hadn't published the communications between Megan and "Josh," the relationship and resulting suicide would not have occurred. Still, if such a case were brought, a court would almost certainly follow Doe and hold that MySpace wasn't a publisher of the information and is entitled to immunity under the CDA. An article about the Meier MySpace situation: No Charges in MySpace Suicide

Situations like this one, on MySpace and other social networking sites, raise an interesting question that ties into Paul Ohm's anti-precautionary principle. If we think of people in the social networking context as either ordinary users who just want to talk to their friends or 'superusers' who want to use the sites to deceive and harm others in some way, the question is whether sites like MySpace should be operated on the assumption that people are ordinary users or operated to protect against the superusers. Following Ohm's anti-precautionary principle and designing only for ordinary users is probably right here, because the loss of privacy and the amount of verification needed to guard against every 'superuser' with bad intent would effectively shut down sites like MySpace. --Mvogel 12:09, 4 January 2008 (EST)

Fair Housing Council v. Roommate.com, 489 F.3d 921 (9th Cir. 2007)

  • I think the court does a good job of literally applying the CDA. It does so by establishing a two-part test to determine whether Roommate is an "information content provider" and therefore not immune from liability: "(1) whether Roommate categorizes, channels and limits distribution of information, thereby creating another layer of information; and (2) whether Roommate actively prompts, encourages, or solicits..." However, I find the implications troubling.
    • The first prong removes the CDA liability shield when the site filters and sends targeted emails to the end user. The user could, theoretically, search through every profile and obtain the same information manually. The Roommate opinion seems to create a disincentive to develop technology that automates an otherwise mind-numbingly inefficient task. That is, it impedes progress.
    • Under the second prong, the CDA liability shield exists when a site requests open-ended free-text. This seems to create a disincentive to develop organized profiles with distinct fields. Such organization facilitates reading/understanding by both humans and computers. Again, the opinion is an impediment to progress. Cjohnson 10:57, 4 January 2008 (EST)
  • I agree that the implication of this opinion is an impediment to progress. Similarly, it seems to leave an entire category of useful web services vulnerable to liability in ways that other sites may not be. There are many situations where people have preferences and sites are only useful if they can help the person find what they are looking for more efficiently and quickly. Regardless of whether people think there are certain preferences that are inappropriate, almost everyone would agree that when choosing roommates, dating prospects, employees, employers, etc., people have different preferences and aren't looking for just any person. But it seems that with this case, the more specific and useful a site makes its forms, questions, and search fields, the more it may be opening itself up to liability.--Mvogel 12:17, 4 January 2008 (EST)
      • I agree that this decision may lead to websites being structured in a different way, but I don't think it's necessarily "an impediment to progress," or even that losing the ability to search particular fields will make it more difficult to find that information. Take a minute to browse http://www.craigslist.org/about/FHA.html, which is Craigslist's response to concerns that its postings were being used to express discriminatory preferences: basically, they lay out what may and may not be said in an ad for housing or for a roommate, and ask the community to flag posts that violate the FHA. A certain number of flags results in a post's being reviewed and/or deleted (a rough sketch of such a flag-threshold system appears after this thread). My experience with searching for apartments and roommates on Craigslist leads me to believe that it's easy to search for anything you want to find, and the lack of specific fields for race, gender, and sexual orientation doesn't make it any more difficult to find that information if it's what you're looking for.
      • I also think that the restrictions on asking for specific information in a separate field and making it searchable by category will only apply where acting upon that information is itself prohibited. For example, an investment site could create separate fields that allow investors to search through stocks by a company's primary industry, CEO pay level, number of employees, or quarterly profits without running afoul of any law. I think this decision is pretty narrow; it only opens the door to liability for a website that aids the creation of content that is itself illegal. --Eroggenkamp 13:00, 4 January 2008 (EST)
        • My concern is whether future courts will interpret this decision that narrowly. I think that the decision can be read as much broader than that and can be used to impose liability in a much wider range of circumstances, chipping away at the original purpose of the CDA. -- Anna
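
The flagging mechanism described in the thread above can be made concrete. The sketch below is an illustrative guess at how a flag-threshold review queue might work; the threshold value and all names are assumptions, since Craigslist does not publish its actual moderation logic.

```python
FLAG_THRESHOLD = 5  # hypothetical; the real threshold is not public

class Listing:
    def __init__(self, listing_id, text):
        self.listing_id = listing_id
        self.text = text
        self.flags = 0
        self.under_review = False

    def flag(self):
        """Record one community flag; queue the ad once the threshold is hit."""
        self.flags += 1
        if self.flags >= FLAG_THRESHOLD and not self.under_review:
            self.under_review = True
            send_to_review_queue(self)

def send_to_review_queue(listing):
    # A human reviewer decides whether the ad violates the posted FHA
    # guidelines and should be deleted; the site itself never prompts
    # for or structures the content.
    print(f"Listing {listing.listing_id} queued for review")

ad = Listing(42, "Room for rent, quiet building")
for _ in range(FLAG_THRESHOLD):
    ad.flag()  # the fifth flag triggers the review queue
```

Note the design point the thread turns on: under this model the site supplies only neutral tools and reacts to community reports, rather than prompting or structuring the content itself.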

John Seigenthaler op-ed on Wikipedia

  • (This is perhaps not directly related to the op-ed.) IBM & MIT did a paper on cooperation and conflict in Wikipedia using history flow visualization. The paper noted that mass deletes survive for 7.7 days on average (but the median survival time is 2.8 minutes). Of course, as the op-ed suggests, relatively minor factual changes could be much harder to detect and correct. Is there a reliable way to ascertain whether the current version of a Wikipedia article is accurate? Or is it necessary to go back and reread the article every couple of days to watch for changes? (One way to automate that kind of watching is sketched after this list.) Rlumpau 11:59, 4 January 2008 (EST)
  • While, as this article makes clear, Wikipedia isn't perfect, it does in some respects have a built-in protection system against significant deception or vandalism: the articles that tend to attract the most controversy and misinformation are the very ones that people reading and editing Wikipedia have a strong stake in correcting. For example, when someone deletes the entire profile of President Bush and replaces it with name calling, it is not going to sit that way for four months unnoticed and uncorrected. For lesser-known topics or people, the danger that mistakes will persist longer is greater, but in many cases the danger of slander being posted in the first place is also lower, because not as many people are interested or invested enough to mess with the article. --Mvogel 12:39, 4 January 2008 (EST)
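
On the question of watching an article for changes: short of rereading it, one practical approach is to poll the article's revision history through the MediaWiki API. The api.php revisions query below is real; the surrounding logic is only an illustrative sketch (a logged-in watchlist or the page history's RSS feed does much the same job).

```python
import requests

API = "https://en.wikipedia.org/w/api.php"

def latest_revision(title):
    """Return (revid, timestamp, editor) for the newest revision of a page."""
    params = {
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "ids|timestamp|user",
        "rvlimit": 1,
        "format": "json",
    }
    data = requests.get(API, params=params).json()
    page = next(iter(data["query"]["pages"].values()))
    rev = page["revisions"][0]
    return rev["revid"], rev["timestamp"], rev["user"]

def check_for_changes(title, last_seen_revid):
    """Report whether the article has changed since we last looked."""
    revid, timestamp, user = latest_revision(title)
    if revid != last_seen_revid:
        print(f"'{title}' changed: revision {revid} by {user} at {timestamp}")
    return revid

# Run periodically (e.g. from cron), persisting the returned revid.
revid = check_for_changes("John Seigenthaler", last_seen_revid=None)
```

Of course, this only detects that something changed; judging whether the change is accurate still takes a human reader.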

WikiScanner (browse)

I think it is interesting how a site like this shows how cynical many people are about media, government, and corporate organizations: we tend to assume that when they edit Wikipedia it is for some type of spin or deception rather than to inform or to correct misinformation. --Mvogel 12:29, 4 January 2008 (EST)

    • I don't think that's the automatic assumption of the people who run Wikipedia, since these users have to be reported after they've posted rather than being automatically blocked from putting up information. If anything, they seem more trusting than the general populace about the quality of such a user-generated system. --Anna

Chicago Lawyers' Committee for Civil Rights Under Law, Inc. v. Craigslist, Inc. (N.D. Ill. 2006)

This was a similar fair housing case involving Craigslist, except that here the district court upheld immunity under Section 230. I think the way to distinguish the two is that Roommates.com had forms listing criteria like gender and family status, plus the ability to target a search or an email alert based on those criteria, whereas Craigslist simply had a space to fill in one's information. Of course, this is a clear instance of the impediment to progress discussed above, since Roommates.com was being penalized for making it easier for its users to target their searches and find and convey the information they wanted (the contrast is sketched below). [1].--Anna
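
To illustrate the distinction drawn here, below is a hypothetical sketch of the two designs. Neither function reflects either site's actual code, and the field names are invented.

```python
# Roommates.com-style: the site itself collects structured criteria and
# channels listings to matching users (the basis for a targeted email alert).
def matching_listings(listings, preferences):
    """Return ads whose structured fields satisfy every stated preference."""
    return [ad for ad in listings
            if all(ad.get(field) == value
                   for field, value in preferences.items())]

# Craigslist-style: one free-text box; any filtering is just a text search
# over whatever the user happened to write.
def search_listings(listings, query):
    return [ad for ad in listings if query.lower() in ad["text"].lower()]

structured = [{"gender": "female", "family_status": "no children",
               "text": "Sunny room near campus"}]
free_text = [{"text": "Sunny room near campus, quiet street"}]

print(matching_listings(structured, {"gender": "female"}))
print(search_listings(free_text, "sunny"))
```

On the reading of the two cases in this discussion, the first design (structured fields the site uses to channel listings) is what exposed Roommates.com to liability, while the second (a neutral free-text box) left Craigslist immune.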