Cyberlaw discussion/Day 4

From Cyberlaw: Internet Points of Control Course Wiki

Digital Millennium Copyright Act §512

One of the most striking things about §512 is how much more liability service providers face for copyright infringement than for defamation. During our discussion on Friday, we talked about how difficult it is to pin any liability on service providers for the content of information being transmitted. §512 treats service providers much more as actors with agency and the ability to control their content, and much less as "mere distributors," the treatment they receive under the defamation regime.

For example, in order to keep their immunity from liability for copyright infringement under §512(c)(1)(A), service providers must act "expeditiously to remove, or disable access to..." copyrighted material if they are informed of infringing activity by their users. Throughout §512, service providers are given similar duties to affirmatively remove or disable infringing material on their servers. Service providers also must have designated agents responsible for receiving complaints of copyright infringement. §512(c)(2).

This liability regime is considerably stricter than the defamation regime. Compare the Seigenthaler story, where Wikipedia had no duty to delete the defamatory material. Or compare the MySpace case, where MySpace bore no liability for its users' activity on the site, even though that activity ended in a user being raped in real life. Under these regimes, it seems that Congress has decided that copyright infringement is among the worst evils that can be done online. While all online crimes can result in offline prosecution of the bad actors, service providers are uniquely liable for failing to police copyright infringement.

Does this liability regime make any sense? Why are service providers off the hook for defamation, online harassment, viruses, and spam, but expected to police copyright infringement? To me, it seems that the most obvious answer is the power and influence of the entertainment industry. Defamation, online harassment, viruses, and spam harm individual users. Copyright infringement, on the other hand, harms the entertainment industry much more directly than it harms individual users. (Arguably it benefits individual users by giving them cheap or free access to material that they would otherwise have to pay for.) Congress here seems to have responded to a powerful lobby rather than aiming to create a coherent regime for internet governance. Khoffman 14:51, 6 January 2008 (EST)

  • Yeah, the different levels of liability are interesting. They may be justified by the differences between copyright infringement and defamation, though. Both claims can be hard to prove on the margins (the test for fair use is very fuzzy), but infringement is more widespread, and few of these cases have even plausible fair use claims. This could justify the need for OSP intervention for copyright infringement (though I'm not really convinced). And Congress might not be responsible for this one. Arguably, section 230 was meant to leave OSPs with distributor liability, which would be an actual/imputed knowledge standard similar to that in section 512. The courts have read 230 as creating more of an absolute immunity, but this doesn't seem to be what Congress intended. JoshuaFeasel 16:42, 6 January 2008 (EST)


Viacom Complaint Against Google/YouTube

  • To me, the issues presented by this complaint and by the UGC Principles below seem to focus on where the burden should be placed for finding and removing infringing content. As Kelly discusses above, the DMCA seems, on its face, to place some burden on the UGC service (in this case YouTube) to remove content if they have knowledge that it is infringing a copyright. However, YouTube's position would seem to be that they have knowledge of the content only if Viacom or another copyright holder informs them. Viacom, in contrast, argues that there should be some duty on the part of YouTube to identify, or at least not interfere with the copyright holder's ability to identify, infringing content. I can see both sides of the argument - I don't know that Viacom should be able to shift all the costs of enforcing its copyright to YouTube, but I also don't think that YouTube can say that they are not aware of the rampant copyright violations on their website and bear no responsibility for them. What is the appropriate line in these cases? Lk37 15:01, 6 January 2008 (EST)

I thought it was interesting how Viacom worked so hard to build a case for direct infringement, especially arguing that YouTube is publicly presenting copyrighted information. This seems to be a strategic decision on Viacom's part: go for direct infringement first, which will make it a lot easier to hold YouTube liable for infringing material on the site. If Viacom can only get indirect infringement, then they may have the burden, under the DMCA, to police YouTube themselves and find the infringing material. On a rapidly changing, user-generated site like YouTube, patrolling content for infringing material is going to be costly. Obviously, Viacom would prefer that YouTube bear this cost.

I was not persuaded by the direct infringement argument. Is the video feature of YouTube that different from a flat text presentation on a site like Wikipedia? Is it merely YouTube's internal technology that makes it more liable than a site like Wikipedia for direct infringement? Is it worse for YouTube to run an episode of SNL when a user clicks play than for Wikipedia to host the full text of a copyrighted play? Admittedly, Wikipedia would probably be nailed to the wall if it hosted copyrighted material as well (I noticed that Wikipedia is filled with warnings about not posting copyrighted material), but I think a copyright holder would have a much harder case arguing that Wikipedia is "presenting" a copyrighted play. To me, there is little difference between text and video in the internet context; both forms of media are found and consumed at the direction of the user. In YouTube's case, as in Wikipedia's, the material is put up by users, not generated by the admins, so Viacom's argument that YouTube directly infringes when user-generated material is played on the site did not persuade me. Khoffman 18:34, 6 January 2008 (EST)

  • I think this entire lawsuit is merely about Viacom using the threat of possible legal liability to enhance its bargaining power with YouTube/Google. Viacom clearly realizes that there is great demand for its content online and is willing to provide its products online (for example, clips of The Daily Show). Thus, I believe it does not really care about users uploading clips to YouTube - as long as it gets a share of the revenue that YouTube/Google makes. Even if its claim is not valid under the law, the mere cost to defend the lawsuit is bound to exert pressure on Google to raise the percentage of revenue it is willing to share with Viacom. The legal system is basically being used by Viacom as a tool to extract rents from Google - which as a matter of policy seems fine, as long as the underlying claim is valid under the law and YouTube/Google is liable. If it is not, as Khoffman suggests, then what should be done? Perhaps an adjustment of the pleading standards for these types of cases, as has been attempted to ease the burden of frivolous lawsuits under securities law's Rule 10b-5? Bhamburger 18:51, 6 January 2008 (EST)

UGC Principles Document

  • Obviously, since these principles are from the point of view of the copyright holders, one would expect them to focus more on getting the infringing material down than on protecting legitimate content. The goal seems to be to use "identification technology" to change the service provider from a passive entity that simply acts as a conduit for data packets into one that can differentiate between acceptable and unacceptable packets. Out of curiosity, does anyone know exactly how this "identification technology" works, and how accurate it is? The principles supposedly aim to protect fair use (see point 6) but only seem to provide for blanket removal of content. Depending on how the screening works, it seems like legitimate content could easily be excluded. Lk37 14:15, 6 January 2008 (EST)
    • At least for music, automated digital fingerprinting technologies do exist and are being used by websites that host user-generated content. See Snocap and Gracenote. I do not know if there are automated services for video, but MySpace Video and YouTube do have human screeners looking for porn (and perhaps copyright). The way it works is that a person sits in front of a computer, and thumbnail images of a few frames from the video are displayed on a screen. The worker checks a box to approve the video if it is OK, or rejects it if it is not. (BTW: This is perhaps the world's worst job...or best if you really like thumbnails of porn). Bhamburger 18:36, 6 January 2008 (EST)
      • I think the UGC Principles document implies that it would be automated technology rather than human screeners - if only because I would assume that porn is much more easily and quickly identifiable to human screeners than copyright issues at the fringes. Lk37 18:53, 6 January 2008 (EST)
        • Are lowly paid humans that act like machines not automated? Bhamburger 19:11, 6 January 2008 (EST)
    • I'm not sure how the screening technology works, but if websites can screen for copyrighted material, wouldn't it be possible for Wikipedia to screen for possibly defamatory material, or MySpace to screen for inappropriate conversations to preempt real-life violence between users? One possible benefit of the copyright wars online is that new technology could be created that would make it easier for all sites to screen for inappropriate content. The bigger question is, would we want other websites to do this kind of screening? Is it better to have users police one another? Is it better to have human moderators who participate in forums and delete inappropriate content? Are we more worried about the free speech implications when "dolphins" of permissible speech get caught in an automated screen dealing with non-copyrighted material? Khoffman 19:01, 6 January 2008 (EST)
  • Another interesting element of the UGC Principles is the non-signatories. Glaringly absent on the service side are Google and Yahoo; on the content side, the music industry, Sony, and Time Warner are absent. This, when read with the recent spate of anti-Google alliances (think the Open Content Alliance and the Microsoft-Viacom ad alliances), is reminiscent of the era when Microsoft, then the dangerous juggernaut, was left out of standards initiatives. The UGC Principles can be regarded as private lawmaking to get around the slow and perhaps ineffective efforts of Congress, as seen with the DMCA. For example, DRMWatch hails the UGC Principles for requiring a service provider to deploy effective content identification technologies in order to qualify for a safe harbor, a glaring omission from the DMCA. However, the absence of some of the major content industry players weakens the immunity from liability that the UGC Principles offer service providers who comply. (See Principle 14). Whether or not the UGC Principles take hold remains to be seen, although Google was quick to unveil its own identification technology for YouTube the very same week. Jumpingdeeps 16:37, 6 January 2008 (EST)
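The "identification technology" debated above can be sketched, very roughly, as a fingerprint lookup against a catalog of reference works. The sketch below is purely hypothetical: real systems such as Snocap or Gracenote use robust perceptual features that survive re-encoding, while here exact hashes of fixed-size sample chunks stand in for them, and the `screen`, `match_fraction`, and `chunk_hashes` names are invented for illustration.

```python
# Hypothetical sketch of fingerprint-based content identification.
# Exact chunk hashes stand in for the robust perceptual fingerprints
# that real services compute; the logic of "match against a catalog,
# flag above a threshold" is the same in outline.
import hashlib

CHUNK = 4  # samples per fingerprint chunk (tiny, for the demo)

def chunk_hashes(samples):
    """Split a sample stream into fixed-size chunks and hash each one."""
    return [hashlib.sha1(bytes(samples[i:i + CHUNK])).hexdigest()
            for i in range(0, len(samples) - CHUNK + 1, CHUNK)]

def match_fraction(upload, reference):
    """Fraction of the upload's chunks that also appear in the reference."""
    ref = set(chunk_hashes(reference))
    up = chunk_hashes(upload)
    return sum(h in ref for h in up) / len(up) if up else 0.0

def screen(upload, catalog, threshold=0.5):
    """Flag the upload if it substantially matches any catalog entry."""
    return [title for title, work in catalog.items()
            if match_fraction(upload, work) >= threshold]

catalog = {"episode": list(range(100))}       # the copyrighted work
clip = list(range(40, 80))                    # a verbatim 40-sample excerpt
remix = list(range(40, 60)) + [255] * 60      # mostly new material

print(screen(clip, catalog))    # → ['episode']
print(screen(remix, catalog))   # → []
```

Note how this bears on the fair use worry raised above: a threshold screener can only measure how much of a work was copied, not why, so a parody quoting heavily from the original would be flagged exactly like simple copying.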

EFF/Berkman Fair Use Principles for User Generated Video Content

Between the DMCA, the Viacom suit, and the UGC Principles, I think fair use of copyrighted material is significantly overlooked. I wonder how YouTube would handle a borderline parody of copyrighted material. My guess is that the fear of litigation would make a site like YouTube overly cautious and less willing to keep borderline fair use material up on the site. If so, this would be one corner where the generative nature of the internet is eroding. Class project? Khoffman 19:07, 6 January 2008 (EST)

The EFF proposal is designed to prevent "unnecessary, collateral damage to fair use" - but it seems that in the process it is arguably creating "unnecessary, collateral damage to the rights of copyright holders" by erring on the side of caution before taking down infringing content and by increasing the costs of screening through its preference for human screeners. Why should fair use rights trump copyright? Isn't the threat of YouTube-like services greater to holders of copyrights than it is to those who wish to exercise fair use? Bhamburger 19:10, 6 January 2008 (EST)

Letters from Chilling Effects

  • Re: Tetris - an interesting follow-up: if you go to the site listed in the complaint, Google Tetris has been taken down. Google includes a link to the letter via chillingeffects.org. Check it out here: http://code.google.com/p/mobile-tetris/ Khoffman 19:12, 6 January 2008 (EST)