Cyberlaw discussion/Day 4: Difference between revisions
Revision as of 15:15, 6 January 2008

== Digital Millennium Copyright Act §512 ==

== Viacom Complaint Against Google/YouTube ==

== UGC Principles Document ==
*Obviously, since these principles are written from the point of view of the copyright holders, one would expect them to focus more on getting infringing material taken down than on protecting legitimate content. The goal seems to be to use "identification technology" to change the service provider from a passive entity that simply acts as a conduit for data packets into one that can differentiate between acceptable and unacceptable packets. Out of curiosity, does anyone know exactly how this "identification technology" works, and how accurate it is? The principles supposedly aim to protect fair use (see point 6) but only seem to provide for blanket removal of content. Depending on how the screening works, it seems like legitimate content could easily be excluded. [[User:Lk37|Lk37]] 14:15, 6 January 2008 (EST)

== EFF/Berkman Fair Use Principles for User Generated Video Content ==

== Letters from Chilling Effects ==