Regulating Speech Online

From Technologies and Politics of Control

March 22

The Internet has the potential to revolutionize public discourse. It is a profoundly democratizing force. Instead of large media companies and corporate advertisers controlling the channels of speech, anyone with an Internet connection can "become a town crier with a voice that resonates farther than it could from any soapbox." Reno v. ACLU, 521 U.S. 844, 870 (1997). Internet speakers can reach vast audiences of readers, viewers, researchers, and buyers that stretch across real-space borders, or they can concentrate on niche audiences that share a common interest or geographical location. What's more, with the rise of web 2.0, speech on the Internet has truly become a conversation, with different voices and viewpoints mingling together to create a single "work."


With this great potential, however, come new questions. What happens when anyone can publish to a national (and global) audience with virtually no oversight? How can a society protect its children from porn and its inboxes from spam? Does defamation law apply to online publishers in the same way it applied to newspapers and other traditional print publications? Is online anonymity part of a noble tradition in political discourse stretching back to the founding fathers, or the electronic equivalent of graffiti on the bathroom wall? In this class, we will look at how law and social norms are struggling to adapt to this new electronic terrain.

[http://cyber.law.harvard.edu/is2011/sites/is2011/images/IS2011-3.22.11-Regulating_Speech_Online.ppt.pdf Slides: Regulating Speech Online]


==Assignments==
Assignment 3 due

==Readings==


* [http://en.wikipedia.org/wiki/Reno_v._American_Civil_Liberties_Union Wikipedia on Reno v. ACLU].
* [http://www.paed.uscourts.gov/documents/opinions/07D0346P.pdf ACLU v. Gonzales], 478 F. Supp. 2d 775 (E.D. Pa. 2007), read pp. 1-7, 61-74, 82-83; skim pp. 74-81.
* [http://www.socialtext.net/codev2/index.cgi?free_speech Lawrence Lessig, Code 2.0, Chapter 12: Free Speech]
* [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1689865 David Ardia, Reputation in a Networked World: Revisiting the Social Foundations of Defamation Law] (Part III)
==Optional Readings==


==Class Discussion==
Regarding the AutoAdmit case, does anyone have further details on what happened with Anthony Ciolli's countersuit against the two women and their legal advisor? For further reading on cyberbullying, defamation, privacy, etc., an excellent book of essays is The Offensive Internet, edited by Saul Levmore and Martha Nussbaum. [[User:Mary Van Gils|Mary Van Gils]] 21:21, 22 March 2011 (UTC)


This comment really applies to a previous class, but you might be interested in reading about the latest counter-tactics in the struggle for a "borderless Internet" against government control in this article: [http://www.economist.com/node/18386151 Unorthodox links to the internet: Signalling dissent] [[User:Smithbc|Smithbc]] 16:56, 19 March 2011 (UTC)
Though the introduction to this session states that "[i]nstead of large media companies and corporate advertisers controlling the channels of speech...", we've reached a point where intermediaries--Facebook, Google, etc.--are essentially controlling online speech. Our networks have landed in private, corporate, centralized locations. I hope that we'll be adding intermediary censorship to the discussion :) [[User:Jyork|Jyork]] 00:02, 22 March 2011 (UTC)
Another story in the vein of "AutoAdmit" out right now is [http://www.smh.com.au/technology/technology-news/cut-and-die-the-web-loves-to-hate-rebecca-black-20110321-1c2tz.html 'Cut and die': the web loves to hate Rebecca Black], about a 13-year-old cut-and-paste singer who has become popular on YouTube for all the wrong reasons; she is receiving death threats via user comments and web discussions. [[User:Smithbc|Smithbc]] 00:23, 22 March 2011 (UTC)
To my knowledge, in the US there are different laws for intermediary liability for speech online (Section 230) and for copyright (the DMCA), and perhaps more. In the EU, four articles in a single act govern the liability of ISPs, with hosting providers covered specifically by Article 14. For those interested, here is a link to the E-Commerce Directive (see arts. 12 to 15; hosting providers, art. 14) [http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:32000L0031:en:NOT]. Comparing art. 14(1)(b) and art. 14(2) of the directive with Section 230, together with the distinction between publisher and distributor liability, my conclusion on a first reading is that in the EU a hosting provider would be liable much as a distributor or publisher would be in the US. There are problems with the EU legal framework on ISP liability, and it is currently under review. If you read art. 14 you may see the problem: there is no definition of terms such as 'actual knowledge' or 'expeditiously', nor of how a 'notice and take down' procedure should look when compared to the DMCA. It will be interesting to see how the law changes, hopefully in the near future. As regards the Google case in Italy, although I was aware of the issue, I have not read the decision and cannot give an opinion based only on the article we read. However, based on what I know, I would say it was an exceptional case in the EU, and I would not draw conclusions about a broader threat in the EU from this case alone. [[User:VladimirTrojak|VladimirTrojak]] 16:56, 22 March 2011 (UTC)
FWIW, I meant not intermediary liability but intermediary censorship; e.g., Amazon's takedown of WikiLeaks or Facebook removing Egyptian protest groups. [[User:Jyork|Jyork]] 21:21, 22 March 2011 (UTC)
I wonder whether Section 230 of the Communications Decency Act will follow in the footsteps of the so-called journalist's privilege. With the emergence of millions of amateur reporters and publishers, the conventional definition of journalist's privilege is now rather obsolete. Likewise, the Act, enacted more than a decade ago, no longer seems as effective. There is a tremendous number of interactive computer service providers, and we have witnessed numerous side effects burgeoning with the spread of online communities. Is it still appropriate to give these providers immunity? --[[User:Yu Ri|Yu Ri]] 19:33, 22 March 2011 (UTC)
[http://cyber.law.harvard.edu/is2011/User:Yu_Ri Yu Ri,] I am disappointed that we ran out of time in class to have the full discussion you propose. Perhaps we can continue in this forum.
From my perspective, [http://www.law.cornell.edu/uscode/html/uscode47/usc_sec_47_00000230----000-.html Section 230 of the Communications Decency Act], which protects internet intermediaries, has had many undesirable unintended consequences. At the same time, however, it is impossible to know what today's internet would be like if Section 230 had not been made law and had not survived the [http://en.wikipedia.org/wiki/Reno_v._American_Civil_Liberties_Union Reno v. ACLU] challenge. The internet, and the offline world for that matter, would surely be significantly different.
One thing that probably would have happened is that many of the companies that today provide internet-based intermediary services would not be in business, because of the costs incurred due to the threat of lawsuits.
What if one of the companies that decided the litigation risk was too costly had been Google? What if Google's investors had decided they could make more money by investing in some other industry and chose not to fund Google? How different would our world be? I think of things in the real world that might not be the same.
For instance, what might have happened in Egypt if [http://www.cbsnews.com/8301-503543_162-20030485-503543.html Wael Ghonim] had not found a job at Google and had followed a different career path? Would the changes we are seeing across the globe have happened if the social networking tools used so effectively by dissidents had never come into existence without Section 230?
Very interesting questions and I’d like to hear your thoughts and those of others in the class. Thanks! --[[User:Gclinch|Gclinch]] 02:29, 23 March 2011 (UTC)
A very interesting study on the 'Four Phases of Internet Regulation'. It talks about how the concept of internet regulation has changed from its early days to the present:
[http://www.law.harvard.edu/faculty/faculty-workshops/palfrey.faculty.workshop.summer.2010.pdf Four Phases of Internet Regulation] [[User:syedshirazi|SyedShirazi]] 21:53, 22 March 2011 (UTC)
In Professor Lessig's [http://www.socialtext.net/codev2/index.cgi?free_speech Chapter 12: Free Speech] he makes the well-reasoned proposal that a way to protect children from unwanted speech on the internet would be to implement the browser tag <H2M> ("harmful to minors"). I understand his reasoning, and it makes a great deal of sense. In suggesting how to accomplish universal acceptance of this technique, Professor Lessig says, "This is the role for government."
Now, I haven't finished his book, and knowing how well he backs his arguments I won't be surprised to find that he has tackled this question, but until I get there I must ask: do we really want government to get into the business of legislating actual code?
Professor Lessig's point that "code is law" teaches us that the code writer can be the secret hand that regulates us through the choices made when programs are written. For instance, when we are in a virtual environment, what we can do is limited substantially by the choices the programmer made when she wrote the program. I think it is a legitimate role of government to protect us from the undue influence of the coder, especially when the software involved might be used by an intermediary who has been granted special status by [http://www.law.cornell.edu/uscode/html/uscode47/usc_sec_47_00000230----000-.html Communications Decency Act § 230.]
I'm not so sure that we should carry the logic to the next level by saying that it is government's role to actually dictate aspects of code. Classmates: what do you think? --[[User:Gclinch|Gclinch]] 01:49, 23 March 2011 (UTC)
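As an aside on the mechanics of the proposal discussed above: the following is a minimal, hypothetical sketch (in Python) of how a "kids-mode" browser or filtering proxy might honor an <H2M> self-label. The tag and attribute names and the helper functions are illustrative assumptions for discussion only; Code 2.0 argues for requiring the label, not for any particular syntax or implementation.

<pre>
# Hypothetical sketch of the <H2M> ("harmful to minors") self-labeling idea.
# The tag name, the meta-element form, and the function names are assumptions,
# not Lessig's actual specification or any real browser API.
from html.parser import HTMLParser


class H2MScanner(HTMLParser):
    """Detect an H2M self-label anywhere in a page."""

    def __init__(self):
        super().__init__()
        self.flagged = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Accept either a dedicated <h2m> element or a <meta name="H2M"> label.
        if tag.lower() == "h2m":
            self.flagged = True
        elif tag.lower() == "meta" and attrs.get("name", "").lower() == "h2m":
            self.flagged = True


def child_mode_allows(html_text: str) -> bool:
    """Return False if the page labels itself as harmful to minors."""
    scanner = H2MScanner()
    scanner.feed(html_text)
    return not scanner.flagged


if __name__ == "__main__":
    labeled = '<html><head><meta name="H2M" content="true"></head><body>...</body></html>'
    unlabeled = '<html><head><title>News</title></head><body>...</body></html>'
    print(child_mode_allows(labeled))    # False: a kids-mode browser would block it
    print(child_mode_allows(unlabeled))  # True: renders normally
</pre>

The design point in the sketch is that government's role would end at requiring speakers to label their content; the decision to filter stays on the client side, with parents and browser vendors.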
Recently, at the height of the confusion around the disasters in Japan, a UCLA student posted a highly offensive YouTube video that, at best, insensitively mocked her Asian peers. While the school chose not to take disciplinary action, saying the video did not violate school policy, the community at large took action by shunning and harassing her to the point that she withdrew from the school. UCLA is also being criticized by scholars of race and gender, who argue that the objectification of Asians in the video is harmful and displays a deep-rooted, often overlooked racism that falls outside of the black/white paradigm. It has been recommended that UCLA promote multicultural understanding and sensitivity by introducing mandatory courses and/or workshops. I understand how this video and the student's views are protected as free speech, regardless of how repugnant her words are, but I also find it deeply disturbing that the video went viral mainly due to morbid curiosity. A good analysis of the deeper harmful racist views and effects can be read here: [http://www.insidehighered.com/news/2011/03/22/ucla_student_s_youtube_video_illustrates_many_asian_racial_stereotypes?loc=interstitialskip UCLA Student's YouTube Video Illustrates Many Asian Racial Stereotypes]. While there may be no legal action that can be taken against a video of this nature, we as a culture unwittingly make it popular and far-reaching through repeated views and backlash videos, which is something I think we should all consider before we click on the next "shocking" link. With such easy access to such content on the internet, I think there is a personal responsibility as to what goes viral for the wrong reasons and what just gets lost in the far corners of the internet. [[User:Deinous|Deinous]] 02:14, 25 March 2011 (UTC)
This article from the NY Times takes the issues from the AutoAdmit case even further. It deals with teens, sexually explicit photos, and texting. Again, what speech is protected? What is not? What about cyberbullying among teens as opposed to adults? Is there any kind of legal relief the target can seek? http://www.nytimes.com/2011/03/27/us/27sexting.html --[[User:SCL|SCL]] 14:00, 28 March 2011 (UTC)


== Links from Class ==
Slides for today's class: [http://cyber.law.harvard.edu/is2011/sites/is2011/images/IS2011-3.22.11-Regulating_Speech_Online.ppt.pdf http://cyber.law.harvard.edu/is2011/sites/is2011/images/IS2011-3.22.11-Regulating_Speech_Online.ppt.pdf]
