Regulating Speech Online

From Technologies and Politics of Control
{{ClassCalendar}}


'''February 18'''


The Internet has the potential to revolutionize public discourse. Instead of large media companies and corporate advertisers controlling the channels of speech, anyone with an Internet connection can, in the words of the Supreme Court, “become a town crier with a voice that resonates farther than it could from any soapbox.” (Reno v. ACLU). Internet speakers can reach vast audiences of readers, viewers, researchers, and buyers that stretch across real space borders, or they can concentrate on niche audiences that share a common interest or geographical location. What's more, speech on the Internet has truly become a conversation, with different voices and viewpoints mingling together to create a single "work."


With this great potential, however, come new questions. What happens when anyone can publish to a global audience with virtually no oversight? How can a society balance the rights of speakers with the interest in safeguarding minors from offensive content? When different countries take different approaches to speech, whose values should take precedence? When a user of a website says something defamatory, when should we punish the user and when should we punish the website?


In this class, we will look at how law and social norms are struggling to adapt to this new electronic terrain.
 
Joining us this week will be [http://cyber.law.harvard.edu/people/jhermes Jeff Hermes], Director of the [http://www.dmlp.org/ Digital Media Law Project].
 
 
 
== Assignments ==
 
The first half of assignment 2 (posting your prospectus) is due before class ''next week (Feb. 25th)''. Information on the assignment can be found [[Assignments#Assignment_2:_Prospectus|here]].


<onlyinclude>
== Readings ==
; Private and public control of speech online
* [https://www.youtube.com/watch?v=HfS_2oXVch0 Berkman Center, How Internet Censorship Works] (about 7 mins., watch all)
* [http://access.opennet.net/wp-content/uploads/2011/12/accesscontrolled-chapter-5.pdf Ethan Zuckerman, Intermediary Censorship (from ''Access Controlled'')]
* [http://www.newrepublic.com/article/113045/free-speech-internet-silicon-valley-making-rules Jeffrey Rosen, The Delete Squad (New Republic)]


* Biz Stone and Alex Macgillivray, [http://blog.twitter.com/2011/01/tweets-must-flow.html The Tweets Must Flow] and [http://blog.twitter.com/2012/01/tweets-still-must-flow.html The Tweets Still Must Flow]
 
* [http://googleblog.blogspot.co.uk/2007/11/free-expression-and-controversial.html Rachel Whetstone, Free Expression and Controversial Content on the Web]


; Speech laws and liabilities in the United States
* [http://www.citmedialaw.org/legal-guide/defamation Citizen Media Law Project Legal Guide: Defamation]


* [https://en.wikipedia.org/wiki/Section_230_of_the_Communications_Decency_Act Wikipedia, Section 230 of the Communications Decency Act]
* [https://www.eff.org/sites/default/files/cda-ag-letter.pdf Letter to Members of Congress from 49 state and territorial Attorneys General]
; Cross-border concerns
* [http://freespeechdebate.com/en/media/susan-benesch-on-dangerous-speech-2/ Susan Benesch, Dangerous Speech] (audio interview, about 9 mins., listen to all)
* [http://techpresident.com/news/wegov/24189/twitter-hands-over-data-unbonjuif-authors-french-authorities Jessica McKenzie, Obeying French Courts, Twitter Hands Over Identities of Users Who Employed Anti-Semitic Hashtag (TechPresident)]
* [http://edition.cnn.com/2012/09/14/opinion/york-libya-youtube/index.html Jillian York, Should Google Censor an Anti-Islam Video?]
== Optional Readings ==


* [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1625820 David Ardia, Free Speech Savior or Shield for Scoundrels: An Empirical Study of Intermediary Immunity Under Section 230 of the Communications Decency Act] (Read all of Section I, Parts C&D of Section II, and Conclusion)
* [http://arstechnica.com/tech-policy/news/2009/03/a-friendly-exchange-about-the-future-of-online-liability.ars John Palfrey & Adam Thierer, "Dialogue: The Future of Online Obscenity and Social Networks" (Ars Technica)]


* '''Case Study: The SPEECH Act'''
:* [https://en.wikipedia.org/wiki/Funding_Evil#Libel_controversy Wikipedia, ''Funding Evil''] (focus on the “Libel Controversy” section)
:* [http://www.govtrack.us/congress/bills/111/hr2765/text 111th U.S. Congress, H.R. 2765, “Securing the Protection of our Enduring and Established Constitutional Heritage Act” (“SPEECH Act”)]
* [http://scholar.google.com/scholar_case?case=1557224836887427725&q=reno+v+aclu&hl=en&as_sdt=2,22 ''Reno v. American Civil Liberties Union'', 521 U.S. 844 (1997)]
* [https://cyber.law.harvard.edu/sites/cyber.law.harvard.edu/files/Evolving_Landscape_of_Internet_Control_3.pdf Hal Roberts et al., The Evolving Landscape of Internet Control]
* [http://access.opennet.net/wp-content/uploads/2011/12/accessdenied-chapter-5.pdf Jonathan Zittrain and John Palfrey, Reluctant Gatekeepers: Corporate Ethics on a Filtered Internet (from ''Access Denied'')]
* [http://www.theatlantic.com/international/archive/2011/09/adapting-us-policy-in-a-changing-international-system/245307/ Anne-Marie Slaughter, Adapting U.S. Policy in a Changing International System]
 
* [http://www.dmlp.org/blog/2012/structural-weakness-internet-speech Andy Sellars, The Structural Weakness of Internet Speech]
 
</onlyinclude>
==Links from Class Discussion==
 
Jeff Hermes' bio: http://cyber.law.harvard.edu/people/jhermes
 
"Incorporation" of the First Amendment against the states: https://en.wikipedia.org/wiki/Incorporation_of_the_Bill_of_Rights
 
Jacobellis v. Ohio ("I know it when I see it"): https://en.wikipedia.org/wiki/Jacobellis_v._Ohio
 
Miller v. California (the obscenity standard): https://en.wikipedia.org/wiki/Miller_v._California
 
Hustler v. Falwell: https://en.wikipedia.org/wiki/Hustler_Magazine_v._Falwell
 
Analysis of Federal Restricted Buildings Act: http://www.snopes.com/politics/crime/restricted.asp
 
NYT v. Sullivan:  https://en.wikipedia.org/wiki/New_York_Times_Co._v._Sullivan
 
Gertz v. Robert Welch, Inc:  https://en.wikipedia.org/wiki/Gertz_v._Robert_Welch,_Inc.


US v. Alvarez: https://en.wikipedia.org/wiki/United_States_v._Alvarez


Society of Professional Journalists' Code of Ethics: http://www.spj.org/ethicscode.asp


===Links from Adobe Connect Session===
Recent Microsoft issues with search censorship:  http://www.herdict.org/blog/2014/02/13/bing-needs-to-explain-its-search-algorithms/


US Constitution: http://www.archives.gov/exhibits/charters/constitution.html
Duties of Chief Justice of the US Supreme Court: http://en.wikipedia.org/wiki/Chief_Justice_of_the_United_States#Duties


State constitutions: http://www.constitution.org/cons/usstcons.htm
Litigation involving Wikimedia Foundation: http://en.wikipedia.org/wiki/Litigation_involving_the_Wikimedia_Foundation


US regulations are in the Code of Federal Regulations: http://www.gpo.gov/fdsys/browse/collectionCfr.action?collectionCode=CFR
John Seigenthaler Wikipedia Biography Controversy: http://en.wikipedia.org/wiki/Wikipedia_Seigenthaler_biography_incident


Map of the circuit courts and their jurisdictions: http://www.uscourts.gov/uscourts/images/CircuitMap.pdf
Background on the Innocence of Muslims video: https://en.wikipedia.org/wiki/Innocence_of_muslims


The Supreme Court has been taking fewer and fewer cases: http://www.nytimes.com/2009/09/29/us/29bar.html?_r=0
Example of effect of YouTube videos on banks: http://www.theregister.co.uk/2013/01/09/us_banks_ddos_blamed_on_iran


The International Shoe test: https://en.wikipedia.org/wiki/International_Shoe_v._Washington
CDA Section 230: https://en.wikipedia.org/wiki/Section_230_of_the_Communications_Decency_Act


Zippo Manufacturing Co. v. Zippo Dot Com, Inc., a very influential district court case from Pittsburgh addressing jurisdiction over websites: https://en.wikipedia.org/wiki/Zippo_Manufacturing_Co._v._Zippo_Dot_Com,_Inc
Reno v. ACLU: https://en.wikipedia.org/wiki/Reno_v._American_Civil_Liberties_Union


A more recent case of "Libel Tourism" - this time concerning parties from Ethiopia: http://blog.indexoncensorship.org/2013/02/25/london-libel-ruling-against-ethiopian-dissident-shows-urgent-need-for-reform/
David Ardia did a pretty thorough review of where we are with CDA 230, about 10-15 years later: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1625820


Shaari v. Harvard: http://masscases.com/cases/sjc/427/427mass129.html
Facebook's content review guidelines: http://gawker.com/5885714/inside-facebooks-outsourced-anti+porn-and-gore-brigade-where-camel-toes-are-more-offensive-than-crushed-heads


The National Center for Missing & Exploited Children has done a lot to ensure that online sites remove child pornography: http://www.missingkids.com/home
Facebook terms of service: https://www.facebook.com/terms.php


The NPR talk show On Point with Tom Ashbrook had a segment on Cyberbullying and Sexual Shaming: http://onpoint.wbur.org/2013/01/28/cyberbullying
Twitter ToS: https://twitter.com/tos


State Attorneys General Letter:  https://www.eff.org/sites/default/files/cda-ag-letter.pdf


==Class Discussion==
{|class="wikitable" border=3 style="margin: auto; background-color:#FFFFCC;"
|align="center"|<big>'''REMINDER'''</big>
|-
|style="background-color:#FFFFFF;"|Your comments must be submitted '''before 4:00PM ET''' on the Tuesday we hold class in order to count for participation credit.  Please see the [[Class Participation|participation policy]] for more information.
|}
<br/>
<div style="background-color:#CCCCCC;">Please remember to sign your postings by adding four tildes (<nowiki>~~~~</nowiki>) to the end of your contribution.  This will automatically add your username and the date/time of your post, like so: [[User:Andy|Andy]] 15:12, 7 November 2013 (EST)</div>
----
"The spread of information networks (the internet) is forming a new nervous system for our planet" - Hilary Clinton.
See:  https://www.youtube.com/watch?v=ccGzOJHE1rw
For governments to react expeditiously to help individuals or communities in distress, there must be freedom of speech online.
But for this to be effective, the process needs to be organized and formalized. Individuals need to ensure they are sending not noise and gibberish but useful information, so that the government, other able individuals, NGOs, or even private corporations can come to the rescue.
[[User:Ichua|Ichua]] 06:57, 12 February 2014 (EST)
----
I have to say, I found "The Delete Squad" article by Jeffrey Rosen to be extremely interesting. While I find hate speech despicable, I agree with the conclusion at which "The Deciders" arrived: to intervene only in rare cases in which resulting violence appears imminent. In this age of prolific internet bullying, I can see how many people (particularly parents) might be inclined to argue that regulations must be implemented, but to me the solution seems to lie more in individuals' own usage of the internet. By this I mean that a person should be responsible for restricting his or her (or his or her child's) internet usage so that he or she is not actively involved in sites which might be problematic. [[User:Castille|Castille]] 02:26, 15 February 2014 (EST)
:Rosen's article sheds a lot of light on what has become a very important content-control force in digitally-mediated discussions. For me, the most interesting and troubling aspect of this is the time they take to decide these things. Rosen claims the content review groups at Facebook have on average 20 seconds to evaluate a claim before acting upon it. It is nearly impossible to internalize in such a short period of time the complicated elements Susan Benesch flags to separate the dangerous from the tasteless but far less dangerous - the context, the speaker, the audience, etc. How can they be expected to do in 20 seconds what scholars and courts spend years (and many trees of paper) contemplating in other contexts? (Oh, and to your next post - book recommendations are always welcome!) [[User:Andy|Andy]] 21:40, 17 February 2014 (EST)
:: Yeah, and it also seems unlikely that they have an entire team of lawyers (or other equally "qualified" professionals) working on every single claim. I wonder-- and perhaps I just missed it, if it's mentioned-- whether they favor a more lenient or strict position, on average. It seems "easier"/"safer" to delete anything that becomes an issue and deal with it again if and only if the deletion is contested. Over the years, I've known several friends whose profiles, including all of their photos, comments, etc., were deleted without so much as a notice.
----
This might be a little off-topic, so I apologize in advance if it's "inappropriate", but I was wondering if anyone has read ''The Circle'' by Dave Eggers? These readings-- and my exchange with Ichua on last week's discussion board-- have really made me consider the thoughts posed in that book. Basically, the book is about a company (a la Facebook) which seeks to "complete the circle" of internet usage and identity. It functions as a sort of government in and of itself, as well as a full-fledged community/world. Everything is consolidated on their system, so that people have basically none of the anonymity online that we have now; the internet is no longer removed from reality, but is instead a virtual reality in the most literal sense. All of their information is stored within the system, including their medical records, family history, purchase history, job details and tasks, and essentially all communication is conducted through the site. There is also a security camera system which is set up and controlled by the users, but has become so prolific that essentially every area of the globe is under surveillance. While the situation posed in the novel is drastic and even scary, there are a lot of positives to certain aspects. I think the biggest concern is not necessarily the loss of privacy, but the question of who controls (or should control) such a system. Certainly controls should exist, but surely corporations should not have that much power or intimate knowledge, and it seems that even a government would not suffice for such a job. Should there be another authority? If so, what sort of entity would be qualified to do such a job? I'd love to hear other people's perspectives, whether you've read it or not. [[User:Castille|Castille]] 12:55, 17 February 2014 (EST)
----
*'''NOTE 1''' While reading this week's articles, I took a break from homework to scroll down my Facebook newsfeed. I came across a post by a friend in Quebec, about a website that satirizes Snapchat. When I clicked the link, it gave me an error message. I messaged my friend, and she was able to open the link with no problem from Quebec. From the comments on her post, it seems as though the only questionable content was some dirty pictures on the site, but nothing I understand to be restricted in the USA. That was a bit weird/scary...
*'''NOTE 2''' Now that I am done reading this week's articles, I am more nervous to post my honest response to some of the articles than I used to be!
*'''QUESTION''' Does anyone know the Wiki Markdown version of <code>target="_blank"</code>? I'd be happy to add the markup to the class readings if anyone knows what the code is (I've tried Googling it... no luck...)
[[User:Deluxegourmet|Erin Saucke-Lacelle]] 15:27, 17 February 2014 (EST)
: It is generally considered bad practice in web development to use target="_blank" outside of very specific, exceptional cases. The reason is simple: if the link has no target attribute, the behaviour is defined by the user's settings and actions, as they can either click the link or right-click and open it in another tab/window/etc., with some browsers offering other options such as click&drag, middle click, etc. If the link has a target="_blank" attribute, on the other hand, the user is forced to open the link in a separate tab/window - their actions are thus limited by the developer, for no good reason (even if the developer thinks there is a good reason, there usually isn't). --[[User:Seifip|Seifip]] 17:39, 17 February 2014 (EST)
::Thank you for the note Seifip!!! Makes sense; maybe I can play around with Chrome settings and see if I can set it so outside links always open in a new tab... Not that I'm too lazy to press the cmd/ctrl key for each link... (well, I guess a bit) but my keyboards are all in different languages, which confuses the crap out of my typing muscle memory, so I love it when browsers already know which links I want in a new tab (:
::: [https://chrome.google.com/webstore/detail/linkclump/lfpjkncokllnfokkgpkobnkbkmelfefj?hl=en Linkclump] extension is your friend :) --[[User:Seifip|Seifip]] 07:58, 18 February 2014 (EST)
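For what it's worth, MediaWiki has no per-link wikitext equivalent of <code>target="_blank"</code>; external links open in the same tab unless the site is configured otherwise. If you administer the wiki, the core <code>$wgExternalLinkTarget</code> setting is the usual lever. A minimal sketch of a <code>LocalSettings.php</code> fragment, assuming a stock MediaWiki install with admin access (ordinary editors cannot do this from wikitext):

```php
# LocalSettings.php -- site-wide MediaWiki configuration (requires admin access).
# Render every external link with target="_blank" so it opens in a new tab/window.
# Note: this applies wiki-wide; there is no per-link wikitext syntax for it.
$wgExternalLinkTarget = '_blank';
```

As Seifip notes, forcing a target overrides reader preferences, which is why many wikis deliberately leave this unset.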
----
As I was considering the intersections of this week's readings, several articles reminded me of a case from 2000. Although it did not involve the Internet or services like the Flickr or Picasa most of us are familiar with today, the parallels and concerns will seem obvious.
When we think about the amount of photographic content that now goes up daily on Facebook, Flickr, Picasa, etc., and consider the roles of these “Deciders” (as defined in one of the readings), the case of an Oberlin, Ohio family back in 2000 seems like it could play out over and over again if individual states received powers of prosecution to the extent that the State Attorneys General request in their July 23, 2013 letter to Congress.
Some may remember the case I'm referring to. In an overly distilled summary, it involved an amateur photographer who was chronicling her daughter's life in still photography. Some photographs included her (then 8-year-old) daughter bathing. When the photos were developed by the local film-processing lab, a clerk reported them to the police as an incident of “child pornography”. The local police agreed, the mother was arrested, and the case garnered national attention at the time, with the ACLU coming to the defense of the mother.
http://www.oberlin.edu/alummag/oamcurrent/oam_spring_00/atissue.html
[Later the subject of an entire book looking more closely at the issues]
http://www.pbs.org/newshour/art/questions-of-photographic-propriety-in-framing-innocence/
The letter by the 49 Attorneys General certainly strikes at a horror that anyone with a human heart will be equally enraged by: the tragedy of child abuse, sex trafficking, and exploitation. While it seems odd that the term “the State” is omitted from the current language of the CDA, I wonder if by including “the State” in the CDA's language we will end up introducing a sliding scale of laws defined by “the standards of any small community,” enforcing crimes that ''they'' define as “obscenity” and/or “child pornography.”
What is viewed as unprotected speech and deemed “obscenity” (or “child pornography”) in Lorain County, Ohio may not meet the same definition in (say) San Francisco. With the addition of “the State” to the CDA, could the State of Ohio prosecute a photographer in San Francisco for posting an “obscene” picture to a Flickr account accessible to users in Ohio? If the definition of “obscenity” is based on the Miller test (below), what are the “community standards” that define obscenity in a case where one state wishes to prosecute someone in another “community”?
The Miller test for obscenity includes the following criteria:
(1) whether ‘the average person, applying contemporary community standards’ would find that the work, ‘taken as a whole,’ appeals to ‘prurient interest’
(2) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law, and
(3) whether the work, ‘taken as a whole,’ lacks serious literary, artistic, political, or scientific value.
[[User:Psl|Psl]] 17:47, 17 February 2014 (EST)
:Thanks for contributing! Just to clarify, the constitutional definition of actionable obscenity under ''Miller'' has a geographic element to it, which tailors the more general [http://www.law.cornell.edu/uscode/text/18/part-I/chapter-71 criminal statute], but in the realm of child pornography neither the [http://www.law.cornell.edu/uscode/text/18/2252 criminal statute] nor the [https://en.wikipedia.org/wiki/New_York_v._Ferber First Amendment doctrine] bases liability on community standards. So while obscenity can vary from state to state, child pornography does not. (And both are illegal at the federal level.) [[User:Andy|Andy]] 18:47, 17 February 2014 (EST)
----
I have a greater appreciation for the issues involved in online free speech after this week's articles. I somewhat disagree with Zuckerman's conclusion that private limitation of speech in private spaces is "dangerous for a public society," in that I believe private companies need to be able to define what is or isn't acceptable communication within their own environments--we're guests in these areas, and it's up to the companies owning the spaces to decide what sort of environment their guests are going to experience. On the other hand, I don't think it can be the government that defines what's acceptable--it needs to be up to the individual owners of these spaces. I'm concerned about any encroachment on an individual's or private enterprise's ability to decide what rules are appropriate for itself. While I find the content of, say, a site like Stormfront (a white separatist website) to be totally repugnant, I would defend their right to publish what they do--if anything, it simply exposes their nonsense to public scrutiny and criticism.
I am sympathetic to Benesch's thinking about "dangerous speech," and in particular it does make sense that the context (speaker, political environment, proximity to sensitive events, lack of competition/criticism) can make hate speech turn into something more insidious. Nevertheless, I'm unable to think of a good solution that doesn't actually make things worse. She claims to defend freedom of expression yet makes a distinction between expression and freedom of the press (dissemination); I find myself unable to disentangle the two. When one considers the international aspects, and the potential for international lawsuits (such as the French cases we've discussed), it seems like it would be unusually hard to apply her test to speech and protect the right of companies in places such as the United States to publish things that someone might claim to be "dangerous" elsewhere. For example, would the Chinese government find it "dangerous" if the customers of Twitter posted content about how there should be an end to single-party rule? Where do we draw the line? It's clear that not only are the interests of certain governments at stake (and their authoritarian approaches to speech) but also the simple fact that some countries (such as the Rwanda example) may not have the institutions or cultural heritage to handle US-style free speech; yet is it fair to force US companies to account for all of these cross-border and cross-cultural differences?
[[User:Jradoff|Jradoff]] 20:08, 17 February 2014 (EST)
----
I also found myself somewhat sympathetic to Benesch's concern about dangerous speech. However, it is unfair and implausible to make US companies responsible for such cross-border/cultural differences. It is bad for business and generally not a policy I would deem logical. The way I see it: should a company be held liable for slander that someone utters while in its establishment, or be punished for someone who spray-paints a hate message on the company's door? Although businesses can take precautions to try to prevent such occurrences, to do so over the internet is a much more painstaking task. Furthermore, I think the bounds of what constitutes "hate speech" are being stretched to some degree. Constitutionally, and as many Supreme Court cases have held, freedom of speech is protected so long as it does not "incite violent action". For example, to instruct people to harm someone of a certain race would be considered unlawful. In my mind, that is where the line must be drawn.
Though, as others have mentioned, internet bullying is becoming more widespread and has resulted in teen suicides and, as some have theorized, possibly contributed to the uptick in school shootings. Still, to what degree should we be prosecuting internet hecklers for this behavior? As Professor mentioned in class, once an incident occurs Congress tends to look for an immediate remedy via legislation when it may not necessarily be the answer. Of course I find it horrible and morally repugnant that someone would bully an innocent person online, but does this mean that every bit of our speech should now be scrutinized, and that if we, for example, call someone fat online we should be given a misdemeanor? If our society deems legal recourse appropriate for online bullying, it will become quite convoluted to stake out the levels and appropriate punishments for each offense. Should a few "bad apples" online ruin or impede the benefits of free internet speech for the masses of good people in society who thrive off of our shared knowledge? Should McDonald's cheeseburgers be illegal to protect those who struggle with obesity? No matter how you frame it, more restrictions will eventually equate to more inhibition for companies and citizens alike. Such inhibition, I argue, thwarts a society's economic and intellectual growth.
--[[User:AmyAnn0644|AmyAnn0644]] 10:34, 18 February 2014 (EST)
:I'm really glad you brought up the issue of bullying! This is an area where the Berkman Center's Youth and Media Lab have been doing [http://cyber.law.harvard.edu/publications/2012/kbw_bulling_in_a_networked_era some great research] around framing, understanding, and assessing efficacy of solutions to bullying. [[User:Andy|Andy]] 11:15, 18 February 2014 (EST)   
:I agree with your points, AmyAnn, about the difficulty of dealing with bullying and regulating harassment online without stifling speech. The reading I've done on this issue, which has been more about harassment of women and not children, highlights the need for enforcement of what laws we do have. It's not that we need more laws, it's that we need the existing ones to be understood in the context of the Internet and to be enforced by the authorities. Amanda Hess wrote a really wonderful piece about her experience with this that I think I mentioned during one of the first weeks of class, which is long but well worth the read. [http://www.psmag.com/navigation/health-and-behavior/women-arent-welcome-internet-72170/#.Us2TKsXlSF4.twitter] Lindy West wrote a follow up for Jezebel [http://jezebel.com/we-must-not-shut-up-about-how-women-are-treated-on-the-1496622407], which gives a quick overview and her own commentary. [[User:Jkelly|Jkelly]] 12:43, 18 February 2014 (EST)   
Thank you for sharing these sources! It is refreshing to see how more people are getting involved in spreading the message about cyber-bullying, and I believe communication and public awareness initiatives are crucial in combating these issues, particularly in targeting the most vulnerable and dominant population on the web (the youth). The modern parent has more to consider in raising children with regular access to the cyber world, both from the perspective of the victim and in preventative measures. A recent Pew survey noted that 90% of teens had witnessed cyber-bullying yet did nothing about it. Imagine how many lives would be saved if everyone took a stand against cyber-bullying. Then again, I suppose the children did not know what to do or whom to report their observations to; one might think to inform the student's parents, but perhaps the teen did not know them. What action could this 90% of teens have taken? Call the police, and on what grounds? At first blush, 90% of teens not reporting bullying seems like an awful statistic, but when considering the lack of direction or guidance in knowing (as a society) how to deal with these matters legally, it all trickles down and muddles the situation to the point where a concerned citizen may not be able to effectively help his fellow cyber-victim. In any event, without communication, these teens may not even recognize cyber-bullying to begin with and may become "desensitized" to the point where it may not even cross their mind. Communication is critical for our community to even be aware of what goes on in cyberspace, but as Jkelly mentions, all of the communication and education still cannot trump the lack of enforcement or a clear legal path for dealing with these issues.
Has anyone seen the documentary "Submit"? It was created by parents of internet-bullying victims, and the production discusses just how dangerous the bully's online "arsenal" has become, considering how one can, in the worst case, completely destroy someone's social standing, career, and identity. The "arsenal," they say, is dangerous because it is both "vast" and "at a distance," offering a bully the prime environment in which to operate.
Here is the link for the documentary for those interested:
http://www.submitthedocumentary.com/
--[[User:AmyAnn0644|AmyAnn0644]] 14:08, 18 February 2014 (EST)
       
----
While I find Susan Benesch's pursuit of a more nuanced definition of free speech quite commendable, I find that her definition of dangerous speech is prone to subjective assessment and can lead to excessive censorship. Some of the factors, such as the charisma of the speaker, are difficult to assess and are shared between speakers for bad and good causes. Other factors, such as historical context, are equally less than ideal as history is not a constant, a fact, but rather something defined by the state and current generation based on its limited knowledge of the past and current view of the events. The way we see and interpret history changes virtually every decade, and it would be nice if the view of what constitutes dangerous speech was not tied to such an uncertain factor. --[[User:Seifip|Seifip]] 08:11, 18 February 2014 (EST)
:Great points, [[User:Seifip|Seifip]], and I suspect Susan would agree with you that there is still a gap between what factors should and shouldn't matter, and how that translates to policies, procedures, and rules for monitoring against dangerous speech. The tie between the substantive and procedural issues around freedom of expression is a fascinating place to explore at some depth. [[User:Andy|Andy]] 11:15, 18 February 2014 (EST)
----
I found the reading this week really interesting, as I am from the country that pioneered Internet censorship: China.  To be exact, I am from Hong Kong, one of the Special Administrative Regions of China. For those who are not familiar with the history of Hong Kong, it used to be a colony of Britain, and China resumed sovereignty in 1997. Hong Kong operates under the principle of “One Country, Two Systems”, which means that it has a different political, legal, and economic system from mainland China, to be maintained that way for at least 50 years.
Facebook, Twitter, and The New York Times have been on China's list of blocked websites because they are “politically sensitive”. Instead, China created its own social networking tool, Weibo. There are a couple of different Weibo services launched by different companies, but all of them cooperate with the internet censorship regime of the People’s Republic of China.
WeChat, a popular messaging app for smartphones similar to WhatsApp, Line, and Facebook Messenger, is also under censorship. Messages that contain certain keywords are filtered and blocked. Users who send those messages receive a notice saying, “The message you sent contains restricted words. Please try again.”
In September last year, the Chinese government finally allowed a small selection of people to access banned websites, including Facebook and Twitter. However, that “small selection” means only people within a specific 17-square-mile area of Shanghai. Many say this is a great start to a revolution, but I am not as optimistic as the rest. I do acknowledge the changes that have been made over the years; however, I believe this incident is only a one-time exception that the government made.
[[User:Jolietheone|Jolietheone]] 03:13, 18 February 2014 (EST)
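The keyword-based message blocking described above can be sketched in a few lines. This is a hypothetical illustration only: the term list, function name, and notice text are placeholders of my own, not WeChat's actual (non-public) implementation.

```python
# Illustrative sketch of keyword-based message filtering.
# RESTRICTED_TERMS and REJECTION_NOTICE are invented placeholders.

RESTRICTED_TERMS = {"bannedword1", "bannedword2"}
REJECTION_NOTICE = ("The message you sent contains restricted words. "
                    "Please try again.")

def screen_message(text: str):
    """Return (delivered, response): block delivery if any restricted term appears."""
    lowered = text.lower()
    if any(term in lowered for term in RESTRICTED_TERMS):
        # Sender sees only the rejection notice; the message is never delivered.
        return (False, REJECTION_NOTICE)
    return (True, text)

print(screen_message("hello"))             # delivered unchanged
print(screen_message("bannedword1 here"))  # blocked with the notice
```

Real systems are of course far more elaborate (phrase matching across scripts, server-side lists updated silently), but the user-visible behavior Jolietheone describes reduces to this kind of check.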
----
WHY WE CANNOT TRUST EVERYTHING ON SOCIAL MEDIA: OF FREE SPEECH AND LIES
http://news.asiaone.com/news/singapore/pm-lee-untruths-spread-through-social-media-hard-correctE
But rather than having other people or web robots do the filtering, we should be teaching our young people how to separate good, reliable information from bad, especially on social media.
[[User:Ichua|Ichua]] 11:53, 18 February 2014 (EST)
Ichua - I could not agree more!  It is hard enough for educated adults to filter through the propaganda spewed on the web; I can only imagine how a child would struggle with this.  Even the most reputable websites have had instances where misinformation or biased information was reported.  Educating our youth about cyber material will make or break our country's (and world's) future, both within the cyber world and the real world. 
--[[User:AmyAnn0644|AmyAnn0644]] 08:33, 19 February 2014 (EST)
----
Following up on Andy's and Castille's comments regarding content review and concern over the speed of content removal, I found Rachel Whetstone's entry about Google's policy on free expression and regulating speech particularly interesting. Whetstone emphasizes the importance of community, and the relative speed and accuracy with which hate speech and inappropriate content are flagged by the millions of Google users who self-police their online communities. She acknowledges the potentially problematic dynamic of subjective judgments about what is inappropriate, but I strongly agree that the majority of users, especially those who actively and regularly engage in online communities, will agree about what is acceptable and what is offensive. Castille brought up concerns over cyber-bullying and parental supervision and intervention; I would hope that the majority of parents would respond similarly to unacceptable content when they encounter it. Though the ability to consider, deliberate on, and process each case of potential content regulation or removal is indeed limited when the average content review period on platforms such as Facebook is 20 seconds (referenced by Andy), I would still trust the ability of a community of regularly engaged and informed reviewers to regulate content appropriately.
[[User:akk22|akk22]] 11:50, 18 February 2014 (EST)
While self-policing within a given online community is an ideal way of regulating instances of hate speech, this clearly does not always happen, partly because citizens may not know how to police such behavior, and partly because the internet is such a vast sphere that comprehensive human regulation has become unrealistic even if every cyber-goer were moral and acted on those values.  The greatest concern is how many crimes (particularly school shootings) could have been prevented if officials would, or could, do more to act on the "warning signs" often present on a teenager's social media sites.
The recent article below, from Staten Island's silive.com, discusses how the lack of a proactive policy has obstructed investigations.  For example, if a student is reported for violent content posted online, it is solely up to the discretion of the school principal to take action or dismiss the behavior as child's play; this is true even if an explicit threat is made.  In one instance a threat was posted and the principal chose to ignore it because he or she did not know what could be done.  This is an issue because a principal is not formally trained in law enforcement, and making these types of decisions comes with an enormous amount of responsibility.  In the case of the article below, law enforcement stepped in and conducted an interrogation, determining that the posting was nothing more than a hoax.  Making that determination, however, would be extremely difficult for a principal without the tools and training of a law enforcement officer.
http://www.silive.com/news/index.ssf/2014/02/post_716.html
--[[User:AmyAnn0644|AmyAnn0644]] 13:53, 20 February 2014 (EST)
----
NEW IDEA - ONLINE SOFTWARE FOR BUILDING THE COUNTRY FROM COLLABORATIVE FREE SPEECH
I am thinking of the Soft Systems approaches used in operations research, such as the "cognitive maps" described by Colin Eden (UK).  If there is an issue of national interest, we could have every interested person contribute to an interactive online cognitive map with a "revert-to-earlier-version" function like Wikipedia's.  That way, whoever contributes would have a sense of ownership of the map.  The positive or negative influence of one factor on another can be indicated by "+" and "-" signs, and the strength of a relationship can be shown with the line thickness of the arrows.  The contributor's name and his reasons or evidence for an added link could be displayed by clicking on the connecting arrow.  Well, this idea is not really new, as Colin Eden developed software for this called COPE, but it would need to be enhanced with the additional features suggested.  Also, if one contributor says "A ---->+ B" and another disagrees, the map could be modified with a second path from A to B, such as "A ---->+ C ----> -B", while still retaining the original link.  Most probably, a detailed read of the description of the first link would lead one to suggest "A ----> -D ----> +B" as a replacement for the original link.  Thus, the map will give us a "richer" picture of the elements affecting a particular issue as new links are added.
See: Colin Eden, "Using Cognitive Mapping for Strategic Options Development," in Jonathan Rosenhead (ed.), 'Rational Analysis for a Problematic World' (Wiley, 1989).
[[User:Ichua|Ichua]] 12:15, 18 February 2014 (EST)
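The map-with-revert idea above can be sketched as a tiny data structure: signed, weighted links between factors, each carrying its contributor's name and rationale, plus a snapshot history for a Wikipedia-style revert. Everything here (class and field names) is an illustrative assumption of mine, not Colin Eden's actual COPE software.

```python
# Minimal sketch of a collaborative cognitive map with revert, as proposed above.
import copy

class CognitiveMap:
    def __init__(self):
        self.links = {}      # (source, target) -> link metadata
        self.history = []    # snapshots of earlier map states, for reverting

    def add_link(self, source, target, sign, strength, contributor, rationale):
        """Record a signed influence link; snapshot the old state first."""
        self.history.append(copy.deepcopy(self.links))
        self.links[(source, target)] = {
            "sign": sign,            # "+" or "-" influence
            "strength": strength,    # rendered as arrow thickness in a UI
            "contributor": contributor,
            "rationale": rationale,  # shown on clicking the connecting arrow
        }

    def revert(self):
        """Restore the map to its state before the most recent change."""
        if self.history:
            self.links = self.history.pop()

cmap = CognitiveMap()
cmap.add_link("A", "B", "+", 2, "alice", "A tends to reinforce B")
cmap.add_link("A", "C", "-", 1, "bob", "A weakens C")
cmap.revert()  # undo bob's contribution, as with a wiki revert
```

A fuller version would keep contributor attribution in the history as well, so a revert itself is auditable, which matters if contributors are to feel ownership of the map.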
----
In related news... [http://www.bbc.com/sport/0/winter-olympics/26223586 Team GB want social media protection] --[[User:Seifip|Seifip]] 12:16, 18 February 2014 (EST)
----
In reviewing the readings for this week and digging deeper into the subject area, I walked away with a true appreciation of a topic that I had believed was easily definable. Perhaps this is indicative of the escalated polarization of issues and beliefs that we are currently experiencing. Benesch's concern about dangerous speech made an argument that I was willing to entertain. After more thought, I began to question the notion of censorship and the ultimate guideline for who decides what is acceptable. I am uncomfortable with any corporation placing limitations on private speech. I am more comfortable with the cultural norms of the local community self-regulating. While this may not be perfect, erring on the side of the collective conscience seems a much better path to civility.[[User:VACYBER|VACYBER]] 13:33, 18 February 2014 (EST)




Watching last week's lecture on the topic of free speech on the Internet reminded me that if the United States is to become a leader, or even be respected, in the Internet world, we must design our laws and programs to promote collaboration with other countries and cultures rather than the imperialistic measures we have used since the Monroe Doctrine was formulated.  Wikipedia's own rule is that the site runs on collaborative effort rather than a majority-rules principle.
----


We need to establish the goal of our participation on the Internet. If we continue to insist that only our way is the right way, God's way, then we will never reach, or be respected by, other cultures. Is it not better to compromise than to bully others into following our culture, rules, and principles, such as free speech?


[[User:Rich|Rich]] 11:39, 4 March 2013 (EST)
Observing the behavior of current providers and the positions of government leaders on the content of information, I see that there are not, and probably never will be, absolutely effective legal or technological mechanisms to control content on the Internet. If the issue were simple, all socially undesirable behaviors that occur on the network - the dissemination of child pornography, intellectual property infringement, manifestations of racial hatred, and many others - would have ceased long ago.
I agree that technological control mechanisms are available to providers: for example, a provider can simply edit information available on a website to remove or correct references that cause damage, erase the contents of a given page, or even remove files from the server used to store the information. This is a common and effective means of control, since the content provider is the one who exercises direct control over the information or files available on the respective website or server and may take steps to remove or block access to infringing material. Therefore, it is up to the judge to order the adoption of reasonable technical mechanisms, together with any other supporting measures that may be useful to obtain specific performance or an equivalent practical result.
The implementation of drastic measures to control content on the Internet should be reserved for extreme cases, where there is an obvious public interest that outweighs the potential damage caused to third parties; such measures should not be adopted in other cases, especially those dealing with individual interests, except in very exceptional situations that represent rare exceptions.
The difficulties inherent in protecting rights on the Internet can cause some perplexity. However, it is worth remembering that the network is a reflection of society and, as such, imperfect and subject to injustice. If to this day it has not been possible to protect with absolute perfection all the rights provided in a legal system, it would be naive of us to expect different results in Internet-related conflicts.
The documentary about Mark Zuckerberg describes the challenges faced by Facebook regarding the control of content. http://youtu.be/5WiDIhIkPoM [[User:Gisellebatista|Gisellebatista]]




----




Professor Sellars,
I feel that your "Structural Weaknesses" piece adequately addressed many key issues surrounding internet censorship of speech, especially the fact that extensive private regulation already happens among several different parties. I also especially liked your astute observation that the tragic Benghazi situation was far more nuanced than simply one person posting a video to YouTube; there were many pre-existing societal issues at play. I do have one question about the piece, though: When writing about how the White House requested that YouTube remove the video, you opine that the White House did so "very inappropriately." Are you saying that the manner in which the White House made the request was inappropriate, or was it inappropriate for the White House to make such a request at all? I'm genuinely curious to know what you think, seeing as how this request seems to involve the "bully pulpit" aspect of the President's executive branch, which in this case uses speech in order to regulate speech. [[User:Vance.puchalski|Vance.puchalski]] 15:19, 18 February 2014 (EST)


TAG: Student ID#10789842


The discussion on Why, How, and Who was insightful. It made me examine the concept of online behavioral intent more deeply, on both a micro and a macro level. This specific space (online), when examined, allows you to weigh both sides of the coin. On one side is censorship of content, which controls this behavior and shapes our participation in the internet; on the other side is freedom of speech. Politically, more and more countries have taken the position of restricting and controlling the internet through designed "Nation Boundaries," as mentioned in class.
:Thanks for reading, Vance! My views on this fluctuate a bit, but I tend to be very concerned with government engaging in censorship through "soft power" means like this - asking YouTube to rethink a decision, cutting off payment providers, etc. - when it lacks the constitutional authority to punish or enjoin it directly. I would be less concerned if they simply exercised their speech to say they disagree with the video, and maybe even YouTube's decision to keep it up. But to exercise pressure on a domestic intermediary crosses a line for me. For more on this, check out Jack Balkin's writing on [http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2377526 "old school" vs. "new school" speech regulation.] [[User:Andy|Andy]] 16:35, 18 February 2014 (EST)


The readings concerning the laws of defamation and the restriction of content on the internet suggest that the framework is flawed. Depending on your country of jurisdiction, the laws of defamation or control are interpreted differently. In the global information world of which we are all a part, restrictions are becoming tighter and tighter. One example is France restricting Yahoo from offering Nazi memorabilia online. Another way to view the precedent set by the French government is to ask what would happen if a corporation made tremendous acquisitions. If a fundamentalist Islamic group were to acquire Google, Bing, or both, it could become a paradigm shift in controlling the internet from an acquisition standpoint. The article "Funding Evil" extends this point by examining terrorist groups that may try to use these resources to distribute their messages of hate.
: Vance, I had wanted to ask the same question!  And I like Dr. Sellars's response.  But in Singapore, any negative implication about the government cannot be tolerated.  Politicians in the past who spoke negatively and aggressively about the government and questioned its integrity, particularly that of the Prime Minister or Deputy Prime Minister, without clear evidence were usually put in jail or made bankrupt, as in the case of J.B. Jeyaretnam. That is why Singaporeans prefer to express their views anonymously and on social media, but even then it can be a dangerous thing to do, as it may affect their career, their family, their life.  The majority would prefer to remain quiet or share their opinions only among very close friends and relatives.  I used to have lots of respect for LKY, but his treatment of J.B. Jeyaretnam seemed overly harsh and unnecessary, though this pales in comparison with wicked governments and leaders who execute their opposition.  Yet I wonder if the opposition could have been more tactful in their approach.  In some countries it may seem an acceptable way to attempt to remove a leader by smearing his character.  In another, the leader expects to be treated like a god and feels threatened when his character and integrity are called into question.    [[User:Ichua|Ichua]] 20:15, 18 February 2014 (EST)


The readings and the discussion in class this week were very interesting, and I appreciated them.
----


Have a nice week. [[User:Interestingcomments|Interestingcomments]] 17:06, 22 February 2013 (EST)


Re: Internet censorship, first video: I was particularly surprised that Google provides (or provided) near realtime indicators of takedown notices and censorship, country to country. It would be interesting to look into potential backlash to this, on a country-by-country basis, on the corporate or governmental level. (Though, perhaps Google has grown so large that it has cultivated a bit of immunity?) That being said, later in the video it did mention that corporations (theoretically even Google) are on “their turf” and have no choice but to comply.


This week there were fabulous documents that discussed internet security. Programs such as Adobe can at times be considered to be indexed. Here the issue is whether hyperbole or preposition is defined or at least contextualized. Bearing in mind the notion that "more is better," I shall continue typing: here the reader shall notice that a program, as it may be quoted, is merely a type of application that is for sanitary use. Therefore, the internet decidedly is not the same as a program, and the program is not necessarily reliant on the internet per se. In conclusion to this stream of logic, this type of compartmentalized information is not the actual understanding that something can be a mobile storehouse of information without affirmative action. Since computers are only affirmative in their position, there remains the possibility of autocorrecting such documents, and this very notion is the idea of internet security here. [[User:Johnathan Merkwan|Johnathan Merkwan]] 12:51, 26 February 2013 (EST)
A theme that keeps coming up in these readings, and in class, is that our perception of our freedoms on the internet seems to be skewed. It's almost inherent in our dealings with the net. Why do we, generally speaking, have this idealized view? An example could be user-created content: websites, commenting, user-focused platforms, and so on. In general, user-driven content, and the fact that anyone can theoretically add to the web space with relatively low visibility, could be leaving us with the idea that the web is truly open. The wide availability of pornography actually comes to mind as a decent example, in that, if this real-world, regulated material is so widely available online, then the net must be a "free" space.  


Re: Dangerous speech vs. hate speech: While watching this video it occurred to me — when she was speaking about not needing to limit the hate speech itself — that the internet provides people with such wide access to information across the globe, so that this hate speech could be accessible in a volatile area, thus making it also dangerous speech. She didn’t mention that fact, but perhaps I missed it in another reading…. It strikes me that it would be hard to define this based on territory and context, given widespread access to the web. [[User:Twood|Twood]] 15:25, 18 February 2014 (EST)


Reading the Wikipedia article on Section 230 of the Communications Decency Act prompted me to think about cyber-security and where liability should fall if private information were hacked, released, and later deemed defamatory. Earlier in the course I posted a Gizmodo article on Apple’s iMessage server being hacked, and I am beginning to wonder whether, as online hubs of information grow to become more institutional, individuals who are targets of cyber attacks will start blaming the companies that have their private information stored on their servers. I foresee that as the internet and cloud take on greater roles in the institutionalization of business and everyday life, issues like these will start arising. As to the Ars Technica interview, I think that scaling back protection for service providers in order to protect children presents a weak claim. I think doing so would severely alter the costs of operating a site like Myspace or Facebook. Also, it is my personal belief that a parent should be responsible for teaching their children how to correctly use the web and exercise safe practices online. I feel that extending liability and limiting free speech poses a great danger for the minor benefit of altering Section 230 for a very specific purpose. [[User:AaronEttl|AaronEttl]] 15:37, 23 February 2013 (EST)
----


I have to mention that the most interesting reading material was "Delete Squads". I share the views of the Deciders on the grounds that


I was very interested to read about Section 230 of the Communications Decency Act. I very much support the idea that an internet service provider should not be held liable for individuals' misuse of its publishing service where the company itself does not exercise creative control over the content (i.e., deleting a post is not a basis for liability). Although I am not an expert, this regime would seem to be different from Canada's, where we have witnessed very prominent internet communities being brought to their knees by posts made by one or two offenders. My proposed research article, on lawbuzz.ca, will be very much concerned with the ideas contained in s. 230 of the CDA, and I am looking forward to comparing your legal regime with Canada's.
[[User:Joshywonder|Joshywonder]] 21:35, 25 February 2013 (EST)


----


It was quite an interesting read about Section 230 for this class; however, in my view, suing the many websites involved for such defamatory content is virtually impossible. Since the information is distributed, those websites seem to be immune from liability and from any responsibility to screen or remove the offending content. Who is responsible, then? While the readings give a clear understanding of Section 230, its critics are more valuable and offer a completely new approach to dealing with Internet content facilitators. In my view, the interpretation of Section 230 can grant immense immunity to companies or websites that provide services or (inappropriate) content, sidestepping the defamation question altogether. And what about speech? Defamation, in my view, is not the only area of law that could find refuge under Section 230 immunity. Section 230 has been interpreted so broadly that it has protected the very types of inappropriate behavior it was intended to prevent. [[User:User777|user777]] 11:12, 26 February 2013 (EST)
Free speech and the Internet have been intermingled from the very beginnings of online interaction. But as the Internet has developed, so have people’s opinions about which rules and regulations apply from the “real world”.


The policies and outlooks of major corporations (like the tech companies of Silicon Valley) on free speech have shaped how people post and express themselves over the web.
But major issues have arisen from foreign countries' differing standards, like France demanding that Twitter hand over the identities of users who promoted hate speech. Even Google's censorship of a video, though only in a few particular countries, caused much concern, as does the fact that a Google image search for Tiananmen Square from China shows starkly different results from the same search in the US.
China, Russia, and other totalitarian countries have various ways of effectively filtering free speech on the Internet. These range in form but generally include DDoS attacks, hacking, filtering by keywords, flooding blogs with pro-government messaging, and even shutting down the internet for a time.
What is the best way to keep speech on the Internet censorship-free? There are many answers. One is circumvention tools; many have been developed, but they have proved of little use in the overall struggle.
It seems the best way is to have the giants of the Internet take a greater role, paving the way for generations to come.


The readings on Section 230 reminded me of Zittrain's argument that it was not inevitable that the internet world would turn out the way it is today. I think Section 230 and the way the courts have broadly interpreted it to protect internet service providers and social website companies has been, and still is, key to the success of the internet and maintaining that spirit of freedom the internet is known for. I agree with AaronEttl that Section 230's protections shouldn't be narrowed, especially if the reason is to increase online safety for young users. In addition to traditional education ("don't talk to strangers" adjusted and applied online), there are also plenty of parental filtering tools that can be used by individuals themselves. ISPs and websites that are not content publishers should not be held responsible for the actions of internet users, although they certainly can act according to their own company policies, which in turn would be determined by the company's principles, values, and target audience/market. It is not the shopping mall owner's fault if a person walks in and suddenly strips naked or does something worse. --[[User:Muromi|Muromi]] 11:43, 26 February 2013 (EST)
The discussion between John Palfrey and Adam Thierer about 47 U.S.C. Section 230 addresses some of the most contentious aspects of the law in a clear, concise way. Should online carriers be liable for content posted on their websites or web browsers? Each provides compelling reasons for his view. Adam Thierer argues in favor of keeping Section 230 intact, showing how crucial it has been to the development of the Internet. Although John Palfrey agrees, his view is to make stricter demands on ISPs and “interactive computer service providers”. Throughout their discourse they touch upon tough issues like negligence claims, increased government involvement, litigation, and child obscenity laws. [[User:Emmanuelsurillo|Emmanuelsurillo]] 15:57, 18 February 2014 (EST)




I concur with Joshywonder’s statement above:  ''I support the idea that the Internet service provider should not be held liable for individual's misuse of their publishing service.''  However, in some circumstances, I think the Internet provider should play a mitigation role.  While reading about defamation this week, online bullying emerged in my mind.  Social networking has changed the landscape with bullying, especially for children and adolescents.  Mass media presents a series of legal issues, as outlined on the Citizen Media Law site, but defamation via media has always been present.  Online confrontation among elementary and high school students highlights a distinct reality.  An adult may take action if one’s name is tarnished, but youth will rarely take bullying to the legal arena.  That said, it would be interesting to read more case studies surrounding the actions service providers have taken, when youth defamation scenarios have surfaced.  In our readings this week, the examples were excellent, but I’d also be interested in learning about specific cases against Facebook, or Twitter, or Orkut.  Furthermore, it would be interesting to investigate reactions from other countries.  For example, do other governments/courts of law acknowledge online youth bullying, and if so, is action taken?  Or, is online bullying more prevalent in the U.S. due to cultural/environmental factors?
----


It has become apparent in the first month of this course that online “freedom of speech” is a complex topic.  I very much enjoyed the discussion between John Palfrey and Adam Thierer, because both arguments shed light on valid points.  Thierer: ''“What I worry about, is that a new liability standard might not leave sufficient room for flexibility or experimentation.  If Congress altered Section 230 (or the courts tipped the balance) such that negligence claims could be brought too easily, I think that could have a chilling effect on a great deal of legitimate online speech, especially for many smaller social networking sites and up-and-coming operators.”''  This argument is in line with our readings from two weeks ago: More Confusion about Internet “Freedom.”  Suppressing freedom of speech can, in some circumstances, cause more damage than good.  If inhibited, our founding principles may not be upheld; but at the same time, there is an enormous unknown gray area between right and wrong communication practices.


Palfrey’s response in reference to this statement is also worth noting: ''“My proposal would be to leave the question of negligence on the part of service providers….[W]e need a range of community-based solutions that put parents, teachers, coaches, mentors, kids themselves, law enforcement, social workers, technologists and online service providers to work.''  In other words, the battle to uphold decency cannot be fought alone.  Society at large must step up to the plate as online communication evolves. It begins with new policies and trickles down through law enforcement officials, community leaders, and parents, ultimately impacting the instigators.  Although ''“no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,”'' as outlined in the Wikipedia post (Section 230 of the Communications Decency Act), we all play a role in protecting the vulnerable.  Many youth who are bullied have no options available, no protection, and no escape.  Given recent tragedies surrounding youth defamation, from my perspective, extreme circumstances should be analyzed on a case-by-case basis and pursued as appropriate. [[User:Zak Paster|Zak Paster]] 12:05, 26 February 2013 (EST)
Per Zuckerman's article, it is my conviction that "Internet censorship" in many countries often reflects those countries' level of tolerance for freedom of speech. The article gives the examples of China, Saudi Arabia, Tunisia, and Zimbabwe, just to mention a few. These countries' censorship of certain websites, blogs, "sensitive keywords," or the sharing of specific information is motivated by various reasons. However, the restrictions on their state-sponsored ISPs have also forced web hosting services such as BlueHost, as well as American internet giants such as LinkedIn, Google, and Microsoft, to comply with the governments of those countries in order to do business there.  


Personally, as much as I agree with Zuckerman's analysis of certain governments' censorship of the internet in relation to freedom of speech, I also believe that financial motivation is equally important to some of them. For instance, I recently traveled to the UAE and Kuwait for business. One of the instances of internet censorship that struck me most was how both countries blocked messaging applications such as Skype and Viber. I noticed that one cannot download those apps while in those two countries: either you download Skype and Viber in the U.S. before you leave, or you will not be able to do so once there. Is this censorship meant to restrict "freedom of speech" or to protect their local telecommunications companies?
[[User:cheikhmbacke|cheikhmbacke]] 15:55, 18 February 2014 (EST)


@Zak, I agree with your general stance on ISP immunity from individual expression, but, like Adam Thierer from the reading, I’m not as clear on the extreme circumstances that would warrant an information provider's intervention. Let’s take your example of online bullying. Online bullying does not happen in a vacuum; it usually happens as an extension of real-life interactions. This creates a number of concerns. First, how will an information provider recognize bullying? The danger is that this could be broadly interpreted. It may not be bullying at all, but an inside joke or a nod to some real-life context, and it would be impossible to understand the entirety of that context offline. Second, if content is taken down or mitigated, it could look as though the victim had to seek out help. That might make the victim feel weaker and more disempowered, and it might exacerbate the bullying in real life. So I guess my question is: when it comes to something like online bullying, how do we make sure the victim still has control over the situation, rather than making it seem like some third party came in to resolve the issue (because the victim tattled)? I agree society at large needs to protect the vulnerable, especially in public forums. And I agree that we as a society must manage online bullying; we just have to be careful that the solution is not simply “this is bullying, remove the content.” [[User:Asmith|Asmith]] 15:48, 26 February 2013 (EST)


*******
----


I was interested in the issues raised by Funding Evil and impressed by the speed with which several states, and then Congress, responded to the verdict against the author in the UK. I did some follow-up research on the topic and found one of the four other withdrawn texts available online. I was also interested in how libraries that had purchased one of the earlier texts (Alms for Jihad) responded to the publisher's request to destroy the book. (They didn't. Several then moved the book to reserve or hold desks so they could make sure it was not stolen or destroyed.) I checked the mainstream newspapers of the US and could find no review of Funding Evil, either before or after the court case in the UK, which made me wonder whether the book has gotten significantly more readers and attention from the trial and its surrounding publicity than it ever would have received without it. Alms for Jihad did not receive the same attention or increase in sales (it is, I think, out of print, and only available at libraries that have not destroyed it, and online as a PDF).
What can and cannot be written on the internet, who gets to decide, and who is liable if someone makes the wrong decision look to be things we will be discussing as a society for quite a while. Bullying, lies, and false information are all easier on the internet, but they are also more public, easier to uncover, and, if you are an adult, easier to counter by publishing your own response.
[[User:Raven|Raven]] 16:49, 26 February 2013 (EST)
****
The American people are, as a rule, hostile towards the dubious authority of international law; when foreign actors initiate civil proceedings against Americans in foreign countries, no American stands to benefit should the American state comply with the demands of an alien court. While couched in a simpering context of self-righteous "principle," the SPEECH Act correctly secures the prerogative of Americans to say and do as they please, irrespective of the preferences of those whose interests the American government does not exist to represent. Although, perhaps understandably, the Act concedes the authority of foreign states to punish expression where defamation laws analogous to our own are applied, the spirit of the SPEECH Act defends those whose interests its purpose is to defend (the American people), instead of confusing the interests of the world community with the interests of a state and its citizens, which is what opponents of legislation like the SPEECH Act insist that states and their citizens do.
[[User:Johnfloyd6675|Johnfloyd6675]] 17:19, 26 February 2013 (EST)


****
I have to mention that the most interesting reading material was "The Delete Squad." I share the views of the Deciders on the grounds that the internet is an independent space for sharing thoughts, information, data, etc., which is the legal basis of freedom of speech. But if we imagine for a moment how considerably our future will be affected by the Internet, we should be more careful. After reading all the relevant materials, I came to the following conclusions on this topic:
- The Internet must be a totally independent space, and everyone should be able to exercise freedom of speech without any restrictions other than those set by legislation.
- Posted content that violates the law should be withdrawn/deleted by the website immediately after it becomes aware of that fact. Otherwise, the role of legislation may be undermined.
- If posted content violates the law of a certain country, that country's access to the content should be restricted upon a relevant request.
- The user who posted content that violates the law must be held liable. Websites and companies such as Google, Twitter, Facebook, etc. can be held liable only if they knew about the violation but failed to take appropriate measures (for example, other users informed the website about the breach,
but the website didn't delete the content). Websites should not be held liable for users' breaches of the law if they can prove they did not know about them. Otherwise, websites will become "gatekeepers" of content in order to avoid liability.


The Communications Decency Act provides legal immunity from liability for internet service providers who publish information provided by others. In my opinion, this is a common-sense law that is both just (you shouldn't attempt to hold an ISP legally liable for what its customers post online) and practical, as holding an ISP accountable would give it significant responsibility to screen and monitor all of its customers' Internet activity, which would not only be unduly burdensome but could also infringe on some customers' privacy rights. Online service providers should be no more liable than any comparable physical-based offering. [[User:CyberRalph|CyberRalph]] 17:33, 26 February 2013 (EST)
Consequently, I think there should be a compromise between freedom of speech and the restrictions on it set by legislation. ([[User:Aysel|Aysel]] 15:56, 18 February 2014 (EST)) Aysel Ibayeva


****


Just a quick link to share. This is a great and simple visualization of net neutrality: http://www.theopeninter.net/ [[User:Asmith|Asmith]] 22:46, 26 February 2013 (EST)
-----


****


Having been a criminal (and thus also a constitutional) trial and appellate lawyer for decades, I have always had a profound interest in the First Amendment. It is one of the most misunderstood of all of our 'inalienable rights.' The Bill of Rights did not even apply to the states until the enactment of the 14th Amendment, almost 80 years after the Constitution was amended to include the first 8 amendments. But even today most believe that freedom of speech is a protected right in all media, when in fact the 1st Amendment clearly states that "Congress shall make no law ...". It does not apply to private entities and individuals that restrict free speech. Name them: Facebook, YouTube, Twitter, and Wikipedia all can restrict free speech. Wikipedia claims that it does not censor content, but contradicts itself by admitting it is not a forum for unlimited free speech.
Herdict is a very useful tool, not only for tracking filtering and censorship but in general when troubleshooting why a site and/or one of its URLs isn't available. I wonder why I hadn't heard of this tool or something similar before. Kudos to Jonathan Zittrain! I was also surprised to find that the U.S. ranked third on the list of censorship by country, since we are a relatively free and democratic nation. [[User:404consultant|404consultant]] 16:26, 18 February 2014 (EST)


I retired from private practice just as the Internet was becoming popular in the mid-1990s, so I never had the opportunity to defend anyone charged with downloading child or other pornography. I wish I had had that chance, because I find it shocking that an individual can be criminally prosecuted for simply downloading material that is allowed to exist on the Internet. I can think of no greater limitation on free speech. In fact, this is a restriction on free observation, a control on what we see and think. Of course I find child pornography nothing short of revolting. But I also find a lot of free expression revolting. I find rap and hip hop music tasteless if not outright immoral, but it is legal and I support freedom of expression and speech. We cannot legislate class and morals, so the choice is free speech without boundaries or restricted speech. (There are obviously many utterances that cannot be tolerated; the classic example I learned many years ago in my first year of law school, when I took Constitutional and Criminal Law, is that you cannot yell "fire" in a crowded theater.) There are many instances where the public good must trump individual rights, but to extend that to downloading what one can observe is, in my not-so-humble opinion, going way too far.


What is "decent"? Is it in the eye of the beholder? In the Muslim world it is indecent for a woman to expose her head and face. 100 years ago it was indecent for a woman to expose her legs at a public beach.
----


I believe that there must be some restrictions when it comes to national security, fighting words, defamation, and compelling government interests. But I simply cannot accept a restriction on observation. Before one can be criminally, or even civilly, liable for an illegal conspiracy, there must be an overt act: some kind of participation, albeit a very low bar. But here we can be sent to prison for a very long time for simply observing pornography sent over the airwaves. Tain't fair or just.


[[User:Rich|Rich]] 10:07, 27 February 2013 (EST)
I see the decision of an online service provider to take down content as a cost-benefit one. If a request to remove material could potentially lead to legal action with large monetary losses, then the OSP's incentive is to remove the material with little hesitation. On the other hand, there may be situations where free speech is rewarded by users and thus represents a benefit for the OSP. Either way, OSPs are in an extremely tight situation, with tradeoffs that have important implications for free speech.


I would recommend reading a couple of books I am sure most already know about: Networks and States: The Global Politics of Internet Governance by Milton L. Mueller (MIT Press, 2010) and Cypherpunks: Freedom and the Future of the Internet by Julian Assange (OR Books, 2012).
[[User:Luciagamboaso|Luciagamboaso]] 15:56, 18 February 2014 (EST)


I had heard of WikiLeaks, but not of Mr. Assange until he recently became an international criminal in our government's eyes by leaking top secret information. At first I was outraged at his conduct and believed he was definitely a cyber terrorist, but as I continue reading his book my views are beginning to change. Back in college (my first go-round, in California during the Vietnam War), I found myself in a unique position as a person who held many ultra-liberal views but considered myself a super patriot as well. In a piece I wrote for my college student newspaper supporting the presidency, but not necessarily President Johnson's handling of the war, I was dismayed when several letters to the editor referred to me as a "Conservative." After all, I campaigned strongly for Johnson in 1964 (yes, classmates, I may be the oldest Ivy Leaguer, ha, ha) and against Goldwater. But that did not mean I supported his every policy; I did strongly support his domestic policies.
----


The Vietnam War and the Watergate scandal wised me up to the fact that our politicians might be as ruthless as many dictators. The difference may be that while they do not overtly murder their opponents as a Saddam Hussein or a Joseph Stalin did, perhaps that is only because those who would cannot get away with it. It is naive to believe that our government leaders are immune from the evil that abounds in closed societies. We are not a nation of natural-born saints. I will not name names, but some of our recent leaders would have that capacity if they could get away with it.


The so-called "Decency Act" is an amendment to the Comstock Act. If any of you are unfamiliar with the anti-obscenity crusader Anthony Comstock, after whom the act is named, let me familiarize you. He was a fanatic who, when asked to define "obscenity," stated that he could not define it but knew it when he saw it (a line more famously associated with Justice Potter Stewart in Jacobellis v. Ohio). Such rhetoric is not too supportive of the spirit of Due Process and the 14th Amendment, is it?
Zuckerman’s article sheds interesting light on censorship in regards to freedom of speech and whose values should take precedence.
As discussed in the article, Bluehost’s CEO, Matt Heaton, was quick to protect the company's financial well-being by adding an entirely new section (13) to an already executed contract and ceasing service to Burrell's sites in order to "comply" with the U.S. Treasury.


In my first year in law school I took a Legal Methods class and did a term paper on pornography and obscenity, which is when I first learned of Mr. Comstock. I was Editor-in-Chief of The Appeal, our school newspaper, and my front-page article on bottomless dancers resulted in the owners of this definitely "for profit," grade-Z law school shutting the newspaper down for a year. So much for freedom of the press and freedom of speech.
Heaton’s decision was most likely made from a financial perspective; he was unwilling to take a chance given the slight profit margins of hosting. So, to safeguard profits, the company added section 13 to a pre-existing contract. In this case, Bluehost's values took precedence (not the U.S. Treasury's, nor those of the sites Burrell happened to be hosting through Bluehost).


Since then I have realized that allowing or restricting freedom of speech is a two-edged sword. Where it stops is often, very often, subjective, and what is allowed in one era, in one culture, in one country is not allowed in another. Forty years later, in many respects we have more censorship today than we did back then. So I believe we must be very careful how tolerant we are of those who are intolerant.
Question: Does Burrell have the ability to pursue legal action against Bluehost for adding a clause to her contract after the original agreement was executed?


I do not necessarily subscribe to the Cypherpunk "science" or philosophy chapter and verse, but I see the attempts to squelch free speech on the Internet as a major threat to our very foundation as a democracy.
--[[User:Melissaluke|Melissaluke]] 14:39, 18 February 2014 (EST)


As a trial lawyer I have prosecuted and defended libel and slander cases, so I have fought on both sides many times. I realize that defamation can be almost deadly, but again this is a matter of the specifics on a case-by-case basis. Any restriction of free speech must be very carefully weighed, and I believe that when in doubt, we should allow the speech.


[[User:Rich|Rich]] 16:34, 27 February 2013 (EST)
Freedom of speech should be preserved online. However, I do agree that hate speech, especially speech that poses an immediate threat to someone’s safety, warrants intervention. Reporting of such occasions and policing should be in place to protect people’s safety; the problem, however, is who can be trusted with such an enormous task. Government might not be appropriate because it may easily lead to too much politically influenced filtering. It’s an interesting topic, and hopefully we can find a way to effectively share ideas while staying within some sort of safety boundary.
[[User:Lpereira|Lpereira]] 15:34, 18 February 2014 (EST)
----

Latest revision as of 18:05, 22 February 2014

February 18

The Internet has the potential to revolutionize public discourse. Instead of large media companies and corporate advertisers controlling the channels of speech, anyone with an Internet connection can, in the words of the Supreme Court, “become a town crier with a voice that resonates farther than it could from any soapbox.” (Reno v. ACLU). Internet speakers can reach vast audiences of readers, viewers, researchers, and buyers that stretch across real space borders, or they can concentrate on niche audiences that share a common interest or geographical location. What's more, speech on the Internet has truly become a conversation, with different voices and viewpoints mingling together to create a single "work."

With this great potential, however, comes new questions. What happens when anyone can publish to a global audience with virtually no oversight? How can a society balance the rights of speakers with the interests in safeguarding minors from offensive content? When different countries take different approaches on speech, whose values should take precedence? When a user of a website says something defamatory, when should we punish the user and when should we punish the website?

In this class, we will look at how law and social norms are struggling to adapt to this new electronic terrain.

Joining us this week will be Jeff Hermes, Director of the Digital Media Law Project.


Assignments

The first half of assignment 2 (posting your prospectus) is due before class next week (Feb. 25th). Information on the assignment can be found here.


Readings

Private and public control of speech online
Speech laws and liabilities in the United States
Cross-border concerns

Optional Readings


Links from Class Discussion

Jeff Hermes' bio: http://cyber.law.harvard.edu/people/jhermes

"Incorporation" of the First Amendment against the states: https://en.wikipedia.org/wiki/Incorporation_of_the_Bill_of_Rights

Jacobellis v. Ohio ("I know it when I see it"): https://en.wikipedia.org/wiki/Jacobellis_v._Ohio

Miller v. California (True obscenity standard): https://en.wikipedia.org/wiki/Miller_v._California

Hustler v. Falwell: https://en.wikipedia.org/wiki/Hustler_Magazine_v._Falwell

Analysis of Federal Restricted Buildings Act: http://www.snopes.com/politics/crime/restricted.asp

NYT v. Sullivan: https://en.wikipedia.org/wiki/New_York_Times_Co._v._Sullivan

Gertz v. Robert Welch, Inc: https://en.wikipedia.org/wiki/Gertz_v._Robert_Welch,_Inc.

US v. Alvarez: https://en.wikipedia.org/wiki/United_States_v._Alvarez

Society of Professional Journalists' Code of Ethics: http://www.spj.org/ethicscode.asp

Recent Microsoft issues with search censorship: http://www.herdict.org/blog/2014/02/13/bing-needs-to-explain-its-search-algorithms/

Duties of Chief Justice of the US Supreme Court: http://en.wikipedia.org/wiki/Chief_Justice_of_the_United_States#Duties

Litigation involving Wikimedia Foundation: http://en.wikipedia.org/wiki/Litigation_involving_the_Wikimedia_Foundation

John Seigenthaler Wikipedia Biography Controversy: http://en.wikipedia.org/wiki/Wikipedia_Seigenthaler_biography_incident

Background on the Innocence of Muslims video: https://en.wikipedia.org/wiki/Innocence_of_muslims

Example of effect of YouTube videos on banks: http://www.theregister.co.uk/2013/01/09/us_banks_ddos_blamed_on_iran

CDA Section 230: https://en.wikipedia.org/wiki/Section_230_of_the_Communications_Decency_Act

Reno v. ACLU: https://en.wikipedia.org/wiki/Reno_v._American_Civil_Liberties_Union

David Ardia did a pretty thorough review of where we are with CDA 230, about 10-15 years later: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1625820

Facebook's content review guidelines: http://gawker.com/5885714/inside-facebooks-outsourced-anti+porn-and-gore-brigade-where-camel-toes-are-more-offensive-than-crushed-heads

Facebook terms of service: https://www.facebook.com/terms.php

Twitter ToS: https://twitter.com/tos

State Attorneys General Letter: https://www.eff.org/sites/default/files/cda-ag-letter.pdf

Class Discussion

REMINDER
Your comments must be submitted before 4:00PM ET on the Tuesday we hold class in order to count for participation credit. Please see the participation policy for more information.


Please remember to sign your postings by adding four tildes (~~~~) to the end of your contribution. This will automatically add your username and the date/time of your post, like so: Andy 15:12, 7 November 2013 (EST)




"The spread of information networks (the internet) is forming a new nervous system for our planet" - Hillary Clinton.

See: https://www.youtube.com/watch?v=ccGzOJHE1rw

For governments to react expeditiously to help individuals or communities in distress, there must be freedom of speech online. But for this to be effective, the process needs to be organized and formalized. Individuals need to ensure they are sending not noise and gibberish but useful information, so that the government, other able individuals, NGOs, or even private corporations can come to the rescue.

Ichua 06:57, 12 February 2014 (EST)




I have to say, I found "The Delete Squad" article by Jeffrey Rosen to be extremely interesting. While I find hate speech despicable, I agree with the conclusion at which "The Deciders" arrived, to intervene only in rare cases in which resulting violence appeared imminent. In this age of prolific internet bullying, I can see how many people (particularly parents) might be inclined to argue that regulations must be implemented, but to me the solution seems to lie more so in the individual's own usage of the internet. By this I mean to say that a person should be responsible for restricting his or her (or his or her child's) internet usage so that he or she is not actively involved in sites which might be problematic. Castille 02:26, 15 February 2014 (EST)


Rosen's article sheds a lot of light on what has become a very important content-control force in digitally-mediated discussions. For me, the most interesting and troubling aspect is the time they take to decide these things. Rosen claims the content review groups at Facebook have on average 20 seconds to evaluate a claim before acting upon it. It is nearly impossible to internalize in such a short period the complicated elements Susan Benesch flags to separate the dangerous from the tasteless but far less dangerous: the context, the speaker, the audience, etc. How can they be expected to do in 20 seconds what scholars and courts spend years (and many trees of paper) contemplating in other contexts? (Oh, and to your next post - book recommendations are always welcome!) Andy 21:40, 17 February 2014 (EST)
Yeah, and it also seems unlikely that they have an entire team of lawyers (or other equally "qualified" professionals) working on every single claim. I wonder (and perhaps I just missed it, if it's mentioned) whether they favor a more lenient or strict position, on average. It seems "easier"/"safer" to delete anything that becomes an issue and deal with it again if and only if the deletion is contested. Over the years I've seen several friends' profiles, including all of their photos and comments, deleted and content removed without so much as a notice.




This might be a little off-topic, so I apologize in advance if it's "inappropriate", but I was wondering if anyone has read The Circle by Dave Eggers? These readings, and my exchange with Ichua on last week's discussion board, have really made me consider the thoughts posed in that book. Basically, the book is about a company (a la Facebook) which seeks to "complete the circle" of internet usage and identity. It functions as a sort of government in and of itself, as well as a full-fledged community/world. Everything is consolidated on their system, so that people have basically no anonymity online as we do now; the internet is no longer removed from reality, but is instead a virtual reality in the most literal sense. All of their information is stored within the system, including their medical records, family history, purchase history, and job details and tasks, and essentially all communication is conducted through the site. There is also a security camera system which is set up and controlled by the users, but has become so prolific that essentially every area of the globe is under surveillance. While the situation posed in the novel is drastic and even scary, there are a lot of positives to certain aspects. I think the biggest concern is not necessarily the loss of privacy, but the question of who controls (or should control) such a system. Certainly controls should exist, but surely corporations should not have that much power or intimate knowledge, and it seems that even a government would not suffice for such a job. Should there be another authority? If so, what sort of entity would be qualified to do such a job? I'd love to hear other people's perspectives, whether you've read it or not. Castille 12:55, 17 February 2014 (EST)




  • NOTE 1 While reading this week's articles, I took a break from homework to scroll down my Facebook newsfeed. I came across a post by a friend in Quebec about a website that satirizes Snapchat. When I clicked the link, it gave me an error message. I messaged my friend; she was able to open the link with no problem from Quebec. From the comments on her post, it seems the only questionable content was some dirty pictures on the site, but nothing I understand to be restricted in the USA. That was a bit weird/scary...
  • NOTE 2 Now that I am done reading this week's articles, I am more nervous about posting my honest response to some of them than I used to be!
  • QUESTION Does anyone know the wiki markup version of target="_blank"? I'd be happy to add the markup to the class readings if anyone knows what the code is (I've tried Googling it... no luck...)

Erin Saucke-Lacelle 15:27, 17 February 2014 (EST)

It is generally considered bad practice in web development to use target="_blank" outside of very specific, exceptional cases. The reason is simple: if the link has no target attribute, the behaviour is defined by the user's settings and actions; they can click the link, or right-click and open it in another tab/window/etc., with some browsers offering other options such as click-and-drag, middle click, etc. If the link has a target="_blank" attribute, on the other hand, the user is forced to open the link in a separate tab/window; their actions are thus limited by the developer, for no good reason (even if the developer thinks there is a good reason, there usually isn't). --Seifip 17:39, 17 February 2014 (EST)
Thank you for the note Seifip!!! Makes sense; maybe I can play around with Chrome settings and see if I can set it so outside links always open in a new tab... Not that I'm too lazy to press the cmd/ctrl key for each link... (well, I guess a bit), but my keyboards are all in different languages, which confuses the crap out of my typing muscle memory, so I love it when browsers already know which links I want in a new tab (:
Linkclump extension is your friend :) --Seifip 07:58, 18 February 2014 (EST)
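For anyone following this exchange, here is a minimal HTML sketch of the behaviour Seifip describes (the URL is a placeholder):

```html
<!-- No target attribute: the browser follows the user's own settings and
     actions (plain click, middle click, right-click "open in new tab", etc.) -->
<a href="https://example.com/">opens however the user chooses</a>

<!-- target="_blank": the page author forces a new tab/window; rel="noopener"
     is commonly added so the new page cannot script the page that opened it -->
<a href="https://example.com/" target="_blank" rel="noopener">always opens in a new browsing context</a>
```

As for Erin's original question: as far as I know, standard wikitext has no per-link equivalent of target="_blank"; MediaWiki only offers a site-wide configuration setting ($wgExternalLinkTarget) that administrators can use to make all external links open in a new window.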




As I was considering the intersections of this week’s readings, several articles reminded me of a case that occurred back in 2000. Although it did not take place on the Internet or on a service like the Flickr or Picasa most of us are familiar with today, the parallels and concerns should seem obvious.

When we think about the amount of daily photographic content that now goes up on Facebook, Flickr, Picasa, etc., and consider the roles of these “Deciders” (as defined in one of the readings), the case of an Oberlin, Ohio family back in 2000 seems like it could play out over and over again if individual states received powers of prosecution to the extent that the State Attorneys General request in their letter to Congress of July 23, 2013.

Some may remember the case I’m referring to. In an overly distilled summary, it involved an amateur photographer who was chronicling her daughter’s life in still photography. Some photographs included her then-8-year-old daughter bathing. When the photos were developed at the local film-processing lab, a clerk reported them to the police as an incident of “child pornography”. The local police agreed, the mother was arrested, and the case garnered national attention at the time, with the ACLU coming to the mother's defense. http://www.oberlin.edu/alummag/oamcurrent/oam_spring_00/atissue.html [Later the subject of an entire book looking more closely at the issues] http://www.pbs.org/newshour/art/questions-of-photographic-propriety-in-framing-innocence/

The letter by the 49 Attorneys General certainly strikes at a horror that anyone with a human heart will be equally enraged by: the tragedy of child abuse, sex trafficking, and exploitation. While it seems odd that the term “the State” is omitted from the current language of the CDA, I wonder whether including “the State” in the CDA's language would end up introducing a sliding scale of laws defined by “the standards of any small community,” enforcing crimes that THEY define as “obscenity” and/or “child pornography”.

What is viewed as unprotected speech and deemed “obscenity” (or “child pornography”) in Lorain County, Ohio may not meet the same definition in, say, San Francisco. With the addition of “the State” to the CDA, could the State of Ohio prosecute a photographer in San Francisco for posting an “obscene” picture to a Flickr account accessible to users in Ohio? If the definition of “obscenity” is based on the Miller test (below), then what are the “community standards” that define obscenity in a case where one state wishes to prosecute someone in another “community”?

The Miller test for obscenity includes the following criteria:

(1) whether ‘the average person, applying contemporary community standards’ would find that the work, ‘taken as a whole,’ appeals to ‘prurient interest’

(2) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law, and

(3) whether the work, ‘taken as a whole,’ lacks serious literary, artistic, political, or scientific value.

Psl 17:47, 17 February 2014 (EST)

Thanks for contributing! Just to clarify: the constitutional definition of actionable obscenity under Miller has a geographic element to it, which tailors the more general criminal statute, but in the realm of child pornography neither the criminal statute nor First Amendment doctrine bases liability on community standards. So while obscenity can vary state to state, child pornography does not. (And both are illegal at the federal level.) Andy 18:47, 17 February 2014 (EST)




I have a greater appreciation for the issues involved in online free speech after this week's articles. I somewhat disagree with Zuckerman's conclusion that private limitations on speech in private spaces are "dangerous for a public society," in that I believe private companies need to be able to define what is or isn't acceptable communication within their own environments; we're guests in these areas, and it's up to the companies owning the spaces to decide what sort of environment their guests will experience. On the other hand, I don't think it can be the government that defines what's acceptable; it needs to be up to the individual owners of these spaces. I'm concerned about any encroachment on an individual's or private enterprise's ability to decide what rules are appropriate for itself. While I find the content of, say, a site like Stormfront (a white separatist website) totally repugnant, I would defend their right to publish what they do; if anything, it simply exposes their nonsense to public scrutiny and criticism.

I am sympathetic to Benesch's thinking about "dangerous speech," and in particular it does make sense that the context (speaker, political environment, proximity to sensitive events, lack of competition/criticism) can turn hate speech into something more insidious. Nevertheless, I'm unable to think of a good solution that doesn't actually make things worse. She claims to defend freedom of expression yet draws a distinction between expression and freedom of the press (dissemination); I find myself unable to disentangle the two. When one considers the international aspects, and the potential for international lawsuits (such as the French cases we've discussed), it seems like it would be unusually hard to apply her test to speech and protect the right of companies in places such as the United States to publish things that someone might claim to be "dangerous" elsewhere. For example, would the Chinese government find it "dangerous" if Twitter users posted content calling for an end to single-party rule? Where do we draw the line? It's clear that not only are the interests of certain governments at stake (and their authoritarian approaches to speech) but also the simple fact that some countries (such as the Rwanda example) may not have the institutions or cultural heritage to handle US-style free speech; yet is it fair to force US companies to account for all of these cross-border and cross-cultural differences?

Jradoff 20:08, 17 February 2014 (EST)




I also found myself somewhat sympathetic to Benesch's concern about dangerous speech. However, it is unfair and implausible to make US companies responsible for such cross-border/cultural differences. It is bad for business and generally not a policy I would deem logical. The way I see it: should a company be held liable for slander that someone utters while in its establishment, or be punished because someone spray-paints a hate message on the company's door? Although businesses can take precautions to try to prevent such occurrences, doing so over the internet is a much more painstaking task. Furthermore, I think the bounds of what constitutes "hate speech" are being stretched to some degree. Constitutionally, and as many Supreme Court cases have held, freedom of speech is protected so long as it does not "incite violent action." For example, instructing people to harm someone of a certain race would be considered unlawful. In my mind, that is where the line must be drawn.

Granted, as others have mentioned, internet bullying is becoming more widespread and has resulted in teen suicides and, as some have theorized, has possibly contributed to the uptick in school shootings. Still, to what degree should we be prosecuting internet hecklers for this behavior? As the professor mentioned in class, once an incident occurs, Congress tends to look for an immediate remedy via legislation when it may not necessarily be the answer. Of course I find it horrible and morally repugnant that someone would bully an innocent person online, but does this mean that every bit of our speech should now be scrutinized, and that if we, for example, call someone fat online, we should be charged with a misdemeanor? If our society does provide legal recourse for online bullying, it will become quite convoluted to stake out the levels and appropriate punishments for each offense. Should a few "bad apples" online ruin or impede the benefits of free internet speech for the masses of good people in society who thrive off of our shared knowledge? Should McDonald's cheeseburgers be illegal to protect those who struggle with obesity? No matter how you frame it, more restrictions will eventually equate to more inhibition for companies and citizens alike. Such inhibition, I argue, thwarts a society's economic and intellectual growth.

--AmyAnn0644 10:34, 18 February 2014 (EST)

I'm really glad you brought up the issue of bullying! This is an area where the Berkman Center's Youth and Media Lab have been doing some great research around framing, understanding, and assessing efficacy of solutions to bullying. Andy 11:15, 18 February 2014 (EST)


I agree with your points, AmyAnn, about the difficulty of dealing with bullying and regulating harassment online without stifling speech. The reading I've done on this issue, which has been more about harassment of women and not children, highlights the need for enforcement of what laws we do have. It's not that we need more laws, it's that we need the existing ones to be understood in the context of the Internet and to be enforced by the authorities. Amanda Hess wrote a really wonderful piece about her experience with this that I think I mentioned during one of the first weeks of class, which is long but well worth the read. [1] Lindy West wrote a follow up for Jezebel [2], which gives a quick overview and her own commentary. Jkelly 12:43, 18 February 2014 (EST)

Thank you for sharing these sources! It is refreshing to see more people getting involved in spreading the message about cyber-bullying, and I believe communication and public awareness initiatives are crucial in combating these issues, particularly in targeting the most vulnerable and dominant population on the web (the youth). The modern parent has more to consider in raising children with regular access to the cyber world, both from the perspective of the victim and in preventative measures.

A recent Pew survey noted that 90% of teens had witnessed cyber-bullying yet did nothing about it. Imagine how many lives would be saved if everyone took a stand against cyber-bullying. Then again, I suppose the children did not know what to do or whom to report their observations to; one might think to inform the student's parents, but perhaps the teen did not know them. What action could this 90% of teens have taken? Call the police, and on what grounds? At first blush, 90% of teens not reporting bullying seems like an awful statistic, but when one considers the lack of direction or guidance in knowing (as a society) how to deal with these matters legally, it all trickles down and muddles the situation to the point where a concerned citizen may not be able to effectively help a fellow cyber-victim. In any event, without communication, these teens may not even recognize cyber-bullying to begin with and may become "desensitized" to the point where it may not even cross their minds. Communication is critical for our community to even be aware of what goes on in cyberspace, but as Jkelly mentions, all of the communication and education still cannot trump the lack of enforcement or a clear legal path for dealing with these issues.

Has anyone seen the documentary "Submit"? It was created by parents of internet-bullying victims, and the production discusses just how dangerous the bully's online "arsenal" has become, considering how one can, in a worst-case scenario, completely destroy someone's social standing, career, and identity. The "arsenal," they say, is dangerous because it is both "vast" and "at a distance," offering a bully the prime environment in which to operate.

Here is the link for the documentary for those interested: http://www.submitthedocumentary.com/

--AmyAnn0644 14:08, 18 February 2014 (EST)




While I find Susan Benesch's pursuit of a more nuanced definition of free speech quite commendable, I find that her definition of dangerous speech is prone to subjective assessment and can lead to excessive censorship. Some of the factors, such as the charisma of the speaker, are difficult to assess and are shared between speakers for bad and good causes. Other factors, such as historical context, are equally less than ideal, as history is not a constant or a fact but rather something defined by the state and the current generation based on its limited knowledge of the past and its current view of events. The way we see and interpret history changes virtually every decade, and it would be nice if the view of what constitutes dangerous speech were not tied to such an uncertain factor. --Seifip 08:11, 18 February 2014 (EST)

Great points, Seifip, and I suspect Susan would agree with you that there is still a gap between what factors should and shouldn't matter, and how that translates to policies, procedures, and rules for monitoring against dangerous speech. The tie between the substantive and procedural issues around freedom of expression is a fascinating place to explore at some depth. Andy 11:15, 18 February 2014 (EST)




I found the reading this week really interesting, as I am from the country that pioneered Internet censorship: China. To be exact, I am from Hong Kong, one of the Special Administrative Regions of China. For those who are not familiar with the history of Hong Kong, it used to be a British colony, and China resumed sovereignty in 1997. Hong Kong operates under the principle of "One Country, Two Systems," which means that it has a different political, legal, and economic system from China and will be maintained that way for at least 50 years.

Facebook, Twitter, and the New York Times have been on China's list of blocked websites because they are "politically sensitive." Instead, China created its own social networking tool, Weibo. There are a couple of different Weibo services launched by different companies, but all of them cooperate with the Internet censorship regime of the People's Republic of China.

WeChat, a popular smartphone messaging app similar to WhatsApp, Line, and Facebook Messenger, is also under censorship. Messages that contain certain keywords are filtered and blocked. Users who send those messages receive a notice saying, "The message you sent contains restricted words. Please try again."

In September last year, the Chinese government finally allowed a small selection of people to access those banned websites, including Facebook and Twitter. However, "a small selection of people" means only those who live in a specific 17-square-mile area of Shanghai. Many say this is a great start to a revolution, but I am not as optimistic as the rest. I do acknowledge the changes that have been made over the years; however, I believe this incident is only a one-time exception that the government made.

Jolietheone 03:13, 18 February 2014 (EST)




WHY WE CANNOT TRUST EVERYTHING ON THE SOCIAL MEDIA: OF FREE SPEECH AND LIES

http://news.asiaone.com/news/singapore/pm-lee-untruths-spread-through-social-media-hard-correctE

But rather than having other people or web robots do the filtering, we should be teaching our young people how to filter good and reliable information from bad, especially on social media.

Ichua 11:53, 18 February 2014 (EST)

Ichua - I could not agree more! It is hard enough for educated adults to filter through the propaganda spewed on the web; I can only imagine how a child would struggle with this. Even the most reputable websites have had instances where misinformation or biased information was reported. Educating our youth about cyber material will make or break our country's (and world's) future, both within the cyber world and the real world.

--AmyAnn0644 08:33, 19 February 2014 (EST)



Following up on Andy and Castille's comments regarding content review and concern over the speed of content removal, I found Rachel Whetstone's entry about Google's policy on free expression and regulating speech particularly interesting. Whetstone emphasizes the importance of community, and the relative speed and accuracy of hate speech/inappropriate content regulation by the millions of Google users who self-police their given online communities. She acknowledges the potentially problematic dynamic of subjective judgments about what is deemed inappropriate, but I strongly agree that the majority of users, especially those who actively and regularly engage in any number of online communities, will agree about what is acceptable and what is offensive. Castille brought up concerns over cyber bullying and parental supervision/intervention; I would hope that the majority of parents would have similar responses to unacceptable content when they encounter it. Though the ability to consider, deliberate on, and process each case of potential content regulation or removal is indeed limited when the average content review period on platforms such as Facebook is 20 seconds (referenced by Andy), I would still trust the ability of a community of regularly engaged and informed reviewers to regulate appropriate content. akk22 11:50, 18 February 2014 (EST)


While self-policing within a given online community is an ideal way of regulating instances of hate speech, this clearly does not always happen: partly because citizens may not know how to police such behavior, and partly because the internet is such a vast sphere that human regulation to its fullest extent has become somewhat unrealistic, even if every cyber-goer were moral and acted on such values. The greatest concern is how many crimes (particularly school shootings) could have been prevented if officials would or could do more to act on the "warning signs" often present on a teenager's social media sites.

A recent article from a Staten Island news site, linked below, discusses how a lack of proactive policy has obstructed investigations. For example, if a student is reported for violent content posted online, it is solely up to the discretion of the school principal to take action or dismiss the behavior as child's play; this is true even if an explicit threat is made. In one instance a threat was posted and the principal chose to ignore it because he or she did not know what could be done. This is an issue because a principal is not formally trained in law enforcement, and making these types of decisions comes with an enormous amount of responsibility. In the case of the article below, law enforcement stepped in and conducted an interrogation, determining that the posting was nothing more than a hoax. Making that determination, however, would be extremely difficult for a principal without the tools and training of a law enforcement officer.

http://www.silive.com/news/index.ssf/2014/02/post_716.html

--AmyAnn0644 13:53, 20 February 2014 (EST)




NEW IDEA - ONLINE SOFTWARE FOR BUILDING THE COUNTRY FROM COLLABORATIVE FREE SPEECH

I am thinking of the Soft Systems approaches used in operations research, such as the "cognitive maps" described by Colin Eden (UK). If there is an issue of national interest, we could have every interested person contribute to an interactive online cognitive map with a "revert-to-earlier-version" function like Wikipedia's. That way, whoever contributes would have a sense of ownership of the map. Positive or negative influence of one factor on another can be indicated by "+" and "-" signs, and the strength of a relationship can be shown with the line thickness of the arrows. The contributor's name and his reasons or evidence for an added link could be displayed by clicking on the connecting arrow. Well, this idea is not really new, as Colin Eden developed software for this called COPE... but it would need to be enhanced with the additional features suggested. Also, if one contributor says "A ---->+ B" and another disagrees, the map could be modified with a second link from A to B as "A ---->+ C ----> -B", while still retaining the original link. Most probably a detailed read of the description of the first link would lead one to suggest "A ----> -D ----> +B" as a replacement for the original link. Thus, the map will give us a "richer" picture of the elements affecting a particular issue as new links are added.

See: "Using Cognitive Mapping for Strategic Options Development," in Rational Analysis for a Problematic World, Jonathan Rosenhead (ed.), Wiley, 1989.
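The versioned, signed influence map described above is, at bottom, a directed graph whose edges carry a sign, a strength, and contributor metadata, plus a wiki-style revert function. A minimal sketch of that idea (the class and method names here are invented for illustration, not taken from Eden's COPE software):

```python
import copy

class CognitiveMap:
    """A signed influence map: factors linked by "+" or "-" edges carrying a
    strength (drawn as line thickness), the contributor's name, and their
    stated reason, with a wiki-style revert-to-earlier-version function."""

    def __init__(self):
        self.links = {}    # (source, target) -> edge metadata
        self.history = []  # snapshots of earlier versions, for reverting

    def add_link(self, source, target, sign, strength, contributor, reason):
        if sign not in ("+", "-"):
            raise ValueError("sign must be '+' or '-'")
        self.history.append(copy.deepcopy(self.links))  # save current version
        self.links[(source, target)] = {
            "sign": sign,              # positive or negative influence
            "strength": strength,      # relationship strength
            "contributor": contributor,
            "reason": reason,          # shown when the arrow is clicked
        }

    def revert(self):
        """Restore the previous version, as on a Wikipedia page."""
        if self.history:
            self.links = self.history.pop()
```

A disagreeing contributor would then call add_link twice to add an "A ---->+ C ----> -B" chain alongside the original "A ---->+ B" link, and any contested change could be undone with revert().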

Ichua 12:15, 18 February 2014 (EST)




In related news... Team GB want social media protection --Seifip 12:16, 18 February 2014 (EST)




In reviewing the readings for this week and digging deeper into the subject area, I walked away with a true appreciation of a topic that I had believed was easily definable. Perhaps this is indicative of the escalated polarization of issues and beliefs that we are currently experiencing. Benesch's concern about dangerous speech made an argument that I was willing to entertain. After more thought, I began to question the notion of censorship and who ultimately decides what is acceptable. I am uncomfortable with any corporation placing limitations on private speech. I am more comfortable with the cultural norms of the local community self-regulating. Realizing this may not be perfect, to err on the side of the collective conscience seems a much better path to civility. VACYBER 13:33, 18 February 2014 (EST)




Observing the behavior of current providers and the positions of government leaders regarding content, I see that there are not, and probably never will be, absolutely effective legal or technological mechanisms to control content on the Internet. If the issue were simple, all the socially undesirable behaviors that occur on the network (the dissemination of child pornography, intellectual property infringement, manifestations of racial hatred, and many others) would have ceased long ago. I agree that providers do have technological control mechanisms: a provider can simply edit information available on a website to remove or correct any references that cause damage, erase the contents of a given page, or even remove files from the server used to store the information. This is a common and effective means of control, since the content provider is the one who exercises direct control over the information or files available on the respective website or server and may take steps to remove or block access to infringing material. Therefore, it is up to the judge to order the adoption of reasonable technical mechanisms, together with any other supporting measures that may be useful to obtain specific performance or an equivalent practical result. Drastic measures to control content on the Internet should be reserved for extreme cases, where an obvious public interest outweighs the potential damage caused to third parties; they should not be adopted in other cases, especially those dealing with individual interests, except in very exceptional situations. The difficulties inherent in protecting rights on the Internet can cause some perplexity. However, it is worth remembering: the network is a reflection of society and, as such, imperfect and subject to injustice.
If it has not been possible until today to protect with absolute perfection all the rights provided for in a legal system, we would be naive to expect different results in Internet-related conflicts. The documentary about Mark Zuckerberg describes the challenges faced by Facebook regarding the control of content. http://youtu.be/5WiDIhIkPoM Gisellebatista




Professor Sellars, I feel that your "Structural Weaknesses" piece adequately addressed many key issues surrounding internet censorship of speech, especially the fact that extensive private regulation already happens among several different parties. I also especially liked your astute observation that the tragic Benghazi situation was far more nuanced than simply one person posting a video to YouTube; there were many pre-existing societal issues at play. I do have one question about the piece, though: when writing about how the White House requested that YouTube remove the video, you opine that the White House did so "very inappropriately." Are you saying that the manner in which the White House made the request was inappropriate, or was it inappropriate for the White House to make such a request at all? I'm genuinely curious to know what you think, seeing as how this request seems to involve the "bully pulpit" aspect of the President's executive branch, which in this case uses speech in order to regulate speech. Vance.puchalski 15:19, 18 February 2014 (EST)


Thanks for reading, Vance! My views on this fluctuate a bit, but I tend to be very concerned with government engaging in censorship through "soft power" means like this (asking YouTube to rethink a decision, cutting off payment providers, etc.) when it lacks the constitutional authority to punish or enjoin the speech directly. I would be less concerned if the White House simply exercised its own speech to say it disagreed with the video, and maybe even with YouTube's decision to keep it up. But exercising pressure on a domestic intermediary crosses a line for me. For more on this, check out Jack Balkin's writing on "old school" vs. "new school" speech regulation. Andy 16:35, 18 February 2014 (EST)
Vance, I had wanted to ask the same question! And I like Dr. Sellars's response. But in Singapore, any negative implication about the government cannot be tolerated. Politicians in the past who spoke negatively and aggressively about the government, questioning its integrity, particularly that of the Prime Minister or Deputy Prime Minister, without clear evidence were usually put in jail or made bankrupt, as in the case of J.B. Jeyaratnam. That is why Singaporeans prefer to express their views anonymously and on social media, but even then it can be a dangerous thing to do, as it may affect their career, their family, their life. The majority would prefer to remain quiet or share their opinions only among very close friends and relatives. I used to have lots of respect for LKY, but his treatment of J.B. Jeyaratnam seemed overly harsh and unnecessary, though this pales in comparison with wicked governments/leaders who execute their opposition. Yet I wonder if the opposition could have been more tactful in their approach. In some countries it may seem acceptable to attempt to remove a leader by smearing his character. In another, the leader expects to be treated like a god and feels threatened when his character and integrity are called into question. Ichua 20:15, 18 February 2014 (EST)


Re: Internet censorship, first video: I was particularly surprised that Google provides (or provided) near-real-time indicators of takedown notices and censorship, country by country. It would be interesting to look into potential backlash to this, on a country-by-country basis, at the corporate or governmental level. (Though perhaps Google has grown so large that it has cultivated a bit of immunity?) That being said, later in the video it is mentioned that corporations (theoretically even Google) are on "their turf" and have no choice but to comply.

A theme that keeps coming up in these readings, and in class, is that our perceptions of our freedoms on the internet seem to be skewed. It's almost inherent in our dealings with the net. Why do we, generally speaking, have this idealized view? An example could be user-created content: websites, commenting, user-focused platforms, and so on. The fact that anyone can theoretically add to the web space, with relatively low visibility, could be leaving us with the idea that the web is truly open. The wide availability of pornography actually comes to mind as a decent example, in that, if this real-world, regulated material is so widely available online, then the net must be a "free" space.

Re: Dangerous speech vs. hate speech: While watching this video it occurred to me, when she was speaking about not needing to limit the hate speech itself, that the internet provides people with such wide access to information across the globe that this hate speech could be accessible in a volatile area, thus making it also dangerous speech. She didn't mention that fact, but perhaps I missed it in another reading. It strikes me that it would be hard to define this based on territory and context, given widespread access to the web. Twood 15:25, 18 February 2014 (EST)





Free speech and the Internet have been intertwined from the very beginnings of online interaction. But as the Internet has developed, so have people's opinions about which rules and regulations from the "real world" apply.

The policies and outlooks on free speech of major corporations (like the Silicon Valley tech companies) have shaped how people post and express themselves over the web. But major issues have arisen from foreign countries' differing standards, like France demanding that Twitter hand over the identities of users who promoted hate speech. Even Google's censoring of a video, though only in a few particular countries, caused much concern, as did the fact that a Google image search for Tiananmen Square from China shows starkly different results from a search in the US. China, Russia, and other authoritarian countries have differing ways of effectively filtering free speech on the Internet. These range in form but generally include DDoS attacks, hacking, filtering by keywords, flooding blogs with pro-government messaging, and even shutting down the internet for a time. What is the best way to keep speech on the Internet censorship-free? There are many answers. One is circumvention tools; many have been developed, but they have proved of little use in the overall struggle. It seems the best way is to have the giants of the Internet take a greater role in participation, paving the way for generations to come.

The discussion between John Palfrey and Adam Thierer about 47 U.S.C. Section 230 addresses some of the most contentious aspects of the law in a clear, concise way. Should online carriers be liable for content posted on their websites or web browsers? Each provides compelling reasons for his view. Adam Thierer argues in favor of keeping Section 230 intact, showing how crucial it has been to the development of the Internet. Although John Palfrey agrees, his view is to make stricter demands on ISPs and "interactive computer service providers." Throughout their discourse they touch upon tough issues like negligence claims, increased government involvement, litigation, and child obscenity laws. Emmanuelsurillo 15:57, 18 February 2014 (EST)




Per Zuckerman's article, it is my conviction that internet censorship in many countries reflects the level of those countries' tolerance for freedom of speech. The article gave the examples of China, Saudi Arabia, Tunisia, and Zimbabwe, just to mention a few. These countries' censorship of certain websites, blogs, "sensitive keywords," or the sharing of specific information is motivated by various reasons. However, the restrictions on their state-sponsored ISPs have also forced web hosting services such as BlueHost, as well as American internet giants such as LinkedIn, Google, and Microsoft, to comply with the governments of those countries in order to do business there.

Personally, as much as I agree with Zuckerman's analysis of certain governments' censorship of the internet in relation to freedom of speech, I also believe that financial motivation is equally important to some of them. For instance, I recently traveled to the UAE and Kuwait for business. The instance of internet censorship that struck me the most was how both countries blocked messaging applications such as Skype and Viber. I noticed that one cannot download those apps while in those two countries; either you download Skype and Viber from the U.S. before you leave, or you will not be able to do so once there. Is this censorship meant to restrict freedom of speech or to protect their local telecommunications companies? cheikhmbacke 15:55, 18 February 2014 (EST)




I have to mention that the most interesting reading material was "Delete Squads." I share the views of the Deciders on the grounds that the internet is an independent space for sharing thoughts, information, data, etc., which is the legal basis of freedom of speech. But if we imagine for a moment that our future will be considerably affected by the Internet, we should be more careful. After reading all the relevant materials, I came to the following conclusions on this topic:

- The Internet must be a totally independent space, and everyone should be able to exercise freedom of speech without any restrictions other than those set by legislation.
- Posted content that breaks the requirements of legislation should be withdrawn/deleted immediately by the website once it becomes aware of that fact. Otherwise, the role of legislation may be undermined.
- If posted content breaks the legislation of a certain country, that country's access to the content must be restricted upon a relevant request.
- The user who posted content that violates legislation must be held liable. Websites and companies such as Google, Twitter, and Facebook can be held liable only if they knew about the violation but failed to take appropriate measures (for example, if other users informed the website about the breach but the website didn't delete the content). So websites cannot be held liable for users' breaches of legislation if they prove that they didn't know about them. Otherwise, websites will have to act as "gatekeepers" of content in order to avoid liability.

Consequently, I think that there should be a compromise between freedom of speech and the restrictions on it that are set by legislation. (Aysel 15:56, 18 February 2014 (EST)) Aysel Ibayeva




Herdict is a very useful tool, not only for tracking filtering and censorship but in general when troubleshooting why a site, or one of its URLs, isn't available. I was wondering why I hadn't heard of this tool or something similar before. Kudos to Jonathan Zittrain! I was also surprised to find out that the U.S. ranked third on the list of censorship by country, since we are a relatively free and democratic nation. 404consultant 16:26, 18 February 2014 (EST)




I see an online service provider's decision to take down content as a cost-benefit one. If a request to remove material could potentially lead to legal action with potentially large monetary losses, then the OSP's incentive is to remove the material with little hesitation. On the other hand, there might be situations where free speech is rewarded by users and thus represents a benefit for the OSP. Either way, OSPs are in an extremely tight situation, with tradeoffs having important implications for free speech.

Luciagamboaso 15:56, 18 February 2014 (EST)



Zuckerman's article brings interesting light to censorship in regard to freedom of speech and whose values should take precedence. As discussed in the article, Bluehost's CEO, Matt Heaton, was quick to protect the company's financial well-being by adding an entirely new section (13) to an already executed contract and ceasing service to Burrell's sites in order to "comply" with the U.S. Treasury.

Heaton's decision was most likely made from a financial perspective; he was unwilling to take a chance, given the slight profit margins of hosting. So, to safeguard profits, the company added section 13 to a pre-existing contract. In this case, Bluehost's interests took precedence (not the U.S. Treasury's, or those of the sites Burrell happened to be hosting through Bluehost).

Question: Does Burrell have the ability to pursue legal action against Bluehost for adding a clause to her contract after the original agreement was executed?

--Melissaluke 14:39, 18 February 2014 (EST)


Freedom of speech should be preserved online. However, I do agree that hate speech, especially speech that poses an immediate threat to someone's safety, should be subject to intervention. Reporting of such occasions and policing should be in place to protect people's safety; however, the problem is who can be trusted with such an enormous task. Government might not be appropriate, because it may easily lead to too much politically influenced filtering. It's an interesting topic, and hopefully we can find a way to effectively share ideas while also staying within some sort of safety boundary. Lpereira 15:34, 18 February 2014 (EST)