
This Q&A, released in conjunction with a series of papers in December 2016 and written by the Harmful Speech Online team, aims to describe some of the issues surrounding harmful speech online, to explain how the Berkman Klein Center is involved in researching these issues, and to answer some frequently asked questions on the topic.

What is harmful speech?

Harmful speech consists of a range of phenomena that often overlap and intersect, and includes a variety of types of speech that cause different harms. The most familiar type is hate speech, which commonly refers to speech that demeans or attacks a person or people as members of a group with shared characteristics such as race, gender, religion, sexual orientation, or disability.

In one of the papers the Harmful Speech project at the Berkman Klein Center recently released, Andrew Sellars examines prior efforts to define hate speech, offers an in-depth examination of its theoretical context, and summarizes emerging themes in the discussion and scholarship of hate speech online.

An alternative framing—online harassment—is defined by Amanda Lenhart and collaborators as “unwanted contact that is used to create an intimidating, annoying, frightening, or even hostile environment for the victim and that uses digital means to reach the victim.” The Women’s Media Center describes how harassment online may include a variety of tactics—from doxxing to revenge porn to gender-based harassment and beyond—that affect targets in legal, physical, emotional, and other consequential ways.

Focusing on a narrower subset of harmful speech, Susan Benesch defines dangerous speech as that which increases the risk of violence through a range of rhetorical techniques (e.g. instilling fear by warning of impending threats) and may contain explicit threats or incitement to violence.  

There are many forms of speech that can cause harm but are not typically included in discussions of hate speech: for example, incursions on privacy, violent extremism online, or financially motivated attacks. In our own work, we are interested less in financially motivated speech and more in speech that, to varying degrees, conflicts with social norms.

While these categories are helpful for defining harmful speech for research, they may come with a cost. Deciding what counts as harmful speech is challenging and sometimes implies that we can know what it is before we see it. In reality, context can change the meaning of words and their effects. Indeed, as Sellars’ research shows, legal definitions of hateful speech are difficult to import into an online context because the contextual intent of an online user is so difficult to discern.

Why is it important to study this issue?

Harmful speech online is an increasingly prevalent issue for internet users, social media platforms, policy-makers, and governments. Prominent recent examples, to name just a few, include incidents during and after the United States election, following the Brexit vote in the United Kingdom, and in response to comments made by Philippine President Duterte. Although awareness of the issue is growing, victims of harmful speech online have limited means of redress.

Andy Sellars describes how, from a policy and scholarly perspective, the distinction between private and public speech is less clear online, which heightens the challenge of addressing harmful speech without infringing on free speech rights. Studying harmful speech online may become even more complicated in the future: as Sellars notes, standards of what can even be considered “harmful” grow increasingly muddled as the online audience expands.

What are some of the ways the Berkman Klein Center is researching issues of harmful speech?

This past year, BKC launched a research, policy analysis, and network-building effort devoted to the study of hate speech, in close collaboration with the Center for Communication Governance at National Law University in New Delhi, the Digitally Connected network, and the Network of Centers (NoC). The effort builds on many complementary projects and initiatives, including BKC’s ongoing work related to Youth-Oriented Online Hate Speech / Viral Peace, and looks at a wide range of issues.

Berkman Klein Faculty Associate Chinmayi Arun, at the Center for Communication Governance at National Law University, examined top-down strategies from a law and policy perspective, addressing hate speech in both online and offline contexts in India. BKC Fellow Niousha Roshani focused on bottom-up, grassroots efforts by civil society organizations in Brazil and Colombia, examining the dynamics of racist speech targeting Afro-descendant youth and the counterstrategies employed against it. And Andy Sellars, former Cyberlaw Clinic Instructor, wrote a paper examining various definitions of hate speech.

More on our work can also be found in the Networked Policy Harmful Speech research briefing.

We are currently undertaking a number of other research efforts, including utilizing digital media and network analysis tools to examine the discursive practices of white supremacists on Twitter within the context of the 2016 US elections, as well as exploring the prevalence of online offensive speech in Tunisia.
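
By way of illustration, the sketch below shows one common network-analysis technique of the kind this work draws on: building a directed graph of who reshares whom and ranking accounts by centrality. The account names and edges are invented placeholders, and this is a minimal sketch under those assumptions, not the project’s actual pipeline.

```python
# Illustrative sketch: ranking accounts in a hypothetical retweet network.
# Real studies build such graphs from collected platform data (e.g., who
# retweets or mentions whom); the edges below are invented placeholders.
import networkx as nx

# A directed edge (a, b) means account a retweeted account b.
edges = [
    ("user_a", "influencer_1"),
    ("user_b", "influencer_1"),
    ("user_c", "influencer_1"),
    ("user_a", "influencer_2"),
    ("user_d", "influencer_2"),
    ("influencer_2", "influencer_1"),
]
graph = nx.DiGraph(edges)

# In-degree centrality surfaces accounts whose content is widely reshared;
# PageRank additionally weights a reshare by the influence of the resharer.
in_degree = nx.in_degree_centrality(graph)
pagerank = nx.pagerank(graph)

for account in sorted(pagerank, key=pagerank.get, reverse=True):
    print(f"{account}: in-degree={in_degree[account]:.2f}, pagerank={pagerank[account]:.3f}")
```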

In past years, BKC has looked at harmful speech not only in terms of outcome, but also in terms of method and target. For instance, our Youth and Media team produced a seminal literature review on cyberbullying in 2012, Bullying in a Networked Era, and has continued to explore the issue in recent conversations. BKC Fellow Nathan Matias and collaborators helped spearhead the development of an Online Harassment Resource Guide / Wiki, and explored ways to pursue high-impact research questions in the field of online harassment. A number of other community members are working on efforts broadly related to harmful speech online, including dangerous speech, online harassment, misogyny, racist speech, platform policies, and other types of speech that challenge free expression.

Can legal or governmental interventions be effective tools when trying to address issues of harmful speech?

The India case study shows the limitations of hate speech laws in certain jurisdictions in dealing with harmful speech online. Legal and policy interventions might reduce incidence, but at the possible cost of chilling legitimate speech and infringing on free speech rights. Consistent with our findings, it is useful for multiple stakeholders to work on law and policy issues of harmful speech online so that what the law cannot reach can be addressed in other spheres.

What is the role of platforms in addressing the challenges of harmful speech online?

Intermediaries and platforms occupy a powerful position as hosts of content, gatekeepers and enforcement agents, and architects and designers of online environments. Researchers have contributed to a number of efforts to document the actions of intermediaries and, in some instances, have played a more active role advocating to and advising companies on policies. Matias has explored the practices and governance structures of volunteer moderators on community-centered platforms, where moderators actively encounter sub-communities organized around topics that may be considered harmful or hateful.

A number of academics who have studied harmful speech online also sit on Twitter’s Trust and Safety Council, announced in early 2016, which works with representatives from across sectors to prevent abuse on the platform. The Wikimedia Foundation has also begun to collaborate with industry groups and academics to explore how machine learning techniques may help users and the platform deal with “toxic” speech, under a project called Detox. These projects represent a small window into the many efforts researchers across sectors are undertaking to better understand the phenomenon, dynamics, and difficult questions posed by harmful speech online.
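
To make the general technique concrete, here is a minimal sketch of a bag-of-words toxicity classifier. It assumes scikit-learn, the labeled example comments are invented, and it is not Wikimedia’s actual Detox pipeline; it simply illustrates how such systems learn from human-labeled data.

```python
# Illustrative sketch of machine-learned "toxicity" detection: TF-IDF
# features plus a linear classifier in scikit-learn. This is NOT the
# Detox project's actual model, and the four labeled comments below are
# invented stand-ins for the large human-labeled corpora such projects use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "thanks for fixing the citation",
    "you people are worthless and should leave",
    "great edit, well sourced",
    "get lost, nobody wants your kind here",
]
labels = [0, 1, 0, 1]  # 0 = acceptable, 1 = toxic (human-assigned)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Output is a probability, not a verdict: word-level models miss context,
# sarcasm, and coded language, so real systems keep humans in the loop.
print(model.predict_proba(["you are worthless, leave"])[:, 1])
```

Even a toy example surfaces the definitional problem discussed above: the labels a model learns from are themselves contested, context-dependent judgments.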

Additionally, as the European Union has made clear in its “Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law,” platforms have a special responsibility to address hateful speech on their sites. The EU also recently released a “Code of Conduct” for countering illegal hate speech online, informed by an initial data monitoring and gathering exercise conducted in fall 2016, and is encouraging platforms to respond more quickly to hateful content. For further information on the changing governance role of online intermediaries, see reports from last year’s Internet Governance Forum, where the Harmful Speech project co-hosted a number of conversations.

What is the role of counter-speech in addressing the challenges of harmful speech online?

Within the realm of protected speech, civil society groups and the private sector are well positioned to influence public discourse and counter hate speech online through counter-speech. As the Latin America case study shows, a special advantage of counter-speech is that it may allow targets to assume positive narratives and reframe stereotypes. Counter-speech also demonstrates that social media may help build tolerance (see, for instance, this recent case study on Ethiopia that categorized online speech by level of toxicity: ‘offensive’, ‘hate speech’, or ‘dangerous speech’). Instead of broadly viewing the Internet as a place where hate speech proliferates, counter-speech reminds us that the Internet can be a tool to combat harmful speech.

What do we actually know about the issue and what approaches should researchers consider when trying to understand the issue?

Harmful speech online is the product of many forces at play. Context matters, both online and where you are in the world. In addition to country-specific issues and regulations, we have to look beyond online speech itself to how speech is presented and recontextualized. An example from the legal definitions paper suggests that context is so important it may shape what informs the work: what can we learn from a post, article, or video that is not inherently hateful in itself, but whose meaning changes with context, with who presents it and how, and with how it is shared over time? Researchers are also exploring new digital media analysis techniques, like natural language processing and machine learning, to study the phenomenon.

How does the dynamic between online and offline harmful speech factor into debates? Are there specific insights or solutions for addressing harmful speech online that we know have impact?

As the Leslie Jones incident exemplifies, harmful speech online, enabled and amplified by social media platforms, can affect offline behavior. The Latin America case study also shows that, for marginalized targets, online dynamics may exacerbate existing hateful ideas. UNESCO, civil society organizations, and other groups suggest pursuing offline efforts, including education and building counter-narratives. Article 19 offers a toolkit for identifying hate speech and ways to counter it.

We may also learn from the tactics and techniques of speakers in both the public and private spheres. For example, a recent study found that counter-speech from Twitter accounts that appeared to belong to the “in-group” was more effective than counter-speech from outsiders (see Tweetment).

What are the unanswered or hard questions that you seek to answer?

Some important and challenging questions include:
  • How widespread is the phenomenon, who participates, who is harmed, and how?

  • Is it increasing or decreasing, and how does it vary over time? Or is there evidence that the prevalence of harmful speech is steady and only receiving increased attention online?

  • Are there signs of the normalization of harmful speech online? How are the actors that participate in harmful speech organized? How are they influenced by leaders, governments, public figures, and the media?

  • What is the social network structure of groups that engage in harmful speech and what is the role of key influencers? How can we better understand the interplay between ingroup and outgroup interactions?

  • What contextual factors are associated with the incidence, intensity, and impact of harmful speech online?