
From misogyny and homophobia to xenophobia and racism, online hate speech has become a topic of growing concern as the Internet matures, particularly as its offline impacts become more widely known. And with hate-fueled tragedies in the US and New Zealand, 2019 has seen a continued rise in awareness of how social media and fringe websites are being used to spread hateful ideologies and instigate violence.

Through the Dangerous Speech Project, Berkman Klein Faculty Associate Susan Benesch studies the kinds of public speech that can catalyze intergroup violence, and explores efforts to diminish such speech and its impacts while protecting freedom of expression. Like the Center’s own work examining the legal, platform-based, and international contours of harmful speech, the Dangerous Speech Project brings new research and framing to efforts to reduce online hate and its impacts.

This work often involves observing and cataloging extremely toxic speech on social media platforms, including explicit calls for violence against vulnerable populations around the world. But dangerous speech researchers also interact with practitioners of “counterspeech”: people who use social media to battle hateful and bigoted messaging and ideology.

The Dangerous Speech Project’s Senior Researcher Cathy Buerger convened a group of counterspeech practitioners at RightsCon 2019 to talk about the most effective counterspeech efforts. Here she reflects on these efforts, and how activists can better combat hate in online spaces and prevent its offline impacts.

How has social media facilitated the proliferation of hatred and harmful speech? Do you think there is more hate today as a result of Internet-enabled communication, or is it simply more visible?

It’s hard to say if there is more hate in the world today or not. My instinct is no. At the Dangerous Speech Project, we’ve examined the speech used before incidents of mass violence in various historical periods, and the rhetorical patterns are remarkably similar. The hate that we see today is certainly nothing new.

But there are some new factors that impact the spread of this hate. First, social media makes it relatively simple to see speech produced in communities outside of one’s own. I’m an anthropologist, so I’m always thinking about how communities set and enforce norms. Different communities have divergent opinions about what kind of speech is considered “acceptable.” With social media, speech that might be seen as acceptable by its intended audience can easily be discovered and broadcast to a larger audience that doesn’t share the same speech norms. That audience may attempt to respond through counterspeech, which can be a positive outcome. But even if that doesn’t happen, at the very least, this speech becomes more visible than it otherwise would have been.

A second factor that is frequently discussed is how quickly harmful messages on social media can reach a large audience. This can have horrifying consequences. Between January 2017 and June 2018, for example, 33 people were killed by vigilante mobs in India following rumors circulating on WhatsApp that men were coming to villages to kidnap children. The rumors were, of course, false. In an effort to battle these kinds of rumors, WhatsApp has since placed a limit on how many times a piece of content can be forwarded.
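To make that intervention concrete, here is a minimal, purely illustrative sketch of a client-side forwarding cap. The data model, function names, and the cap of five chats are assumptions for illustration only; WhatsApp’s actual enforcement is internal and its limits have changed over time.

```python
# Purely illustrative: a toy client-side check for a message-forwarding cap.
# The cap value, class, and function names are assumptions for illustration;
# WhatsApp's real enforcement is internal and its limits have changed over time.

FORWARD_LIMIT = 5  # assumed cap on how many chats a message can be forwarded to

class Message:
    def __init__(self, text: str):
        self.text = text
        self.forward_count = 0  # chats this message has already been forwarded to

def forward(message: Message, chats: list[str]) -> list[str]:
    """Forward to as many of the requested chats as the cap allows."""
    delivered = []
    for chat in chats:
        if message.forward_count >= FORWARD_LIMIT:
            break  # cap reached: remaining forwards are refused
        message.forward_count += 1
        delivered.append(chat)
    return delivered

rumor = Message("unverified rumor")
print(forward(rumor, [f"chat{i}" for i in range(8)]))  # only 5 of 8 go through
```

The design point is simply that a hard cap slows exponential spread: each copy of a message can seed only a bounded number of new chats, rather than an unlimited fan-out.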

These are just two of the ways that technology is affecting the spread and visibility of hateful messages. We need to understand this relationship, and the relationship between online speech and offline action, if we are going to develop effective policies and programs to counter harmful speech and prevent intergroup violence. 

You've spoken with a number of folks who work online to counter hateful speech. What are some of your favorite examples?

There are so many fascinating examples of people and organizations working to counter online hateful speech. One of my favorites is #Jagärhär, a Swedish group that collectively responds to hateful posts in the comment sections of news articles posted on Facebook. They have a very specific method of action. On the #Jagärhär Facebook page, group administrators post links to articles with hateful comments, directing their members to counterspeak there. Members tag their posts with #Jagärhär (which means “I am here”), so that other members can find their posts and like them. Most of the news outlets have their comments ranked by what Facebook calls “relevance,” which is determined in part by how much interaction (likes and replies) a comment receives. Liking the counterspeech posts therefore drives them up in the relevance ranking, moving them toward the top and ideally drowning out the hateful comments.
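As a purely illustrative sketch of why this coordinated liking works, here is a toy model of engagement-based comment ranking. The scoring function and its weights are assumptions; Facebook’s actual “relevance” ranking is proprietary and far more complex.

```python
# Purely illustrative: a toy model of engagement-based comment ranking.
# The scoring function and its weights are assumptions; Facebook's actual
# "relevance" ranking is proprietary and far more complex.

from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    likes: int = 0
    replies: int = 0

    def relevance(self) -> float:
        # Assumed: relevance grows with interaction (likes and replies).
        return self.likes + 2 * self.replies

def rank(comments: list[Comment]) -> list[Comment]:
    # The most "relevant" comments surface at the top of the thread.
    return sorted(comments, key=lambda c: c.relevance(), reverse=True)

thread = [
    Comment("hateful comment", likes=12),
    Comment("counterspeech reply", likes=3),
]

# Coordinated liking by group members boosts the counterspeech comment...
thread[1].likes += 40

# ...so it now outranks the hateful one and is what casual readers see first.
for c in rank(thread):
    print(c.relevance(), c.text)
```

The point of the sketch is just that under engagement-weighted ranking, a coordinated burst of likes changes which comment a casual reader sees first.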

The group is huge, with around 74,000 members, and the model has spread to 13 other countries as well. The name of each group is “#iamhere” in the local language (for example, #jesuislà in France and #somtu in Slovakia). I like this example because it demonstrates how powerful counterspeech can be when people work together. In the bigger groups (group sizes range from 64 in #iamhereIndia to 74,274 in #Jagärhär), members’ posts regularly have the most interaction and therefore become the most visible comments.

One of the questions I am interested in right now is how counterspeaking as a group may serve as a protective factor for its members. I’ve interviewed many counterspeakers, and most of them talk about how lonely and emotionally difficult the work is, not to mention the fact that they often become the targets of online attacks themselves. In the digital ethnography I am working on right now, members of #iamhere groups frequently mention that working as a group makes them feel braver and more able to sustain their counterspeech work over time.

I’m also very interested in efforts that try to counter hateful messages by sharing those messages more widely. The Instagram account Bye Felipe, for example, is dedicated to “calling out dudes who turn hostile when rejected or ignored.” The account allows users to submit screenshots of conversations they have had with men, often on dating sites, where the man has lashed out after being ignored or rejected. I interviewed Alexandra Tweten, who founded and runs the account, and she told me that although she started it mostly to make fun of the men in those interactions, she quickly realized that it could be a tool to spark a larger conversation about online harassment of women. A similar effort is the Twitter account @YesYoureRacist. Logan Smith, who runs the anti-racism account, retweets racist posts he finds to his nearly 400,000 followers in an effort to make people aware that such racism exists.

Broadcasting hateful comments to a larger audience may seem counterintuitive, because we are so often focused on deleting content. But by drawing a larger audience’s attention to a particular piece of speech, these efforts can serve as an educational tool, for example by showing men the kind of harassment women face online. When a piece of speech is connected with a larger audience, it is also very likely that at least some members of that new audience will not share the speech norms of the original author. Sometimes this is primarily a source of amusement for the new audience. At other times, though, it can quickly inspire counterspeech responses from members of that new audience.

Why do you think these efforts are effective? What can folks who work on counterspeech learn from one another?

Effectiveness is an interesting issue. The first thing we have to ask is “effective at doing what?” One finding from my research on people working to counter hatred online is that they don’t all share the same goal. We often assume that counterspeakers are primarily trying to change the behavior or the views of the hateful speakers to whom they respond. But of the 40 or so people I have interviewed who are involved in these efforts, most say they are actually trying to do something different: reach the larger reading audience, or have a positive impact on the discourse within particular online spaces. The strategies you would use to accomplish goals like those are very different from the ones you might use to change the mind or behavior of someone posting hateful speech. The most effective projects are those that know their audience and goals clearly and choose their strategies accordingly.

Last November, we hosted a private meeting in Berlin of people who use various methods to respond to hateful or harmful speech online. This group of 15 counterspeakers from around the world discussed counterspeech best practices and the challenges they face in their work. After the workshop, we heard from many of them about how useful the experience had been, because they no longer felt as isolated. Responding to hatred online can be lonely work. Although some people do it in groups, like those involved in the #iamhere groups, most do it by themselves. So, of course, counterspeakers can learn a lot from each other about which strategies might work in different contexts, but there is also tremendous potential benefit in simply getting to know one another, because it reminds them that they are not alone in their efforts.

What did your group at RightsCon learn from one another? Did any surprising or exciting ideas emerge?

One of the best parts about RightsCon is that it brings together people from different sectors, from all over the world, who are working on issues related to securing human rights in the digital age. During our session, which focused on online anti-hatred efforts, one topic raised by both the session participants and several audience members was just how hard this work can be, and the toll it can take on a person’s personal and emotional life. At one point, an audience member asked Logan Smith (of @YesYoureRacist) whether he had ever received a death threat. He answered, “oh yeah.” People laughed, but it also really brought home the point. This is tough, emotionally demanding work, and it can make you the target of online attacks. One seldom gets that perfect moment where someone who had posted something hateful says, “oh, you’re right. Thank you so much for helping me see the light.” An online anti-hatred effort is successful if it reaches its goal, whether that is reaching the larger reading audience or changing the mind or behavior of the person posting hateful comments. But to do any of those things, it has to be sustainable. So I think that learning more about what helps counterspeakers avoid burnout and stay active is an important piece of understanding what makes efforts effective in the long run.
