Harmful Speech Online: At the Intersection of Algorithms and Human Behavior
A workshop exploring how algorithms and human behavior interact both to spread harmful speech online and to enable novel responses to it
On June 29–30, 2017, the Berkman Klein Center, in collaboration with the Shorenstein Center on Media, Politics and Public Policy and ISD Global, held a workshop exploring how algorithms and human behavior interact to create new problems related to harmful speech online, as well as new solutions and responses. Over 60 experts from a variety of disciplines, sectors, and geographies met at Harvard Law School to reflect on these issues and to generate ideas for how society might make progress.
A summary of the event can be found on Medium.
The event convened over 60 stakeholders from academia, civil society organizations, and major technology companies (including Facebook, Twitter, Google, and Microsoft) and featured talks, group discussions, and breakout sessions. The hosts included Rob Faris, Berkman Klein Center research director; Urs Gasser, the center’s executive director; Sasha Havlicek, CEO of ISD; and Nicco Mele, director of the Shorenstein Center. The primary goal was to foster collaboration and idea-sharing, not to release new findings. The event was conducted under the Chatham House Rule; participants were later contacted for permission to use the quotations and attributions that appear in this report.
Introducing the topic, Faris, Mele, and Havlicek pointed to the enormous gap, in terms of resourcing, activism, and even basic research, between the problems of harmful speech online and the available solutions. Harmful speech and extremism in online spaces can profoundly affect public opinion, inclusiveness, and political outcomes. And as Mele put it, we are in “uncharted territory” when it comes to addressing these problems, which underscores the importance of convening groups from academia, civil society, and industry to address the challenges.