This study, conducted by researchers at the Berkman Klein Center, assesses how successfully English-language Wikipedia moderates harmful speech. The analysis draws on two complementary approaches: interviews with 16 Wikipedia editors about the processes and guidelines for content revision, content deletion, and quality control on English Wikipedia, and a quantitative content analysis of Wikipedia text using machine learning classifiers trained to identify several variations of problematic speech.
The researchers conclude that, despite the project's large scale, Wikipedia identifies and quickly removes the vast majority of harmful content. The evidence suggests that efforts to remove malicious content are faster and more effective on Wikipedia articles than on article talk and user talk pages. An ongoing challenge for Wikipedia is moderating and addressing harms in interactions between community members. Wikipedia's decentralized structure empowers editors to act quickly and decisively in interpreting what constitutes damage and harm on the encyclopedia. Decentralization also produces inconsistency in how moderation guidelines and standards are applied, although this appears to be offset by the benefits and agility of autonomous, context-guided decision making.
To help ground and guide the analysis, the study includes several complementary components: a review of approaches and models for governing content online in the US, EU, and India; an exploration of the complexities of defining harmful speech; a taxonomy that highlights the many overlapping forms of harmful speech; and a summary of the ways in which different forms of harmful speech map against the guidelines and policies that shape conduct and content on Wikipedia.