Research Sprints and Pilot Projects

Since the launch of the joint initiative, the Berkman Klein Center and MIT Media Lab teams have engaged in a series of research sprints and pilot projects to explore and learn, experiment and iterate, build and study, and engage and network. These efforts focus on applying research to important and often pressing issues at the intersection of AI technologies, governance, and ethics. A selection of these projects, from which the initial set of core use cases emerges, is included here:

  • CivilServant: The CivilServant project (civilservant.io) supports online communities in running their own experiments on the effects of moderation practices, enabling them to test ways of influencing or persuading AI systems to change their behavior towards the common good, even when their training data or internal design cannot be changed. (A minimal sketch of such a community experiment follows this list.)

  • VerifAI: The VerifAI project aims to build the technical and legal foundations necessary to establish a due process framework for auditing and improving decisions made by artificial intelligence systems as they evolve over time. This work is directed at the deeply flawed software that has been deployed within the criminal justice system to aid judges in sentencing criminal defendants. (An illustrative audit-record sketch follows this list.)

  • BayesDB: BayesDB is open-source AI software that lets any programmer answer data analysis questions in seconds or minutes with a level of rigor that otherwise requires hours, days, or weeks of work by someone with good statistical judgment. The project empowers domain experts to solve problems themselves, and it lets users easily check the quality of results by comparing inferred relationships to common-sense knowledge and comparing model simulations to reality and expert opinion. (A sketch of that simulation check follows this list.)

  • AI Challenge Mapping: Through a series of expert workshops and advanced horizon scanning and foresight methods, this foundational project is developing an overall mapping of key ethical and governance challenges in the AI space, with the aim of enhancing our collaborative network and identifying areas ripe for impact. Outputs include translational tools that bridge key technology and policy concepts, such as a visual navigation aid mapping the people and institutions, research, and existing efforts related to the ethics and governance of AI.

  • AI and Global Governance: Given the cross-border impact of AI and related technologies, what are appropriate and workable governance mechanisms that can operate at a global scale? Leveraging insights from research on innovative governance models in the Internet realm, this project examines the application of such models to AI systems through a series of case studies and working meetings in the US, Europe, and Asia. Outputs include a policy report, recommendations, and an international working group.

  • AI and Inclusion: Bringing together a diverse group of stakeholders, this pilot examines the intersection of AI and inclusion, and explores the ways in which AI systems can be designed and deployed to support diversity and inclusiveness in society. While promoting learning opportunities through engagement in events, learning calls, and joint research activities, the pilot is making educational resources and other outputs accessible via a variety of platforms. Thematically, an emphasis has been placed on the impact of AI on underserved groups – whether in terms of age, ethnicity, race, gender and sexual identity, religion, national origin, location, skill and educational level, or socioeconomic status – and on how these communities think about AI systems.

  • Lensing and Machine Learning: This project focuses on humanizing and democratizing statistical machine learning, with the goal of generating practical toolkits that can be applied to hard societal problems. The research involves lensing, a mixed-initiative technique for representing perspective in generative machine learning models. It draws inspiration from branches of philosophy, namely epistemology, ontology, phenomenology, and axiology, as well as from work with probabilistic graphical models and their inference algorithms, Bayesian statistics, causal inference and experiment design, probabilistic programming (particularly IDEs for machine learning), and human-computer interaction. (A hypothetical illustration of the lensing idea follows this list.)

  • Society-in-the-Loop: The aim of this research is to bridge the cultural divide between the technical disciplines that build AI and the humanities and social science disciplines that study human nature and values, thereby shaping the algorithmic social contract. The work draws on a variety of tools and concepts, from behavioral economics and moral psychology, to complex systems and machine learning, to ethics and political philosophy. This research enables us to quantitatively understand the societal challenges posed by AI’s development, build models of AI’s social dynamics, inform policy recommendations to shape those dynamics, and engage policymakers and the public via outreach and novel interactive tools. (A toy sketch of aggregating public judgments collected by such tools follows this list.)
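
To make the CivilServant item concrete, here is a minimal sketch of how a community-run moderation experiment might be analyzed: compare rule compliance between discussions that did and did not receive an intervention, such as a sticky comment stating the rules. The design, counts, and function names below are hypothetical illustrations, not CivilServant's actual code.

```python
# Minimal sketch of analyzing a community moderation experiment: compare
# rule-compliance rates between threads that did and did not receive an
# intervention. All counts below are hypothetical.
from math import sqrt

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """Normal-approximation z-test for a difference in proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a - p_b, (p_a - p_b) / se

# Hypothetical data: compliant comments in treated vs. control threads.
diff, z = two_proportion_ztest(hits_a=430, n_a=500, hits_b=380, n_b=500)
print(f"estimated effect: {diff:+.3f} (z = {z:.2f})")
# |z| > 1.96 suggests a difference unlikely to arise by chance alone.
```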
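
For the VerifAI item, one technical ingredient of auditing decisions over time is keeping a verifiable record of each decision together with the model version that produced it. The sketch below is an assumption-laden illustration, not the project's implementation; the field names and the risk-model example are invented.

```python
# Illustrative sketch (not the VerifAI codebase): record enough context
# about each automated decision that it can be audited later, even after
# the underlying model has been retrained or replaced.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    model_id: str      # identifies the exact model version used
    input_digest: str  # hash of the inputs, verifiable later against records
    output: str        # the decision or score the system produced
    timestamp: str     # when the decision was made

def record_decision(model_id: str, inputs: dict, output: str) -> DecisionRecord:
    digest = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    return DecisionRecord(
        model_id=model_id,
        input_digest=digest,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical usage: log a risk score so it can be contested in due process.
rec = record_decision("risk-model-v3", {"age": 29, "priors": 1}, "low-risk")
print(asdict(rec))
```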
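
The BayesDB item mentions checking results by comparing model simulations to reality. The sketch below illustrates that check in plain Python with an invented dataset and the simplest possible model; it does not use BayesDB's actual interface, which exposes a SQL-like Bayesian query language.

```python
# Illustrative sketch of the sanity check described above: compare what a
# fitted model simulates against the observed data. Plain Python, not
# BayesDB's actual interface; the measurements below are invented.
import random
import statistics

observed = [4.1, 3.8, 5.2, 4.7, 4.0, 4.4, 5.0, 3.9]

# Fit the simplest possible model: a normal distribution.
mu = statistics.mean(observed)
sigma = statistics.stdev(observed)

# Simulate from the fitted model and compare summaries to reality.
random.seed(0)
simulated = [random.gauss(mu, sigma) for _ in range(10_000)]
print(f"observed mean {mu:.2f} vs simulated mean "
      f"{statistics.mean(simulated):.2f}")
print(f"observed sd   {sigma:.2f} vs simulated sd   "
      f"{statistics.stdev(simulated):.2f}")
# If simulations look nothing like the data (or like expert expectations),
# the inferred model -- and any relationships read off it -- is suspect.
```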
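
The lensing item describes representing perspective in generative models. One hypothetical way to picture this: encode two perspectives as two different priors over the same unknown quantity and compare the conclusions each yields from identical data. Everything below (the priors, the data, and the lens names) is invented for illustration and is not the project's actual modeling work.

```python
# Hypothetical illustration of the lensing idea: two perspectives are
# encoded as two different Beta priors on the same unknown rate, then
# updated on the same (invented) observations.
def beta_posterior_mean(alpha, beta, successes, failures):
    """Conjugate Beta-Binomial update; returns the posterior mean."""
    a, b = alpha + successes, beta + failures
    return a / (a + b)

successes, failures = 7, 3  # hypothetical observations

lenses = {
    "skeptical lens (expects low rates)":   (1, 9),
    "optimistic lens (expects high rates)": (9, 1),
}
for name, (a, b) in lenses.items():
    mean = beta_posterior_mean(a, b, successes, failures)
    print(f"{name}: posterior mean = {mean:.2f}")
# Same evidence, different lenses, different conclusions -- precisely the
# dependence on perspective that lensing aims to make explicit.
```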
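
For the Society-in-the-Loop item, a quantitative model of public values might start from pairwise judgments collected by an interactive tool. The sketch below aggregates invented pairwise preferences with the Bradley-Terry model; this is one standard technique, offered as an assumption, not the project's own methodology.

```python
# Hypothetical sketch: aggregate pairwise "which outcome is preferable?"
# judgments into scores via the Bradley-Terry model. All counts and
# outcome labels below are invented.

# wins[(a, b)] = number of respondents who preferred a over b
wins = {("spare pedestrian", "spare passenger"): 70,
        ("spare passenger", "spare pedestrian"): 30,
        ("spare child", "spare pedestrian"): 80,
        ("spare pedestrian", "spare child"): 20}

items = sorted({x for pair in wins for x in pair})
w = {i: 1.0 for i in items}  # initial scores

for _ in range(200):  # minorization-maximization iterations
    new_w = {}
    for i in items:
        total_wins = sum(c for (a, b), c in wins.items() if a == i)
        denom = sum((wins.get((i, j), 0) + wins.get((j, i), 0)) / (w[i] + w[j])
                    for j in items if j != i)
        new_w[i] = total_wins / denom if denom else w[i]
    s = sum(new_w.values())
    w = {i: v / s for i, v in new_w.items()}  # normalize for stability

for i in sorted(items, key=w.get, reverse=True):
    print(f"{i}: {w[i]:.2f}")
```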

Related Berkman Klein Center Research and Activities:

  • Harmful Speech Online: At the Intersection of Algorithms and Human Behavior: The challenge of addressing harmful speech has become an increasingly salient and uniquely complex issue for a wide variety of stakeholders, including academia, civil society, journalism, industry, and the public at large. Harmful speech and hate speech are not new phenomena, but their movement to networked online spaces presents new challenges and opportunities for those seeking to limit their propagation or mitigate their negative consequences. Increasingly, algorithms are being used to help identify and address harmful speech, and understanding how algorithms and humans interact and affect each other is central to enabling stakeholders to more fully comprehend the harms presented by harmful speech online and to design higher quality deterrents and interventions. Within this context, the Berkman Klein Center, in collaboration with the Institute for Strategic Dialogue and the Shorenstein Center for Media, Politics and Public Policy, hosted a workshop in June 2017 that explored how harmful speech online is affected by the interplay of algorithms and humans, and that fostered networks and relationships among stakeholders at this nexus. Additionally, in collaboration with the Center for Civic Media at MIT, the Berkman Klein Center developed Media Cloud, an open source platform that tracks millions of stories published online, using empirical approaches to study the media ecosystem and to better understand AI’s role in propagating or counteracting harmful speech. (A toy illustration of algorithmic identification follows below.)
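
As a toy illustration of the algorithmic identification mentioned above, the sketch below trains a bag-of-words classifier on a handful of invented examples using scikit-learn. Real systems, including any used by the projects named here, require far larger and more carefully labeled data plus human review; nothing below reflects Media Cloud's implementation.

```python
# Toy sketch of algorithmically flagging harmful speech: a bag-of-words
# classifier trained on a few invented examples. Illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you people are vermin and should disappear",      # invented examples
         "get out of our country, nobody wants you here",
         "thanks for sharing, I learned a lot from this",
         "great article, looking forward to the next one"]
labels = [1, 1, 0, 0]  # 1 = harmful, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Score an unseen comment; the result is only a triage signal.
print(model.predict_proba(["nobody wants your kind here"])[:, 1])
# Thresholds, appeals, and human judgment determine what intervention,
# if any, follows a high score.
```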

Last updated: August 29, 2017