Research Sprints and Pilot Projects

Since the launch of the joint initiative, the Berkman Klein Center and MIT Media Lab teams have engaged in a series of research sprints and pilot projects to explore and learn, experiment and iterate, build and study, and engage and network. These efforts focus on applying research to important and often pressing issues at the intersection of AI technologies, governance, and ethics. A selection of the projects, from which the initial set of core use cases emerges, is included here:

  • Algorithms and Justice: As its name suggests, the criminal justice system is not a unified construct but a series of interconnected processes, with multiple entry points and stages of evaluation. Each stage in the development, procurement, deployment, and assessment of each technological tool raises distinct and essential questions that demand a multiplicity of approaches. This project explores the ways in which government institutions incorporate artificial intelligence, algorithms, and machine learning technologies into their decision making. The aim is to help the public and private entities that create such tools, the state actors that procure and deploy them, and the citizens they impact understand how those tools work. Pilots include launching a database of risk assessment tools and building a tool with the town of Chelsea that focuses on identifying and remediating the root causes of crime rather than on "predictions" based on correlated factors.

  • Media and Information Quality: Many questions remain about the role of AI in creating, propagating, and combating media manipulation, and recent events demonstrate the vulnerability of our information ecosystem and our democratic systems. Social media platforms, regulators, and the public alike are struggling to fully understand the sources and depth of these problems and to develop solutions for managing them. This project brings diverse stakeholders together to map the effects of automation and machine learning on content production, dissemination, and consumption patterns, while evaluating the impact of these technologies on societal attitudes and behaviors and on democratic institutions.

  • AI and Global Governance: Given the cross-border impact of AI and related technologies, what are appropriate and workable governance mechanisms that can operate at a global scale? Leveraging insights from research into innovative governance models in the Internet realm, this project works to level information asymmetries between the public and private sectors, giving policymakers the tools they need to effectively navigate a complex, highly technical, and global policy space. Pilots include a Global AI Dialogue Series of working meetings in the US, Europe, and Asia, and the creation of a playbook to guide public-sector decision-makers through AI tools and their applications.

  • AI and Inclusion: Bringing together a diverse group of stakeholders, this project examines the intersection of AI and inclusion and explores the ways in which AI systems can be designed and deployed to support diversity and inclusivity in society. While promoting learning opportunities through engagement in events and joint research activities, we are making educational resources and other outputs accessible via a variety of platforms. In November 2017, we co-hosted the three-day Global Symposium on AI & Inclusion in Rio de Janeiro, convening 170 individuals from over 40 countries to identify and address an array of issues involving AI and inclusion from an interdisciplinary perspective.

  • AI Challenge Mapping: Through a series of expert workshops and advanced horizon-scanning and foresight methods, this foundational project is developing an overall mapping of key ethical and governance challenges in the AI space, with the aim of enhancing our collaborative network and identifying areas ripe for impact. Outputs include translational tools, such as a visual navigation aid that maps the people, institutions, research, and existing efforts related to the ethics and governance of AI and bridges key technology and policy concepts.

  • Society-in-the-Loop: The aim of this research is to bridge the cultural divide between the technical disciplines that build AI and the humanities and social science disciplines that study human nature and values, thereby shaping the algorithmic social contract. The work draws on a variety of tools and concepts, from behavioral economics and moral psychology, to complex systems and machine learning, to ethics and political philosophy. This research enables us to quantitatively understand the societal challenges posed by AI’s development, build models of AI’s social dynamics, inform policy recommendations to shape those dynamics, and engage policymakers and the public via outreach and novel interactive tools.

  • Lensing and Machine Learning: This project focuses on humanizing and democratizing statistical machine learning with the goal of generating practical toolkits to apply to hard societal problems. The research involves lensing, a mixed-initiative technique to represent perspective in generative machine learning models. It draws inspiration from branches of philosophy, namely epistemology, ontology, phenomenology, and axiology, as well as work with probabilistic graphical models and their inference algorithms, Bayesian statistics, causal inference and experiment design, probabilistic programming (particularly IDEs for machine learning), and human-computer interaction.

  • BayesDB: BayesDB is open-source AI software that lets any programmer answer data analysis questions in seconds or minutes with a level of rigor that would otherwise require hours, days, or weeks of work by someone with good statistical judgment. This project empowers domain experts to solve problems themselves and allows users to easily check the quality of results by comparing inferred relationships to common-sense knowledge and comparing model simulations to reality and expert opinion (see the query sketch after this list).

  • CivilServant: The CivilServant project (civilservant.io) supports online communities in running their own experiments on the effects of moderation practices, enabling them to test efforts at influencing or persuading AIs to change their behavior towards the common good, even when the AIs' training data or internal design cannot be changed (see the experiment sketch after this list).
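
To make the BayesDB workflow concrete, the query sketch below uses the bayeslite Python front end to load a table, fit generative models of it, and then pose the two kinds of checks described above: an inferred-relationship query and a simulation. The table, column, and file names are hypothetical, and exact BQL syntax varies across bayeslite releases, so treat this as an illustration rather than a verbatim recipe.

    # A minimal sketch of a BayesDB session via the bayeslite Python
    # front end. Names ("satellites.csv", "launch_mass", "period") are
    # hypothetical; BQL syntax may differ slightly between releases.
    from bayeslite import bayesdb_open
    from bayeslite.read_csv import bayesdb_read_csv_file

    bdb = bayesdb_open(pathname='demo.bdb')

    # Load raw records into an ordinary SQL table.
    bayesdb_read_csv_file(bdb, 'satellites', 'satellites.csv',
                          header=True, create=True)

    # Declare a population and fit an ensemble of generative models.
    bdb.execute('CREATE POPULATION sats FOR satellites '
                'WITH SCHEMA (GUESS STATTYPES OF (*))')
    bdb.execute('CREATE GENERATOR sats_g FOR sats')
    bdb.execute('INITIALIZE 4 MODELS FOR sats_g')
    bdb.execute('ANALYZE sats_g FOR 100 ITERATIONS')

    # Check an inferred relationship against common sense: how probable
    # is it that launch mass and orbital period are dependent?
    print(bdb.execute('ESTIMATE DEPENDENCE PROBABILITY '
                      'OF launch_mass WITH period BY sats').fetchall())

    # Compare model simulations to reality and expert opinion.
    print(bdb.execute('SIMULATE launch_mass FROM sats '
                      'LIMIT 5').fetchall())

The last two queries are exactly the sanity checks the project describes: if the estimated dependence probability or the simulated values clash with domain knowledge, that is a signal to revisit the models or the data.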
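
The CivilServant entry is easiest to picture as a randomized field experiment. The experiment sketch below, with entirely invented data and a hypothetical "promotion" outcome, shows the shape of such an analysis: posts are randomly assigned a moderation message, and a two-proportion z-test asks whether the message changed how often a ranking algorithm later promoted them. It assumes the statsmodels library; CivilServant's own infrastructure is not shown here.

    # A hypothetical sketch of a community-run moderation experiment.
    # All data here are invented; in a real study the outcome would
    # come from platform logs (e.g., did a post reach the front page?).
    import random
    from statsmodels.stats.proportion import proportions_ztest

    random.seed(0)

    posts = [{'id': i} for i in range(2000)]
    for post in posts:
        post['treated'] = random.random() < 0.5  # randomized assignment
        base = 0.10                              # control promotion rate
        lift = 0.04 if post['treated'] else 0.0  # hypothetical effect
        post['promoted'] = random.random() < base + lift

    treated = [p for p in posts if p['treated']]
    control = [p for p in posts if not p['treated']]
    successes = [sum(p['promoted'] for p in treated),
                 sum(p['promoted'] for p in control)]
    sizes = [len(treated), len(control)]

    # Two-proportion z-test: did the moderation message change how
    # often the ranking algorithm promoted posts?
    stat, pvalue = proportions_ztest(successes, sizes)
    print('promotion rate (treated): %.3f' % (successes[0] / sizes[0]))
    print('promotion rate (control): %.3f' % (successes[1] / sizes[1]))
    print('z = %.2f, p = %.4f' % (stat, pvalue))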


Last updated: April 3, 2018