Core Use Cases and Cross-Cutting Themes
During our first phase of engagement and research, the Berkman Klein Center and the MIT Media Lab have identified a few substantive areas to concentrate on that reflect our response to our guiding motivations: involving the right change agents, playing to the unique skills and positions of those involved, and having an achievable theory of change. The core use cases that have crystallized are autonomous vehicles, social and criminal justice, and media and information quality.
Autonomous Vehicles: Americans spend over 290 hours per year driving, making it one of the most common (and dangerous) points of human-machine interaction. As vehicles are increasingly automated, it becomes imperative not only to improve these human-machine interactions but also to anticipate what automation means for the future of labor; to explore how these vehicles will test the limits of existing governance frameworks as they cross geographic boundaries; to understand how autonomous vehicles may reinforce existing biases, for example through an inability to navigate unmapped areas such as the poorest neighborhoods in the global south; and to identify the forms of transparency needed to build new accident liability regimes. The challenges are clear, but solutions have so far proven elusive due to a lack of coordination among the numerous stakeholders.
The Berkman Klein Center and the Media Lab plan to convene automotive companies, regulators, engineers, ethicists, and civil society organizations to collectively develop solutions to these challenges. One way we are doing this is through the Society in the Loop project, which aims to build models of AI's social dynamics, inform policy recommendations to shape those dynamics, and engage policymakers and the public via outreach and novel interactive tools. Additionally, the Cyberlaw Clinic at Harvard Law School is working with law students to provide AI-related legal guidance to nonprofits and public interest-oriented start-ups facing unique AI legal challenges, such as navigating the complex regulatory landscape and unusual liability risks that accompany autonomous vehicles.
Contact: Joi Ito
Social and Criminal Justice: From forensic analysis and bail setting to sentencing and parole, algorithms are increasingly aiding law enforcement and judges in carrying out their duties. The use of these algorithms has already raised significant questions about bias and fairness, and looking ahead the moral questions become even more challenging. As these tools become embedded in criminal prosecutions, what levels of transparency are necessary to meet our obligations of due process and cross-examination? As these tools become more effective, to what extent will we substitute algorithmic judgment for human judgment? And as these same tools of algorithmic judgment become commonplace across the public and private sectors more generally, what will it mean for our conceptions of fairness and social justice in everything from loan and welfare eligibility to college admissions?
As institutions with wide-ranging interdisciplinary networks and strong convening power, the Berkman Klein Center and the Media Lab are bringing together judges, law enforcement agencies, and technologists, among others, to develop best practices for the use of these technologies in the judiciary. We are organizing a series of AI Challenges Forums, including one with assistant state attorneys general on emerging AI issues in the criminal justice system and another on the use of risk assessment tools in the judiciary. Moreover, we are building courses based at Harvard Law School that will help train future generations of lawyers to respond to the legal, policy, and social challenges presented by AI.
Contact: Chris Bavitz
Media and Information Quality: Media, journalism, and information quality play a central role in promoting healthy democratic societies. An increased reliance on digital platforms for news—digital platforms that are themselves increasingly reliant on AI to select the content we see—has raised significant concerns about the role that autonomous systems play in influencing human judgment, opinions, perceptions, and even election outcomes. Addressing these challenges requires grappling with difficult governance questions that pit free speech against regulation by governments or private parties. It also raises significant questions about the role of transparency in these influential algorithms, and whether such transparency might open the way to a competitive market for news-feed algorithms customized to the needs of each user.
The Berkman Klein Center and the Media Lab have deep experience in studying the impacts of the networked public sphere. For example, Media Cloud, a joint project of the Berkman Klein Center and the Center for Civic Media at MIT, is an open source platform that tracks millions of stories published online and uses empirical approaches to study the media ecosystem; recently, Media Cloud completed a study of the media ecosystem during the 2016 US election to better understand AI's role in propagating, or addressing, the problem of disinformation online. Building on tools like Media Cloud and BayesDB, an open-source platform that democratizes data science through the use of AI, we will further our understanding of AI's impact on the media and deepen our partnerships with the private sector to respond to that impact.
Contact: Jonathan Zittrain
What has become apparent in our exploration of these three core use cases is a series of common threads or cross-cutting themes that unite them. Each of the above areas elicits questions in the following realms:
Global Governance and the ways in which existing national and international governance institutions may be challenged to respond to the fast-paced and transboundary applications of AI.
Diversity and Inclusion and the ways in which the use of AI may reinforce existing biases, particularly against those in underserved or underrepresented populations and with a focus on the largely under-researched and under-explored international aspects of these challenges.
Transparency and Explanation and the challenges of obtaining human-intelligible and human-actionable information about the operation of autonomous systems.
We are illuminating these central themes and questions throughout our research, community building, and educational efforts, including in the AI Global Dialogue Series and upcoming AI and Inclusion Symposium in Brazil co-hosted on behalf of the global Network of Internet & Society Centers (NoC) by the Institute for Technology and Society of Rio de Janeiro (ITS Rio) and the Berkman Klein Center.
Contact: Urs Gasser