
The current public debate about Artificial Intelligence techniques and technologies has been dominated by narratives that are often perilously speculative or that echo sound bites repeated in the media. Although the threats as described in the media are often overblown, a significant change is occurring: a shift of reasoning and judgment away from people. In some circumstances that shift can create opportunities for other human pursuits and for deeper undertakings. But it can also be problematic, as it decouples big decisions from human understanding and accountability. What is both missing and very much needed in the current environment is work that identifies and cultivates technologies and practices that promote human autonomy and dignity rather than diminish them.

Against this backdrop, there is a clear need for unbiased, sustained, evidence-based, solution-oriented work that cuts across disciplines and sectors. A key part of that work must be bringing to the fore a diversity of voices and perspectives that are largely missing from existing debates about AI. Including more voices is necessary as societies build and deploy the next generation of uniquely powerful technologies with ever-expanding societal implications.

Enabled by support from the Ethics and Governance of Artificial Intelligence Fund, the Berkman Klein Center and the MIT Media Lab are excited to collaborate on a range of activities that will include a diverse array of voices and seek to discern and preserve the human element that is so often lost in highly technical AI conversations. As anchor institutions in the Fund, Berkman Klein and the Media Lab will take the lead in fostering research that applies the humanities, the social sciences, and other disciplines to the development of AI. Working in partnership with the Fund, Berkman Klein and the Media Lab will work to create an expanding global network of scholars, experts, advocates, and thinkers, whose scholarship, experimentation, energies, and impacts can be enhanced through new collaborative, interdisciplinary partnerships.

Some of the initial activities the Berkman Klein Center will undertake with the MIT Media Lab as part of the initiative include:

  • A joint AI fellowship program, to support individuals whose work in code, ethics, philosophy, design, law, and policy contributes to our understanding of how AI systems may impact the public good. This will include international partners who have AI activities underway and are part of the global Network of Internet & Society Centers;
  • Network building, to support a diverse and impactful network of people and institutions who are working to steer AI in a way that maximizes the benefits to society;
  • Support for research projects led by AI experts working with researchers from a variety of fields and institutions that bridge scholarly approaches, including qualitative and quantitative analyses, prototyping and development, and mathematics and computation;
  • An expert panel, a group of subject-matter experts who will serve as a brain trust, build broader networks, and identify future research opportunities;
  • A thematic focus on issues of AI for the 2018 Assembly program;
  • A joint and coordinated suite of activities that seek to enhance and sustain the human element within AI debates and developments, through case studies, education and knowledge sharing, horizon scanning and issue analysis, and experimentation and building;
  • An AI Challenges Clinic, staffed by a cross-institutional, multidisciplinary team able to respond to emergent challenges at the intersection of AI and society, and to represent public interest “clients” in order to help them identify, develop, and deploy AI-based tools that further their public interest missions.

We welcome public engagement in the research projects taking place across the Berkman Klein Center, the MIT Media Lab, and other planned research partners. We are not currently planning an open call for funding, but we welcome discussions with all institutions and individuals engaged in research related to developing ethical AI in the public interest. Questions can be sent to ai-questions@cyber.harvard.edu.

Please also see the FAQ for additional information.