Enabled by support from the Ethics and Governance of Artificial Intelligence Fund, the Berkman Klein Center and the MIT Media Lab are collaborating on a range of activities that include a diverse array of voices and seek to discern and preserve the human element so often lost in highly technical AI conversations. As anchor institutions of the Fund, the Berkman Klein Center and the Media Lab have taken a lead in fostering research that applies the humanities, social sciences, and other disciplines to the development of AI. Working in conjunction with the Fund, the Berkman Klein Center and the Media Lab are creating an expanding global network of scholars, experts, advocates, and thinkers, whose scholarship, experimentation, energy, and impact can be enhanced through new collaborative, interdisciplinary partnerships.
Please visit https://aiethicsinitiative.org for more information about the Ethics and Governance of Artificial Intelligence Fund; additionally, we invite you to explore this page to learn more about some of the supporters, collaborators, and friends in our work.
Related projects in the Berkman Klein Center community
Youth and Media strives to understand young people’s interactions with digital media to “ultimately shape the evolving regulatory and educational framework in a way that advances the public interest.” The Youth and Media team is helping to advance our understanding of how youth think about and interact with AI through qualitative research in the form of focus groups, and by producing learning resources that make foundational concepts in AI accessible to a wider audience. More information about these educational efforts can be found on the Additional Activities: Community and Education page.
metaLAB is an “idea foundry, knowledge-design lab, and production studio” that applies an experimental, creative, and humanities-based approach to thinking about issues related to technology. They are currently pursuing a range of AI + Art projects and events, including The Future of Secrets, AI Senses, and Machine Experience (I and II). Please check out metaLAB's project page for more information.
MIT Media Lab
The Media Lab's ethics and governance of artificial intelligence announcement
Blog posts published by Joi Ito, director of the MIT Media Lab: “AI isn’t a crystal ball, but it might be a mirror”, "Society in the Loop Artificial Intelligence", and "Future of Work in the Age of Artificial Intelligence"
President Barack Obama met with Joi Ito and Scott Dadich at the White House to discuss "the hope, the hype, and the fear around AI" in an interview for Wired
A paper co-authored by Chelsea Barabas, Karthik Dinakar, Joi Ito, Madars Virza, and Jonathan Zittrain on “Interventions over Predictions: Reframing the Ethical Debate for Actuarial Risk Assessment”, presented at the FAT* 2018 conference
A paper co-authored by Iyad Rahwan, Associate Professor of Media Arts & Sciences at the MIT Media Lab, on "The social dilemma of autonomous vehicles"
Computer Science at Harvard
Among the groundbreaking projects underway in the Harvard Computer Science Department is research on the intersection of AI and economics, the Turing test, and autonomous systems.
Barbara J. Grosz, Higgins Professor of Natural Sciences at the Harvard School of Engineering and Applied Sciences, taught a course during the Fall 2016 semester on "Intelligent Systems: Design and Ethical Challenges".
Access Now brings together work in innovative policy, global advocacy, and direct technical support to defend and extend digital rights for people around the world. AI touches on many aspects of their work, which focuses on topics such as Business and Human Rights, Digital Security, Freedom of Expression, Net Discrimination, and Privacy. Working with Amnesty International, they launched the Toronto Declaration, which aims to “protect the rights to equality and non-discrimination in machine learning systems,” at RightsCon 2018.
The ACLU of Massachusetts is a state affiliate of the national ACLU, working to defend the principles enshrined in the Massachusetts Declaration of Rights and the U.S. Constitution. Key projects include the Justice for All project and the Technology for Liberty project, which features work on topics such as the impact of AI-enabled facial recognition technologies on privacy.
The AI Now Institute examines the social implications of AI, focusing on four core domains of rights & liberties, labor & automation, bias & inclusion, and safety & critical infrastructure.
Summaries are available from past AI Now public symposiums: the first, held in July 2016 and hosted by the White House and New York University’s Information Law Institute, and the second, held in July 2017 at the MIT Media Lab. The third symposium was held in October 2018 at the NYU Skirball Center.
AI Now produced a report on “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability” co-authored by Dillon Reisman, Jason Schultz, Kate Crawford, and Meredith Whittaker.
Data & Society
Data & Society seeks to address complex social and cultural issues arising from data-centric and automated technologies. Key projects and outputs include The Intelligence and Autonomy Initiative, which is developing policy research connecting the dots between robots, algorithms, and automation; an AI Pattern Language paper, in which D&S presents a taxonomy of social challenges that emerged from interviews with a range of practitioners working in the intelligent systems and AI industry; and the Algorithms and Publics project, which aims to map how the public sphere is currently understood, controlled, and manipulated in order to spark a richer conversation about what interventions should be considered to support the ideal of an informed and engaged citizenry.
Digital Asia Hub
The Digital Asia Hub (DAH) provides a “platform for research, knowledge sharing, and capacity building related to Internet and Society issues with focus on digital Asia.” They have released a number of articles on artificial intelligence in Asia, including reflections from the inaugural "AI in Asia" conference, as well as the second convening, which focused on ethics, safety, and social impact.
The annual Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) event brings together a community of researchers and practitioners to explore how to address issues related to fairness, accountability, and transparency with computationally rigorous methods.
Global Network of Centers (NoC)
Since 2012, the Network of Centers (NoC) has been bringing together academic institutions with a focus on interdisciplinary research around the development, social impact, policy implications, and legal issues concerning the Internet.
The NoC co-hosted the Global Symposium on AI and Inclusion, which took place in November 2017 in Rio de Janeiro, along with the Institute for Internet and Society Rio (ITS Rio).
HRDAG: Statisticians for Human Rights
The Human Rights Data Analysis Group (HRDAG) is a “non-profit, non-partisan organization that applies rigorous science to the analysis of human rights violations around the world.” The group of scientists aims to establish accountability for human rights violations by supporting human rights projects around the world with relevant data analyses, on topics such as “Setting the Record Straight on Predictive Policing and Race.”
The Institute of Electrical and Electronics Engineers (IEEE) is the world’s largest technical professional organization dedicated to fostering technological innovation for the benefit of humanity by engaging in activities such as research, convening, and setting technology standards. One key example is Ethically Aligned Design (EAD), a global treatise with pragmatic recommendations to help stakeholders develop values-driven autonomous systems.
The International Telecommunication Union (ITU) is the UN specialized agency for information and communication technologies (ICTs). It hosts convenings such as the AI for Good Summit and provides platforms for multi-stakeholder collaborations to build common understandings of emerging issues in AI, as well as research to support technical standardization and policy guidance.
Leverhulme Centre for the Future of Intelligence (CFI)
The Leverhulme Centre for the Future of Intelligence (CFI) explores the impact of artificial intelligence by building an interdisciplinary community of researchers working to ensure that we can make the most of machine intelligence. The Centre runs a series of projects exploring the nature as well as the impact of AI, and convenes meetings such as a roundtable about global rules for AI.
Meedan is a team of designers, technologists, and journalists that build tools that help journalists and news consumers to make sense of the global web. Their current work includes Check, which helps teams and online communities to collaboratively verify breaking news content, and Bridge, which supports crowdsourced translation of social media content.
Mozilla is a global community of technologists, thinkers, and builders collaborating to keep the Internet alive and accessible. Their projects, products and principles enable people to take control and explore the full potential of their lives online; for example, the Common Voice project is building an open database of voice data to “help teach machines how real people speak.” Additionally, they have recently announced a grant to support work that examines AI’s effects on society.
New America (DigiChina)
The DigiChina project at New America is a collaborative initiative working to understand the landscape of digital policymaking in China by translating and analyzing Chinese-language sources, such as excerpts from the White Paper on Artificial Intelligence Standardization with an accompanying analysis, and the New Generation AI Development Plan (AIDP).
Partnership on AI
The Partnership on AI was established to study and formulate best practices on AI technologies, advance the public's understanding of AI, and serve as an open platform for discussion and engagement about AI and its influences on people and society.
Funded by the EU's H2020 research and innovation programme, the SIENNA project develops ethical protocols and codes for human genomics, human enhancement, and AI & robotics through consultations with experts, stakeholders, and citizens; by convening stakeholder workshops and citizen panels; and by conducting international ethical impact assessments and international public opinion surveys.
The World Economic Forum (WEF) brings together stakeholders from government, business, civil society, and other sectors to shape global, regional, and industry agendas and enable effective public-private collaboration. AI touches on their work in many of their System Initiatives, which examine the impact of various “systems” that raise global challenges, including this report about ways to ensure AI and new technologies work for humanity.