The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts in which the law currently requires an explanation. It ultimately finds that, at least for now, AI systems can and should be held to a standard of explanation similar to the one to which humans are currently held.
Ethics and Governance of Artificial Intelligence
The development, application, and capabilities of AI-based systems are evolving rapidly, leaving largely unanswered a broad range of important short- and long-term questions related to the social impact, governance, and ethical implications of these technologies and practices. The Berkman Klein Center and the MIT Media Lab, as anchor institutions of the Ethics and Governance of Artificial Intelligence Fund, are conducting evidence-based research to provide guidance to decision-makers in the private and public sectors, and are engaging in impact-oriented pilot projects to bolster the use of AI for the public good. At the same time, they are building an institutional knowledge base on the ethics and governance of AI, fostering human capacity, and strengthening interfaces with industry and policy-makers.
Our efforts include research sprints and pilots, community-building initiatives, and education, training, and outreach activities. Going forward, core use cases such as autonomous vehicles, criminal and social justice, and media and information quality will be examined through the lenses of cross-cutting themes, including global governance, diversity and inclusion, and transparency and explanation.