Designing Artificial Intelligence to Explain Itself

A new working paper maps out critical starting points for thinking about explanation in AI systems.

“Why did you do that?” The right to ask that deceptively simple question and expect an answer creates a social dynamic of interpersonal accountability. Accountability, in turn, is the foundation of many important social institutions, from personal and professional trust to legal liability to governmental legitimacy and beyond.

As we integrate artificial intelligence more deeply into our daily technologies, it becomes important to ask “why” not just of people, but of systems. However, human and artificial intelligences are not interchangeable. Designing an AI system to provide accurate, meaningful, human-readable explanations presents practical challenges, and our responses to those challenges may have far-reaching consequences. Setting guidelines for AI-generated explanations today will help us understand and manage increasingly complex systems in the future.

In response to these emerging questions, a new working paper from the Berkman Klein Center at Harvard University and the MIT Media Lab maps out critical starting points for thinking about explanation in AI systems. “Accountability of AI Under the Law: The Role of Explanation” is now available to scholars, policy makers, and the public.

“If we’re going to take advantage of all that AIs have to offer, we’re going to have to find ways to hold them accountable,” said Finale Doshi-Velez of Harvard’s John A. Paulson School of Engineering and Applied Sciences. “Explanation is one tool toward that end. We see a complex balance of costs and benefits, social norms, and more. To ground our discussion in concrete terms, we looked to ways that explanation currently functions in law.”

Doshi-Velez and Mason Kortz of the Berkman Klein Center and Harvard Law School Cyberlaw Clinic are lead authors of the paper, which is the product of an extensive collaboration within the Ethics and Governance of Artificial Intelligence Initiative, now underway at Harvard and MIT.

“An explanation, as we use the term in this paper, is a reason or justification for a specific decision made by an AI system: how a particular set of inputs leads to a particular outcome,” said Kortz. “A helpful explanation will tell you something about this process, such as the degree to which an input influenced the outcome, whether changing a certain factor would have changed the decision, or why two similar-looking cases turned out differently.”
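To make that idea concrete, here is a minimal, purely illustrative sketch in Python (our own example, not drawn from the paper): a toy linear scoring rule for which we can report each input's contribution to a decision and check whether changing a single factor would have flipped it. All weights, inputs, and the threshold are hypothetical.

# Purely illustrative sketch (not from the paper): a toy linear "decision
# system" used to show the kinds of explanation described above, namely how
# much each input contributed to the outcome, and whether changing a single
# factor would have flipped the decision. All values here are hypothetical.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0  # approve when the weighted score reaches this value


def decide(applicant):
    """Return the decision plus each input's contribution to the score."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, contributions


def counterfactual(applicant, factor, new_value):
    """Would changing this one factor have changed the decision?"""
    original_decision, _ = decide(applicant)
    altered_decision, _ = decide({**applicant, factor: new_value})
    return original_decision != altered_decision


applicant = {"income": 3.0, "debt": 2.0, "years_employed": 2.0}
approved, contributions = decide(applicant)      # score 0.5, so denied
print("approved:", approved)
print("per-input contributions:", contributions)
print("would zero debt have changed the decision?",
      counterfactual(applicant, "debt", 0.0))    # True: score would be 2.1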

The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts in which an explanation is currently required by law. It ultimately finds that, at least for now, AI systems can and should be held to a standard of explanation similar to the one that currently applies to humans.

“It won’t necessarily be easy to produce explanations from complex AI systems that are processing enormous amounts of data,” Kortz added. “Humans are naturally able to describe their internal processes in terms of cause and effect, although not always with great accuracy. AIs, on the other hand, will have to be intentionally designed with the capacity to generate explanations. This paper is the starting point for a series of discussions that will be increasingly important in the years ahead. We’re hoping this generates some constructive feedback from inside and outside the Initiative.”

Guided by the Berkman Klein Center at Harvard and the MIT Media Lab, the Ethics and Governance of Artificial Intelligence Initiative aims to foster global conversations among scholars, experts, advocates, and leaders from a range of industries. By developing a shared framework to address urgent questions surrounding AI, the Initiative aims to help public and private decision-makers understand and plan for the effective use of AI systems for the public good. More information at: https://cyber.harvard.edu/research/ai
