AI: Transparency and Explainability

AI systems that distill insights from large amounts of data to deliver greater personalization and precision are becoming increasingly ubiquitous, with applications ranging from clinical decision support to autonomous driving and predictive policing. While commonsense reasoning remains one of the holy grails of AI, there are legitimate concerns about holding autonomous systems accountable for the decisions they make and for the ensuing consequences, whether intended or unintended. There are many ways to hold AI systems accountable. In our work, we focus on transparency and explainability: issues related to obtaining human-intelligible and human-actionable information about the operation of autonomous systems.

Starting in the fall of 2017, we convened a cross-disciplinary working group led by Finale Doshi-Velez, a computer science professor at Harvard, and Mason Kortz, an instructional fellow at the Cyberlaw Clinic, to think about transparency and explainability with regard to AI systems. The working group includes faculty members from across Harvard, as well as other fellows and researchers from the Berkman Klein Center. Through this group, we have been examining the relationship between interpretability and the law, and we published a foundational resource on the topic, “Accountability of AI Under the Law: The Role of Explanation.” We continue to think critically about the impending challenges of transparency and explainability in AI systems through cross-disciplinary research collaborations, and we foster public awareness of the issue by translating our research findings for wider audiences.

Publications 01

Monday, Nov 27, 2017

Accountability of AI Under the Law: The Role of Explanation

The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts under which an explanation is currently required under…

News 04

Washington Post
Tuesday, Mar 20, 2018

AI is more powerful than ever. How do we hold it accountable?

Trying to understand advanced AI is like trying to understand the inner workings of another person’s mind.

Thursday, Mar 1, 2018

The Limits of Explainability

For decades, artificial intelligence with common sense has been one of the most difficult research challenges in the field.

Monday, Nov 27, 2017

Designing Artificial Intelligence to Explain Itself

A new working paper maps out critical starting points for thinking about explanation in AI systems.

As we integrate artificial intelligence deeper into our daily technologies, it becomes important to ask “why” not just of people, but of systems.

Monday, Oct 2, 2017

In AI We Trust?

Do we already have the necessary trust in AI, and if not, how do we create it?

Community 01


Don’t Make AI Artificially Stupid in the Name of Transparency

In this Wired op-ed, David Weinberger discusses the implications around…

Sunday, Jan 28, 2018

Events 01

Mar 6, 2018 @ 12:00 PM

The Accuracy, Fairness, and Limits of Predicting Recidivism

featuring Julia Dressel

COMPAS is software used across the country to predict who will commit future crimes. It doesn’t perform any better than untrained people who responded to an online survey.

People 04