Past Project

AI: Transparency and Explainability

Systems that harness AI technologies' potential to distill insights from large amounts of data and provide greater personalization and precision are becoming increasingly ubiquitous, with applications ranging from clinical decision support to autonomous driving and predictive policing. While commonsense reasoning remains one of the holy grails of AI, there are legitimate concerns about holding autonomous systems accountable for the decisions they make and for the ensuing consequences, whether intentional or unintentional. There are many ways to hold AI systems accountable. In our work, we focus on transparency and explainability: issues related to obtaining human-intelligible and human-actionable information about the operation of autonomous systems.

Starting in the fall of 2017, we convened a cross-disciplinary working group led by Finale Doshi-Velez, a computer science professor at Harvard, along with Mason Kortz, an instructional fellow at the Cyberlaw Clinic, to think about the concepts of transparency and explainability with regard to AI systems. The working group included faculty members from across Harvard, as well as other fellows and researchers from the Berkman Klein Center. Through this group, we have examined the relationship between interpretability and the law and published a foundational resource on the topic ("Accountability of AI Under the Law: The Role of Explanation"). We continue to think critically about the impending challenges of transparency and explainability in AI systems through cross-disciplinary research collaboration, and to foster public awareness of the issue by translating our research findings for wider audiences.


Our Work

News
Mar 20, 2018

AI is more powerful than ever. How do we hold it accountable?

Trying to understand advanced AI is like trying to understand the inner workings of another person’s mind.

Event
Mar 6, 2018 @ 12:00 PM

The Accuracy, Fairness, and Limits of Predicting Recidivism

featuring Julia Dressel

COMPAS is software used across the country to predict who will commit future crimes. It doesn’t perform any better than untrained people who responded to an online survey.

Wired
Mar 1, 2018

The Limits of Explainability

For decades, artificial intelligence with common sense has been one of the most difficult research challenges in the field.

News
Nov 27, 2017

Designing Artificial Intelligence to Explain Itself

A new working paper maps out critical starting points for thinking about explanation in AI systems.

As we integrate artificial intelligence deeper into our daily technologies, it becomes important to ask “why” not just of people, but of systems.

Publication
Nov 27, 2017

Accountability of AI Under the Law: The Role of Explanation

The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts under which an explanation is currently required under…

News
Oct 2, 2017

In AI We Trust?

Do we already have the necessary trust in AI, and if not, how do we create it?


Community

Spectrum

Bracing Medical AI Systems for Attacks

There’s new advice on how to handle tampering that fools algorithms and enables healthcare fraud.

Laying groundwork for resilience against real-world adversarial attacks

Mar 22, 2019
News

Adversarial attacks on medical AI: A health policy challenge

Emerging vulnerabilities demand new conversations

Technical solutions alone aren't enough to address vulnerabilities in machine learning systems.

Mar 21, 2019
Wired

Don't Make AI Artificially Stupid in the Name of Transparency

In this Wired op-ed, David Weinberger discusses the implications around maximizing the benefits of machine learning without sacrificing its intelligence. A longer version of the…

Jan 28, 2018

People

Team