Project

AI: Transparency and Explainability

Systems that harness AI to distill insights from large amounts of data, promising greater personalization and precision, are becoming increasingly ubiquitous, with applications ranging from clinical decision support to autonomous driving and predictive policing. While commonsense reasoning remains one of the holy grails of AI, there are legitimate concerns about holding autonomous systems accountable for the decisions they make and for the ensuing consequences, both intended and unintended. There are many ways to hold AI systems accountable. In our work, we focus on transparency and explainability: obtaining human-intelligible and human-actionable information about how autonomous systems operate.

Starting in the fall of 2017, we convened a cross-disciplinary working group led by Finale Doshi-Velez, a computer science professor at Harvard, along with Mason Kortz, an instructional fellow at the Cyberlaw Clinic, to think about the concepts of transparency and explainability with regard to AI systems. The working group included faculty members from across Harvard as well as fellows and researchers from the Berkman Klein Center. Through this work, we have examined the relationship between interpretability and the law and published a foundational resource on the topic (“Accountability of AI Under the Law: The Role of Explanation”). We continue to think critically about the emerging challenges of transparency and explainability in AI systems through cross-disciplinary research collaboration, and we foster public awareness of the issue by translating our research findings for wider audiences.


Our Work

News

Adversarial attacks on medical AI: A health policy challenge

Emerging vulnerabilities demand new conversations

Technical solutions alone aren't enough to address vulnerabilities in machine learning systems

Mar 21, 2019
News
Mar 20, 2018

AI is more powerful than ever. How do we hold it accountable?

Trying to understand advanced AI is like trying to understand the inner workings of another person’s mind.

Event
Mar 6, 2018 @ 12:00 PM

The Accuracy, Fairness, and Limits of Predicting Recidivism

featuring Julia Dressel

COMPAS is software used across the country to predict who will commit future crimes. It doesn’t perform any better than untrained people who responded to an online survey.

Publication
Nov 27, 2017

Accountability of AI Under the Law: The Role of Explanation

The paper reviews current societal, moral, and legal norms around explanations, and then focuses on the different contexts under which an explanation is currently required under…



People

Team