Systems harnessing AI technologies’ potential to distill insights from large amounts of data and deliver greater personalization and precision are becoming increasingly ubiquitous, with applications ranging from clinical decision support to autonomous driving and predictive policing. While commonsense reasoning remains one of the holy grails of AI, there are legitimate concerns about holding autonomous systems accountable for the decisions they make and the ensuing consequences, whether intended or unintended. There are many ways to hold AI systems accountable. In our work, we focus on transparency and explainability — issues related to obtaining human-intelligible and human-actionable information about the operation of autonomous systems.
Starting in the fall of 2017, we convened a cross-disciplinary working group to think about the concepts of transparency and explainability with regard to AI systems, led by Finale Doshi-Velez, a computer science professor at Harvard, along with Mason Kortz, an instructional fellow at the Cyberlaw Clinic. The working group included faculty members from across Harvard, as well as other fellows and researchers from the Berkman Klein Center. Through this group, we have been examining the relationship between interpretability and the law, and we published a foundational resource on the topic (“Accountability of AI Under the Law: The Role of Explanation”). We continue to think critically about the impending challenges of transparency and explainability in AI systems through cross-disciplinary research collaboration, and we foster public awareness of the issue by translating our research findings for wider audiences.