Denoising and Discretion: AI Support for Normative Decisions
Spring Speaker Series
Many decisions require some measure of personal discretion: Was a workplace accident due to negligence? Should a particular person be deported? In such cases, we want to preserve human agency so that the full context of the situation is considered (discretion); at the same time, we usually don't want people making capricious choices (denoising).
In this talk, computer scientist Finale Doshi-Velez and Cyberlaw Clinic lawyer Mason Kortz will discuss the challenges and opportunities of designing AIs that help us make decisions in these normative contexts. Much AI decision support assumes the AI is assisting with a predictive or objective question; in many cases it is not -- and so the support we build is the wrong type. The session will include interactive scenarios for discovering what helps us adhere to due process without being manipulated.
Speakers
Finale Doshi-Velez is a Herchel Smith Professor in Computer Science at the Harvard Paulson School of Engineering and Applied Sciences. She completed her MSc at the University of Cambridge as a Marshall Scholar, her PhD at MIT, and her postdoc at Harvard Medical School. Her interests lie at the intersection of machine learning, healthcare, and interpretability.
Mason Kortz is a clinical instructor at the Harvard Law School Cyberlaw Clinic at the Berkman Klein Center for Internet & Society, where he has worked since January 2017. There, he draws on both his legal training and his background as a software and database developer to bring a technical perspective to issues such as civil rights, government transparency, and police oversight. He is also active in the emerging area of the law of artificial intelligence and algorithms, and has written and presented on the impact of algorithmic decision making on areas as diverse as intellectual property, products liability, and the criminal legal system.