The Accuracy, Fairness, and Limits of Predicting Recidivism
featuring Julia Dressel
Tuesday, March 6, 2018 at 12:00 pm
Harvard Law School campus
Pound Hall, Ballantine Classroom
RSVP required to attend in person
Event will be live webcast at 12:00 pm
Algorithms for predicting recidivism are commonly used in the criminal justice system to assess a defendant’s likelihood of reoffending. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than human judgment. However, our study shows that COMPAS, a widely used commercial risk assessment tool, is no more accurate or fair than predictions made by people with little or no criminal justice expertise.
This event is supported by the Ethics and Governance of Artificial Intelligence Initiative at the Berkman Klein Center for Internet & Society. In conjunction with the MIT Media Lab, the Initiative is developing activities, research, and tools to ensure that fast-advancing AI serves the public good. Learn more at https://cyber.harvard.edu/research/ai.
Julia Dressel recently graduated from Dartmouth College, where she majored in both Computer Science and Women’s, Gender, & Sexuality Studies. She is currently a software engineer in Silicon Valley. Her interests lie at the intersection of technology and bias.
- Science Advances paper, "The accuracy, fairness, and limits of predicting recidivism": http://advances.sciencemag.org/content/4/1/eaao5580
Articles written about the study: