Luncheon Series

The Accuracy, Fairness, and Limits of Predicting Recidivism

featuring Julia Dressel

Event Description

Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a future crime. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than humans. However, our study shows that the widely used commercial risk assessment software COMPAS is no more accurate or fair than predictions made by people with little or no criminal justice expertise.

This event is supported by the Ethics and Governance of Artificial Intelligence Initiative at the Berkman Klein Center for Internet & Society. In conjunction with the MIT Media Lab, the Initiative is developing activities, research, and tools to ensure that fast-advancing AI serves the public good. Learn more at https://cyber.harvard.edu/research/ai.
 

Notes from the Talk

“With the rise of big data and the prevalence of technology in everything we do, we’ve become frequent subjects of algorithms,” explained Julia Dressel, recent graduate of Dartmouth College and current software engineer. Dressel spoke about her research on the fairness and accuracy of algorithmic recidivism predictions.

In 2016, a ProPublica investigation showed that such algorithms predicted inflated risks of recidivism for black defendants and deflated risks for white defendants. In response to this evidence of racial bias, researchers began asking what benefit these algorithms actually provide. Dressel’s study asked whether one such tool, COMPAS, actually outperformed human judgment.

The study showed that COMPAS was no more accurate or objective at predicting recidivism than humans without legal expertise (recruited through Amazon Mechanical Turk). However, Dressel emphasized that the takeaway from her research should not be that humans are just as good as machines at predicting recidivism, but rather that neither predicts it with a high degree of accuracy (only around 67%). Even more importantly, both the humans and the software over-predicted recidivism for black defendants and under-predicted it for white defendants. The defendant profiles shown to the human participants did not include race, so these results show that racial bias already exists in the other data, even when race is not explicitly given as a variable.
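The kind of comparison Dressel describes can be sketched in code. The snippet below is a minimal, purely illustrative Python example; the data, column names, and 0.5 decision threshold are invented assumptions for illustration, not taken from the study or from COMPAS. It computes overall accuracy for a binary recidivism predictor and then the false positive (over-prediction) and false negative (under-prediction) rates within each racial group, which are the quantities at issue in the ProPublica and Dressel findings.

# Minimal sketch with hypothetical data: compare overall accuracy and
# per-group error rates for a binary recidivism predictor.
# All values and names here are illustrative assumptions, not study data.

import pandas as pd

# Hypothetical records: a risk score, the observed outcome, and a race label.
df = pd.DataFrame({
    "risk_score": [0.8, 0.3, 0.6, 0.2, 0.7, 0.4, 0.9, 0.1],
    "reoffended": [1,   0,   0,   0,   1,   1,   1,   0],
    "race":       ["black", "black", "black", "black",
                   "white", "white", "white", "white"],
})

# Turn the score into a binary prediction with an assumed 0.5 threshold.
df["predicted"] = (df["risk_score"] >= 0.5).astype(int)

# Overall accuracy: the fraction of cases where the prediction matched the outcome.
accuracy = (df["predicted"] == df["reoffended"]).mean()
print(f"overall accuracy: {accuracy:.2f}")

# Per-group error rates:
#   false positive rate = over-prediction (labeled high risk, did not reoffend)
#   false negative rate = under-prediction (labeled low risk, did reoffend)
for race, group in df.groupby("race"):
    non_reoffenders = group[group["reoffended"] == 0]
    reoffenders = group[group["reoffended"] == 1]
    fpr = (non_reoffenders["predicted"] == 1).mean()
    fnr = (reoffenders["predicted"] == 0).mean()
    print(f"{race}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")

On real data, disparities in these per-group error rates can persist even when race is never supplied to the predictor, because other features correlate with race.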

Dressel concluded her talk with several questions for the future: Which is more important, accuracy or transparency? If we cannot have a perfect predictor, what error rate will we, as a society, tolerate? Who is responsible for regulating these technologies? And finally, who should be held accountable when they do not work as expected?

notes by Donica O'Malley

About Julia

Julia Dressel recently graduated from Dartmouth College, where she majored in both Computer Science and Women’s, Gender, & Sexuality Studies. She is currently a software engineer in Silicon Valley. Her interests lie at the intersection of technology and bias.

Links

Download original audio or video from this event.

Subscribe to the Berkman Klein events series podcast.

Past Event
Mar 6, 2018, 12:00 PM - 1:15 PM
