The Accuracy, Fairness, and Limits of Predicting Recidivism
featuring Julia Dressel
Event Description
Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a future crime. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than humans. However, our study shows that the widely used commercial risk assessment software COMPAS is no more accurate or fair than predictions made by people with little or no criminal justice expertise.
Notes from the Talk
“With the rise of big data and the prevalence of technology in everything we do, we’ve become frequent subjects of algorithms,” explained Julia Dressel, a recent graduate of Dartmouth College and current software engineer. Dressel spoke about her research on the fairness and accuracy of algorithmic recidivism predictions.
In 2016, a ProPublica study showed that algorithms were predicting inflated risks of recidivism for black defendants and deflated risks for white defendants. In response to this evidence of racial bias, researchers began asking what benefits these algorithms actually provide. Dressel’s study asked whether one such tool, COMPAS, outperformed human judgment.
The study showed that COMPAS was no more accurate or objective at predicting recidivism than humans with no legal expertise (recruited through Amazon Mechanical Turk). However, Dressel emphasized that the takeaway from her research should not be that humans are just as good as machines at predicting recidivism, but rather that neither predicts it with high accuracy (only around 67%). Even more importantly, both the humans and the software over-predicted recidivism for black defendants and under-predicted it for white defendants. The information shown to the human participants did not include race, so these disparities suggest that racial bias is already embedded in the remaining data, even when race is not given as an explicit variable.
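The disparity Dressel describes can be stated as a gap in error rates between groups: over-prediction corresponds to a higher false positive rate (people wrongly predicted to reoffend), and under-prediction to a higher false negative rate. The sketch below is purely illustrative; it is not from the talk or the paper, and the data is invented, but it shows how such group-wise error rates are typically computed.

```python
# Illustrative sketch (not from the study): quantifying over- and
# under-prediction as false positive and false negative rates per group.
# All data below is made up for demonstration.

def error_rates(predicted, reoffended):
    """Return (false positive rate, false negative rate) for one group.

    predicted:  list of bools, True = predicted to recidivate
    reoffended: list of bools, True = actually recidivated
    """
    fp = sum(p and not r for p, r in zip(predicted, reoffended))
    fn = sum(not p and r for p, r in zip(predicted, reoffended))
    negatives = sum(not r for r in reoffended)  # did not recidivate
    positives = sum(r for r in reoffended)      # did recidivate
    return fp / negatives, fn / positives

# Hypothetical toy data for two groups of defendants.
pred_a = [True, True, False, True, False, False]
out_a  = [True, False, False, True, False, True]
pred_b = [False, True, False, False, True, False]
out_b  = [False, True, True, False, True, False]

fpr_a, fnr_a = error_rates(pred_a, out_a)
fpr_b, fnr_b = error_rates(pred_b, out_b)
# A higher false positive rate for one group means its members are more
# often wrongly predicted to reoffend -- the kind of disparity ProPublica
# reported for black defendants.
print(f"group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")
print(f"group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")
```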
Dressel concluded her talk with several questions for the future: Which is more important, accuracy or transparency? If we cannot have a perfect predictor, what error rate will we, as a society, tolerate? Who is responsible for regulating these technologies? And finally, who should be held accountable when they do not work as expected?
Notes by Donica O'Malley
About Julia
Julia Dressel recently graduated from Dartmouth College, where she majored in both Computer Science and Women’s, Gender, & Sexuality Studies. She is currently a software engineer in Silicon Valley. Her interests lie at the intersection of technology and bias.
Links
- Science Advances paper, "The accuracy, fairness, and limits of predicting recidivism": http://advances.sciencemag.org/content/4/1/eaao5580
- Articles written about the study: