
Earlier this month, a group of researchers from Harvard and MIT directed an open letter to the Massachusetts Legislature to inform its consideration of risk assessment tools as part of ongoing criminal justice reform efforts in the Commonwealth. Risk assessment tools are pieces of software that courts use to assess the risk posed by a particular criminal defendant in a particular set of circumstances. Senate Bill 2185, passed by the Massachusetts Senate on October 27, 2017, mandates implementation of risk assessment tools in the pretrial stage of criminal proceedings.

In this episode of the Berkman Klein Center podcast, The Platform, Professor Chris Bavitz, Managing Director of the Cyberlaw Clinic, discusses some of the concerns and opportunities related to the use of risk assessment tools, as well as some of the related work the Berkman Klein Center is doing as part of the Ethics and Governance of AI initiative in partnership with the MIT Media Lab.

What need are risk assessment tools addressing? Why would we want to implement them?

Well, some people would say that they’re not addressing any need and ask why we would ever use a computer program when doing any assessments. But I think that there are some ways in which they’re helping to solve problems, particularly around consistency. Another potential piece of it, and this is where we start to get sort of controversial, is that the criminal justice system is very biased and has historically treated racial minorities and other members of marginalized groups poorly. A lot of that may stem from human biases that creep in anytime you have one human evaluating another human being. So there’s an argument to be made that if we can do risk scoring right and turn it into a relatively objective process, we might remove from judges the kind of discretion that leads to biased decisions.

Are we there yet? Can these tools eliminate bias like that?

My sense is that from a computer science perspective we’re not there. In general, these kinds of technologies that use machine learning are only as good as the data on which they’re trained. So if I’m trying to decide whether you’re going to come back for your hearing in six months, the only information I have to train a risk scoring tool to give me a good prediction on that front is data about people like you who came through the criminal justice system in the past. And if we take as a given that the whole system is biased, then the data coming out of that system is biased. And when we feed that data to a computer program, the results are going to be biased.
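To make the training-data problem concrete, here is a minimal sketch (not from the interview) of how a model trained on biased historical records reproduces that bias. Every feature, label, and number below is a hypothetical illustration, not a description of any real tool.

```python
# Minimal sketch: a risk model trained on biased historical labels
# inherits the bias. All features, labels, and rates are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: group membership (0/1) and prior record count.
group = rng.integers(0, 2, n)
priors = rng.poisson(1.5, n)

# Suppose the historical "failure to appear" label was recorded more
# often for group 1 at the same underlying behavior, so the label
# itself carries the system's bias, not just the features.
true_risk = 0.2 + 0.05 * priors
recorded = rng.random(n) < np.clip(true_risk + 0.15 * group, 0.0, 1.0)

X = np.column_stack([group, priors])
model = LogisticRegression().fit(X, recorded)

# The trained model assigns higher average risk to group 1 even though
# the underlying behavior distribution is identical across groups.
for g in (0, 1):
    mask = group == g
    avg = model.predict_proba(X[mask])[:, 1].mean()
    print(f"group {g}: mean predicted risk = {avg:.3f}")
```

Because the model learns from the recorded label rather than the underlying behavior, the disparity survives training; a more accurate optimizer only reproduces it more faithfully.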

And we don’t know what actually goes into these tools?

Many of the tools in use in states around the country are developed by private companies. So with most of the tools we do not have a very detailed breakdown of what factors are being considered, what relative weights are being given to each factor, that sort of thing. So one of the things advocates in this area are pushing for is, at the very least, more transparency.
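To illustrate what “factors and relative weights” means in practice, here is a hypothetical sketch of a simple actuarial-style score of the sort transparency advocates want disclosed. The proprietary tools do not publish this breakdown; every factor name and weight below is invented.

```python
# Hypothetical actuarial-style risk score: a weighted sum of factors.
# Every factor name and weight here is made up for illustration.
FACTORS = {
    "prior_failures_to_appear": 3,
    "pending_charges": 2,
    "prior_convictions": 1,
    "age_under_23": 1,
}

def risk_score(defendant: dict) -> int:
    """Sum of weight * factor value; a higher score means 'riskier'."""
    return sum(weight * defendant.get(name, 0)
               for name, weight in FACTORS.items())

# Example: two prior failures to appear plus one pending charge
# gives 3*2 + 2*1 = 8.
print(risk_score({"prior_failures_to_appear": 2, "pending_charges": 1}))
```

Transparency in this sense means publishing the factor list, the weights, and the cutoffs that map a score to a risk label, so that the scoring can be contested and audited.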

Tell me about the Open Letter to the Legislature. Why did you write it?

The Massachusetts Senate and House are in the process of considering criminal justice reform broadly speaking in Massachusetts. The Senate bill has some language in it that suggests that risk scoring tools should be adopted in the Commonwealth and that we should take steps to make sure that they’re not biased. And a number of us, most of whom are involved in the Berkman and MIT Media Lab AI Ethics and Governance efforts, signed onto this open letter to the Massachusetts Legislature that basically said, “Look, these kinds of tools may have a place in the system, but simply saying ‘Make sure they’re not biased’ is not enough. And if you’re going to go forward, here are a whole bunch of principles that we want you to adhere to,” basically trying to set up processes around the procurement or development of the tool, the implementation of the tool, the training of the judges on how to use it and what the scores really mean and how they should fit into their legal analysis, and then ultimately the rigorous evaluation of the outcomes. Are these tools actually having the predictive value that was promised? How are we doing on the bias front? Does this seem to be generating results that are biased in statistically significant ways?
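As a rough illustration of the kind of outcome evaluation the letter contemplates, the following sketch tests whether false positive rates (people flagged high risk who nonetheless appeared) differ across groups in a statistically significant way. The records, group labels, and rates are all simulated assumptions; a real audit would use actual post-deployment data.

```python
# Sketch of a disparity audit: compare false positive rates across
# groups with a chi-squared test. All data below is simulated.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n = 5_000

group = rng.integers(0, 2, n)
# Bake a disparity into who gets flagged "high risk"...
flagged = rng.random(n) < np.where(group == 1, 0.45, 0.30)
# ...while the actual failure-to-appear rate is the same for everyone.
failed = rng.random(n) < 0.25
appeared = ~failed

# 2x2 table over people who appeared: flagged vs. not flagged, per group.
table = np.array([
    [np.sum(flagged & appeared & (group == g)),
     np.sum(~flagged & appeared & (group == g))]
    for g in (0, 1)
])
chi2, p, _, _ = chi2_contingency(table)
fpr = table[:, 0] / table.sum(axis=1)
print(f"FPR group 0: {fpr[0]:.3f}, group 1: {fpr[1]:.3f}, p = {p:.2g}")
```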

What are you hoping will happen next?

I think we would view part of our mission here at Berkman Klein as making sure that this is the subject of vigorous debate. Informed debate, to be clear, because sometimes the debate about this devolves into one of two extremes: either technology is going to solve all our problems, or we’re headed for a dystopian future with robotic judges sentencing us to death. I don’t think it’s either of those things. Having this conversation in a way that is nuanced and responsible will be really difficult, but I think it’s something we absolutely have to do.

This initiative at Berkman Klein and MIT is the Ethics and Governance of Artificial Intelligence Initiative, but nothing we’ve talked about here really involves artificial intelligence in the sense of a computer program that learns, evolves, and adapts over time. But that’s coming. And the more we get used to these kinds of systems working in the criminal justice system and spitting out risk scores that judges take into account, the more comfortable we’re going to be as the computing power increases and the autonomy of these programs increases.

I don’t mean to be too dystopian about it and say that bad stuff is coming, but it’s only a matter of time. It’s happening in our cars, and it’s happening in our news feeds on social media sites. More and more decisions are being made by algorithms. And anytime we get a technological intervention in a system like this, particularly where people’s freedom is at stake, I think we want to tread really carefully, recognizing that the next iteration of this technology is going to be more extensive and will raise even more challenging questions.


Subscribe to us on SoundCloud, iTunes, or RSS.

You might also like


Projects & Tools

Algorithms and Justice

The use of algorithms in the judiciary has already raised significant questions about bias and fairness, and looking ahead the moral questions become even more challenging.


Publications

An Open Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System (Nov 9, 2017)

This open letter, signed by Harvard and MIT-based faculty, staff, and researchers, is directed to the Massachusetts Legislature to inform its consideration of risk assessment…

Follow-up Letter to the Members of the Massachusetts Legislature Regarding the Adoption of Actuarial Risk Assessment Tools in the Criminal Justice System (Feb 9, 2018)

The following open letter, signed by Harvard and MIT-based faculty, staff, and researchers, is directed to the Massachusetts Legislature to inform its consideration of risk…