Openness and Oversight of Artificial Intelligence

Jonathan Zittrain, faculty director of the Berkman Klein Center and Professor at Harvard Law School, discusses the role regulators and oversight groups might play as AI becomes more and more woven into the fabric of society.


Transcript

"Artificial Intelligence" is one label for it. But another label for it is: just forms of systems that evolve under their own rules in ways that might be unexpected even to the creator of those systems, that will be used in some way to substitute for human agency.

And that substitution for human agency might be something that is quite autonomy-enhancing for humans, individually or in groups; if you have a system that can worry about stuff you don't have to worry about anymore, you can turn your attention to other, possibly more interesting or important, issues.

On the other hand, if you're consigning agenda-setting power, decision-making power, to a system - again, either individually or in a group - that may carry with it consequences that people aren't keeping much of an eye on.

I think that's creating some of the discomfort we see right now with the pace at which AI is growing, and with applications of machine learning and other systems that can develop under their own steam.

These are the sorts of things that give us some pause.

And I think about the provision of government services, or decisions that are often uniquely made by governments: under what circumstances somebody should get bail, and how much the bail should be set at; whether somebody should be paroled from prison; how long a sentence should be. These are things we have usually consigned to human actors - judges - but those judges are subject to their own biases and fallibilities and inconsistencies.

There is now an opportunity to start thinking about what it would mean - equal protection under the law - to treat similar people similarly. Machines could be quite helpful with that, double-checking the way in which our cohort of judges is behaving. But it could also be, I think, an unfortunate example of "set it and forget it": biases could creep in, often in unexpected ways or circumstances, and that will really require some form of oversight.

All of these systems not only have their own outputs and dependencies and people that they affect; they may also be interacting with other systems. And that can end up with unexpected results, quite possibly counterintuitive ones. For many, many years, for the functions in society undertaken by professionals - where the professionals are the most empowered, able to really affect other people's lives - we have often organized them into a formal profession, even with a guild that you need special qualifications to join. There are professional ethics independent of what you agree to do for a customer or a client.

Now, I don't know if AI is ready for that. I don't know that we would want to restrict somebody in a garage from experimenting with some cool code and neat data. At the same time, when that data gets spun up and starts affecting millions or tens of millions of people, it's not clear that we still want it to behave as if it's just a cool project in a garage.

Interestingly, academia in huge part gave us the internet, which in turn has been the gift that keeps on giving. So many features of the way the internet was designed and continues to operate reflect the values of academia: an openness to contribution from nearly anywhere, and an understanding that we should try things out and let them sink or swim on their reception, rather than trying to handicap ahead of time what exactly is going to work, tightly controlled by one firm or a handful. These are all reflected in the internet. And for AI, I think there's a similar desire to be welcoming to as many different ways of implementing and refining a remarkable tool set that has developed in just a few years, and the corresponding reams of data that can be used - data that in turn can go from innocuous to quite sensitive in just one hop.

To have academia not just playing a meaningful role but central to these efforts strikes me as an important societal hedge against what otherwise can be the proprietization of some of the best technologies, and our inability to understand how they're doing what they do - because often we don't know what we don't know. Academia should be able even to suggest design changes or tweaks, and then compare them rigorously against some set of criteria - criteria that in turn can be debated: what makes for a better society, what is helping humanity, what is respecting dignity and autonomy. Those are questions we may never fully settle, but we may have a sense, along the spectrum, of what is pushing things in one direction or another.

If we didn't have academia playing a role, it might just be a traditional private arms race. And we could find that "gosh, somehow this magic box does some cool thing, offered by name-your-company; we don't really know how it works, and because it's a robot it's never going to quit its job and move to another company and spread that knowledge, or retire and teach."

These are the sorts of things that, over the medium to longer term, mean that having a meaningful open project - one that really develops this next round of technology in the kind of open manner in which the internet was developed, and is often healthily criticized and refined - is what we should be aiming for with AI.
