Designing AI to Complement Humanity

Barbara Grosz of the John A. Paulson School of Engineering and Applied Sciences at Harvard University and the Stanford One Hundred Year Study on Artificial Intelligence (AI100) discusses how we are developing AI to complement, rather than displace, humanity.

Transcript

I think one of the things I want to say from the start is: it's not like AI is going to appear. It's actually out there, in some instances in ways that we never even notice. So for example: checking credit card usage, or predicting which patients are likely to come back into the emergency room, and therefore keeping them from going home only to have to return. There are some very clever uses of artificial intelligence in education. But increasingly it's out there in ways in which we do notice it, for example the various personal assistants on our phones. So it's out there making a difference, in most cases in situations where it's not replacing people but really working with people.

I stress that distinction between replacing people and complementing people because so much of the science fiction that's out there, and so much that's in the press, presumes that the goal would be to replace people. But there's a perfectly wonderful way to replace human intelligence: it takes a man and a woman, certain acts, and you're done! And human intelligence is limited in certain ways, so why make that the aim? It has fascinated people for centuries, probably tied back to religion and people being concerned that humans would try to imitate God. This is the story of the Golem, it's the story of Frankenstein, it's the story of Ex Machina.

But that's not the best way to think about developing artificial intelligence methods, nor about embodying them in computer systems. Rather, it would be better to complement people, as many computer systems do now. That's the reason I make that distinction and urge it. Regardless of which of the two aims you pick, the systems, unless we just send them to Mars by themselves, are going to exist in a world that's populated with human beings.

You can see this playing out in something that's been in the press a lot recently: autonomous and semi-autonomous vehicles. With autonomous vehicles, the idea is they just drive; no person is involved in the driving at all. Semi-autonomous vehicles do some of the driving but then hand off to people. In both cases they're interacting with people. So until we build roads on which the only vehicles are fully autonomous, the vehicles are going to have to interact with people. And even if the only vehicles were fully autonomous, we would also have to get rid of all of the pedestrians and all of the bicycles. That's the issue with fully autonomous vehicles: they will still have to interact with people.

Semi-autonomous vehicles have to take into account people's cognitive capacities in order to handle the so-called "handoff" between people and computer systems appropriately. So except in a few instances, there's no taking people out of the picture.

I think it's much more valuable and societally useful to design in, from the very beginning, ways to interact appropriately with people, rather than building something separate from people and then presuming people will adjust to it.

What's crucial at this point is to bring together expertise from these different fields, and that expertise has to be brought in before the systems are designed and released to the world. And now is the time to think about this: to bring together people who are experts in artificial intelligence with people who understand ethics deeply, with psychologists who understand human cognition, and with social scientists who understand social organizations. So that we can, as the rubric now is, "make AI for social good." And that rubric also covers building systems that help low-resource communities, building systems that protect the environment, and building systems that contribute to education and healthcare.

I think we need both to train and teach people about ethics and to make ethics part of the process of designing these systems. And here I want to say I'm not talking about professional ethics. I'm talking about really understanding the trade-offs between consequentialist ideas and deontological ideas, grappling with virtue ethics, thinking about justice, thinking about who you're serving; really a deep sense of ethics about these systems.

It's a years-long process of having people from these different fields come together, explain their work, explain their perspectives to each other in ways that are accessible, treat those different perspectives with respect, and develop a common vocabulary and a way of approaching things together. That can't be short-circuited. It's really a years-long process.
