danah boyd of Data & Society discusses how society could engage AI to reduce, rather than exacerbate, existing social challenges.
Transcript
One of the challenges of building new technologies is that we often want them to solve things that have been very socially difficult to solve. Things that we don't have answers to. Problems that we don't know how we would best go about fixing in a socially responsible way. Technology cannot fill that gap for us. In fact, technology, and artificial intelligence in particular, is more likely than not to exacerbate existing challenges.
So, if we look at different issues where we have major social challenges ahead of us, whether that is in the business realm, in criminal justice, in medicine, or in education, we need to think hard and deeply about how we want to marry technology and artificial intelligence with the broader social challenges that we're seeing in those systems.
Artificial intelligence means different things to different people. For the technical community, it's a very particular and narrow set of technologies focused on neural networks, advanced machine learning techniques, or robotics of a particular ilk. And these kinds of techniques have been in development for an extended period of time, so most technical folks are thinking about the iterations, the evolutions, and the histories of those techniques.
For the business community, artificial intelligence has become the new buzzword: the idea of being able to do magical things with large amounts of data, and to solve through technical means problems that have in many ways become socially intractable.
For the public, AI really refers to the imagination that computers can do crazy things. Those crazy things can be positive: solving the world's problems, computers appearing to be smart. But they can also be absolutely terrifying, and there we usually refer to Hollywood concepts.
One of the most important things to do when we start to study artificial intelligence is actually to bring together different frameworks of thinking. We need to think about it both technically and socially, and the main reason is that the biggest problems ahead of us are not simply technical or simply social. In fact, it's the marriage between the two that becomes the most important.
For this reason we have to take different kinds of social issues very seriously. We have to really understand what's at stake: the biases in the data that make artificial intelligence function, the interpretation layers, the ways in which these systems can get manipulated. All of these social issues become critical to making certain that the technologies are done right.
The key challenge in figuring out how to think ethically is actually to think about how we want to marry different kinds of social mindsets with different kinds of technical mindsets. How do we get the technical folks to start articulating the realm of possibilities that are available to us, and what are the governance structures that we want to see in place to make certain that we can choose responsibly?
We're entering a realm where we're paying a lot of attention to cybersecurity, where we're realizing that the security of our infrastructure can put us at risk in tremendous ways. The same will be true of artificial intelligence. But the risks we face aren't necessarily about traditional hacking. They're about the manipulation of data, about data being misinterpreted in different ways, and about ways of cleaning and processing that data for analysis that don't take certain social issues into account. And so this means we need to really think about the whole process by which we produce artificial intelligence systems.
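To make the data-cleaning risk concrete, here is a minimal Python sketch, not from the talk itself; the dataset, column names, and missingness pattern are all hypothetical. It shows how a routine cleaning step, dropping rows with missing values, can quietly change which groups a system ends up learning from.

```python
import pandas as pd

# Hypothetical survey data: one subgroup is far more likely to have a
# missing income field (e.g., because the question felt intrusive to them).
df = pd.DataFrame({
    "group":  ["A"] * 6 + ["B"] * 6,
    "income": [50, 52, None, 49, 51, None,       # group A: 2 of 6 missing
               60, None, None, None, 62, None],  # group B: 4 of 6 missing
})

print(df["group"].value_counts(normalize=True))     # before: A 0.50, B 0.50

# A standard, innocuous-looking cleaning step...
clean = df.dropna()

# ...quietly changes who the downstream model sees.
print(clean["group"].value_counts(normalize=True))  # after: A 0.67, B 0.33
```

The point is not that dropping missing rows is always wrong; it's that a choice that looks purely technical can silently encode a social decision about whose data counts in the analysis.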