Chinmayi Arun of the Centre for Communication Governance at National Law University, Delhi discusses the challenges unchecked AI development could pose to civil liberties in Asia.
In a world of conflicting values, it's going to be difficult to develop values for artificial intelligence that are not the lowest common denominator. In Asia particularly, we have a lot of countries that believe that governments are the best way to make decisions for people, and that individual human rights can be voiced only by the state. In that context, if we're trying to come up with artificial intelligence that recognizes individual human rights, and that looks to empower citizens and users, and to create transparency, it's going to be really challenging to come up with an international coordinated regime that does this.
People have developed AI that is predictive. People are researching ways to make sure that AI is able to target advertisements at people depending on their preferences, the devices they use, the routes they take. Now that kind of predictive AI can very easily be used for surveillance. The states in Asia, including India, are investing very heavily in mass surveillance and creating large centralized databases that they haven't yet fully worked out how to sweep. In India, we've also got news of the state using drones to monitor assemblies of people in public places just to make sure nothing goes wrong. We've got news that the government is developing social media labs to watch online social media to see what people are saying, and what kinds of subjects are trending. And in that context the question we're asking ourselves is: when the state chooses to use its resources to get AI to do these things, how far is AI going to be used to control and monitor the citizen, as opposed to enabling the citizen? Because in democracies like ours the balance of power between the citizens and the state is really delicate. There is a great potential for AI to tip that balance of power in favor of the state.
While it's important to make sure that we don't chill innovation, it's also important to be cautious and to make sure that technology doesn't drag us down a dark path. We've got examples from history like the Manhattan Project, like the way in which technology was used during the Holocaust, to remind us that if we're not careful about what we do with technology it can be abused in ways that we will come to deeply regret.
It's necessary to make sure that not just human rights and political theory, but also all the other disciplines that understand what it means to be human, and how to engage with humans, are involved in the designing of AI. If we don't work out a way in which citizens are able to ask the right questions about AI to ensure accountability every time AI is created and used, we might be heading towards the world that Orwell predicted. And that would be really unfortunate, because new technology should lead to a better world and not a more controlled world, or an unequal world.
Technology moves very quickly around the world and so it's really important to intervene in Asia at the stage of design. People sometimes have the best of intentions, but because of the way in which they're educated or the way in which they're taught to think, the way in which they design technology can end up being really damaging to the world. Conversely it could end up being really beautiful as well. And that's why it's really important that we get into AI right now and help the people that are designing it think of it in a way that imagines a better world.