
Get To Know 22-23 BKC Fellow: Petra Molnar

Petra Molnar is a lawyer and anthropologist who examines how technology impacts migration and human rights at global borders. She is Associate Director of the Refugee Law Lab at York University and runs the Migration and Technology Monitor, an archive and community that interrogates technological experiments on people crossing borders. Her book, The Walls Have Eyes: Surviving Migration in the Era of Artificial Intelligence (The New Press), a series of stories and vignettes on migration journeys impacted by technology, is coming out in early 2024! She was a 2022-2023 Fellow at the Berkman Klein Center.  

 

What do you see as the most concerning technological threat to migration? 

That’s a tough question because part of me wants to say all of it! I think it’s the environment, an ecosystem of experimentation that's happening with very little law and virtually no governance mechanisms. For me, what's also concerning are the sharpest edges of this: automated surveillance, predictive analytics used for pushbacks, robo-dogs, AI lie detectors. These are all things we would not be okay with in other spaces but that are somehow okay in immigration spaces, like borders and refugee camps. I'm concerned with that, but I think there are also a lot of less prominent areas that are super important, like visa triaging algorithms, voice printing technologies, and other technologies that are a little more obfuscated in the system. We need to pay attention to all of it, and most importantly to the impacts of technologies on real people who are on the move. 


What challenges have you had to overcome in your research? Have you ever faced resistance or opposition from governments or other institutions? 

One challenge is transparency and the fact that it's very difficult to know what’s happening “under the lid.” For example, for [the 2018 Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System report], part of the methodology was putting out 27 separate [Access to Information Act] requests to the Canadian government. Just last month, I received an email from one of the federal agencies saying, “Do you still want this information from 2018?” It’s now 2023! So, part of it is the lack of transparency: knowing what's happening, what some of the projects are, and who some of the actors are.  

We're dealing with an area full of techno-solutionism, where governments have signed on to the idea that more tech is better, and more data is better. Then, when you try to say we should slow down and look at the impacts on real people, sometimes these powerful actors think you're either fearmongering or stifling innovation. To that I say, yeah, maybe some innovations should be stifled. There should be some red lines around things like the use of robo-dogs, AI lie detectors, autonomous weapons, predictive policing, or algorithms used for welfare. We've skipped a few steps as a society; we haven't had these conversations collectively. 

When you try to raise these concerns, it can get uncomfortable, because there's no incentive for states and the private sector to stop or regulate this: developing border technologies is very lucrative. There's a lot of money to be made in these experiments. So, when you are the naysayer, it then becomes very difficult to get access to information or to have access to people working within these spaces. 


How do you envision the future of technology and human rights evolving in the next five to ten years? Do you see any paths leading away from the use of oppressive technology in immigration? 

Well, I'll give you the pessimistic answer first, then I'll give you the optimistic answer. Normally I’m an optimist, but I have to say that over the last few years, because I’ve been doing so much work on the ground and looking at what this is doing to people, it’s been hard to be optimistic. The starting premise is that more technology is better, and technology will solve our problems. And what’s a major problem? “Migration.” That’s in quotes, of course; that’s the way states like to position it: “Migrants are a problem; refugees are a threat. They must be controlled. How do you do it through technology?” That’s the kind of feedback loop that we’re dealing with. So, it’s very hard to change hearts and minds or talk to policymakers if that is what we’re predicating the whole system on. 

The optimist in me, though, says there's still a lot of creativity, contestation, joy, and work being done from the ground up by people who are affected themselves, who are part of mobile communities. One small example is the fellowship we're running with the Migration and Technology Monitor. We made a choice to keep it very small and grounded and to meaningfully support five people who are in current situations of displacement and doing their own work on border surveillance, rather than imposing what a project looks like on them. From Venezuela to Mexico to Uganda to Nepal to Malaysia, our fellows are looking into the impacts of surveillance at the US-Mexico border and the tracking of migrant workers in the Middle East, as well as developing their own tools to upend some of the vast power differentials in this space. They are the experts, and they need to be in the driver's seat! That gives me a lot of hope because I think it also humanizes the issue. Perhaps that's how we can move the dial on some of the impacts of technology at the border. 


Interviewer - Sebastian Rodriguez

Sebastian Rodriguez is a student at the University of Toronto’s Faculty of Information. His research explores how technology influences and shapes society, particularly through the lenses of human-computer interaction and surveillance. During the summer of 2023, Sebastian interned with metaLAB, where he explored critical and creative approaches to teaching with AI.