#FellowFriday! Get to know the 2017-2018 Fellows

This series of short video interviews highlights the new 2017-2018 Berkman Klein fellows.

Published Friday, January 26

Soroush Vosoughi is creating algorithms that can track and counter the spread of misinformation on social networks.

Tell us about a research question you’re excited to address this year and why it matters to you.

The question I'm really interested in and what I'm working on as a fellow at Berkman and also as a postdoc at MIT Media Lab is interventions to dampen the effects of misinformation on social media. My PhD focused on automatic detection of rumors on social media.

Right now I'm interested in intervention strategies, so one idea I have is maybe an automated tool like a bot on Twitter and on Facebook that would detect misinformation using the algorithm I developed for my thesis. And then contact people who are on the path of the misinformation to let them know that they might be exposed to this thing. Kind of vaccinating them before they're actually exposed to the virus of misinformation. Now I think it's become pretty obvious that rumors and misinformation in different domains are super important, and damaging to society, specifically rumors in the political domain. They undermine the core democratic values of our society, because if you don't have a shared truth with the other people who are voting in the same election as you then you're not judging the candidates based on the same facts.

What excites you or scares you the most about technology and its potential impact on our world?

I think technology is always neutral, almost always neutral. So it can be used for good or evil. What excites me, and what makes me fearful is actually the same technology, which is specifically recent advances in deep neural networks. A lot of the problems in classical AI have already been solved using this new method in the last decade, so problems that we thought we would not be able to solve in a century we've already solved. So that's really exciting. But again, the same algorithms and systems that we've used to solve these problems, they're big black boxes. We never know exactly what goes in them. And so if you give them too much power to govern our society, they might actually make decisions that we would never understand, and that we'll never be able to interpret — and that scares me.

***

Published December 15

Doaa Abu-Elyounes is a doctoral student at Harvard Law School studying how judges are beginning to use AI and algorithms in the courtroom.

Tell us about a research question you’re excited to address this year and why it matters to you. 

In the criminal justice system, judges are using algorithms ("risk assessment tools," they call them) to determine how risky defendants are. I'm focusing on the pre-trial stage, which is the first stage at which defendants encounter judges. Judges need to decide whether to keep defendants in jail to wait for the end of the trial, or to release them with or without conditions. The tools being used now are based on regression analysis, and I'm trying to estimate the impact of artificial intelligence on these tools. What makes me passionate is to improve our broken criminal justice system, and to try to see how we can benefit from this emerging technology, and maybe be a little bit more just.

What excites you the most about technology and its potential impact on our world?

I'm hopeful about the future of technology. Technology is getting better and better. I'm a blind person, and it definitely changed my life, so I'm hopeful that it's going to change others' lives. People who maybe made a mistake in their lives still deserve fair due process. It's the law. And I'm hoping that technology will help us reach that goal faster.

***

Luke Stark is a post-doc in sociology at Dartmouth College who explores how psychological techniques are incorporated into social media platforms, mobile apps, and AI systems.

Tell us about a research question you’re excited to address this year and why it matters to you. 

One research question I'm really interested in this year is how we think about expanding the ethical horizons of science and technology, especially around STEM education, computer science, and engineering. I think it's important because these technologies that get designed and built by computer scientists and engineers have a huge impact on our world. And they have a lot of say in how social life gets organized these days. I think it's important for all of us to understand the social and ethical implications of those technologies.

How did you become interested in this topic?

I remember when I was in my early 20s, I had a job working for the government of Ontario in communications. And I sort of realized that the politicians were really interested in how newspapers got laid out. They cared about where the headline was on the page, how many column inches they got. And it just underscored to me the truth of Marshall McLuhan's axiom about the medium being the message. The newspaper, which is also a kind of technology, was important to the way these politicians' values and messages got out. That really helped spark my interest. I think in the last year or so there have been so many stories in the news about why the ethics of technology are important. Debates about "fake news," about persuasion, about the way that social media shaped electoral politics, and that kind of thing. I don't think a lot of people realized the importance of these questions in their everyday digital media use before 2017. But I think it's really hard to ignore those things now.

***

Published December 8

Pritha Chatterjee is researching the privacy and public health implications of India's new universal ID system.

Tell us about a research question you’re excited to address this year.

I am looking at how population health can be improved with the use of technology, particularly in low- and middle-income countries and among "disadvantaged" populations in high-income countries. I am looking especially at maternal health outcomes.

What in particular are you looking at with regard to technology and health in India?

We have this universal identification system in India called Aadhaar, which is being linked to track people, and that has potentially a lot of use in public health. So for example, with our tuberculosis program, the financial assistance that is provided is being linked through [the ID system]. The privacy implications of this are really huge. So I guess, whatever technology can do, we should also be wary of those very same things at the same time, and that balance -- I don't know how we are going to find it. I'm working on it myself. The potential is huge, but if you say, in a country like India, that you're not going to provide services if a person does not have that ID yet, that is a problem, because implementation is a huge challenge. Secondly, the privacy part of it really scares me, because you're linking all sorts of data through this one ID, and the government has access to all of it. I don't think enough is being done to address that, or even to research how to mitigate those concerns. How can you use the technology for good, but also reassure citizens? There should be a mechanism to protect the privacy of citizens, and I don't think enough is being done on that front yet.

***

Published December 8

Chien-Kuan Ho is a prosecutor who researches cybercrime and the new challenges posed by digital anonymity and encryption.

Tell us about a research question you’re excited to address this year and why it matters to you.

This year my research will primarily focus on how to more effectively investigate cybercrime. With the development of technology, many criminals use new cyber tools to commit crimes, such as mobile malware and ATM fraud. The growth of cybercrime remains a great threat to security in our world. Therefore, law enforcement authorities have to improve their capability to investigate cybercrime more effectively.

Why should people care about this issue?

With the massive use of Internet technology, everyone could be a potential victim. The reality is that everyone who is connected to the Internet is vulnerable to cyber attack. It's not only big companies that are under threat. Individuals who don't think they have much to offer hackers can also be targeted. So even if you don't think you are a big target, you should still care about this risk.

What excites you the most about technology and its potential impact on our world? What scares you the most?

Modern technology is certainly fascinating. Social networks have allowed us to share almost anything, anytime, anywhere. Smartphones and the Internet have dramatically changed the way we communicate. But criminals may also use these technologies to commit crimes. What scares me the most is that the increasing misuse of anonymity and encryption services on the Internet has become a critical impediment to the investigation and prosecution of criminals. If law enforcement cannot keep up with the progress of technology, our world may become a paradise for criminals.

***

Published December 1

James Wahutu studies the impact that media reporting on mass atrocities has on our understanding of human rights, collective memory, and cross-cultural exchange.

Tell us about a research question you’re excited to address this year and why it matters to you.
I'm interested in two research questions. The first is the use of images of atrocities by news organizations. Primarily, I'm interested in the efficacy of this, and in the idea that we can consume African death and pain while sitting in the confines of our homes. So what does that then mean for African victims, and why is it okay to do this? But most importantly, who owns these pictures of African pain, and what does that then mean for advocacy issues?

The next one, which I'm also interested in and should be starting to act on in the spring, is the use of perpetrators as sources when news is being collected and written. I'm interested in the relationship between quoting a perpetrator of a mass atrocity and the risk of the intensification of violence. In my prior work, it turns out that perpetrators are pretty media savvy: they know that if a Western news organization quotes them, it gives them the kind of cultural capital that they then hope to change into economic capital during negotiation processes, and hopefully score a seat in the new regime and the new government that should be coming up.

Why is this important to you?
It's important for us as Africans to be able to tell our stories, but also realize who is telling our stories, because whoever is telling our story owns that particular story. In my undergrad career, I realized that I kept quoting Western academics that were writing about atrocities in African countries, but not necessarily talking to Africans. The challenge is in changing how we raise awareness about mass atrocities and thinking about the unintended consequences of how we've been doing it thus far.

***

Published December 1

Keith Porcaro works to enable greater participation by all communities in an age of increasingly complex systems.

Tell us about a research question you’re excited to address this year and why it matters to you.
The big part of my work focuses on how legal norms form in a digitizing society. The narrower question that I'm working on here is: how can communities take control of and make decisions about the data that they're creating and the data that's being created about them? How can we use existing vehicles, like trusts, as a way to first give communities the power to make these decisions and to protect their data against uses that they don't want? The other side of that is, once you've given communities that power, how do you help them understand what the surface area of those decisions is, and what the ramifications of some of their decisions might be?

I kind of think of it as two sides of a coin. On one side, it's how can we use law to deal with new technology, to deal with the fact that increasingly more of our lives are digital or online. On the other side, it's how can you use technology to understand how complicated systems work, like law or anything else, especially for people who don't have the time to think about this professionally.

What's a good example of a complex issue?
For somebody who is just facing a legal issue for the first time, or just finding out that somebody is doing a census in their neighborhood, the expectation we should have is not that they should become a lawyer, go to law school, or learn how databases work. It should be: what interfaces, what explanations, what structures can we use to help people understand enough of the system to be able to make an informed decision about how it should work?

***

Published November 20

Jie Qi is hacking the patent system to make innovation more equitable and impactful.

Tell us about a research question you’re excited to address this year and why it matters to you.
I'm really excited about exploring open innovation, specifically around patents, and how we can hack the patent system to support the sharing of inventions rather than closing them off. The reason I care deeply about this is that, as a maker, an entrepreneur, and an individual who's not a giant company, I'm interested in exploring alternative ways to create and make an impact with my inventions. For example, one of the inventions that I created as part of my PhD research is this idea of stickers that are also electronics. We took flexible printed circuit boards, which we find in our cell phones or laptops, and we added conductive glue to the bottom of them, such that when you take the sticker, which is a circuit board, and stick it down to conductive ink or conductive tape, you actually build circuits. But it feels like you're playing with stickers and tape and pens, and that is a creative, crafty way to learn electronics.

What excites you and scares you the most about technology and its potential impact on our world?
Technology is very powerful. It's a tool, which means it can do wonderful things, and it can do scary things. It itself is not bad. However, with the many forces that are at play in the world I can see people or institutions with means getting control of these technologies and using them in a negative way that perhaps the original inventors didn't imagine or perhaps none of us have ever imagined. What I'm excited about, as someone who creates technology and teaches people electronics and programming, is that it is extremely powerful and it allows you to take the things that are in your imagination and make them real. As an educator, when I see people learn something new and create something that they might not have imagined they could, it's extremely empowering. For me, technology is a way to make you see that the impossible is possible.

***

Published November 20

Kathy Pham is bridging the gaps between software engineers and policymakers.

Tell us about a research question you’re excited to address this year and why it matters to you.
Having worked at both a large tech company and within the federal government, I constantly think about how we build products that are responsible and ethical and take into account our users. Another focus is the intersection of government and technology. How do we get policy folks interested in, and understanding, technology, as well as getting technologists, whether they're engineers, or product managers, or designers, interested in public service or working in the federal government? In my early days as a software engineer, the conversation around users and the user experience of something, or even the broader social impact of what we build, wasn't always there.

What are some ideas for addressing this topic?
One of the things that has come up here at Berkman is attacking it at the curricular level: really teaching our computer scientists and engineers how to think critically about the long-term, or even short-term, effects of what we build. Think about some of the implications of collecting data. Think about what happens when the data is stored long term. Think about how something can be misused or not used the way we intended it to be used. What can we do in the policy space that makes sense? It gets tricky, because we get into the free speech realm: we don't want to restrict the ability to build products or people's freedom of speech on different platforms, but what is the responsibility of tech companies in looking at their users?

What excites you the most about technology and its potential impact on our world?
How can we use technology to provide better government services for people, people who can't go in person to different government service locations to get care, whether they're veterans or others who need services? How can we use technology to really make their lives a lot better? I'm very excited to think about the different ways that technology can be used to provide care services to our most vulnerable populations and the people who need help the most.

***

Published November 13

Jenny Korn is examining new and evolving representations of race and identity, both online and off.

Tell us about a research question you’re excited to address this year and why it matters to you.

I'll be looking at the way people talk about race and gender, both online and in person. I've pretty much always been interested in issues of race because I'm a woman of color who grew up in Alabama. I was reminded of my race, in both positive and negative ways, from a very early age and ever since then.

Why should people care about this issue?

Talking about race more openly promotes -- my hope is that it promotes -- a more just society eventually. Because if we can't talk about race, then we definitely can't talk about racism. And so we have to get to the point where talking about race is not uncomfortable and doesn't feel forced, but rather feels the same as saying what your gender is or what your sexual orientation is -- something you can say naturally and comfortably, so that all of us can discuss what it means to everybody across different levels of society.

What excites you the most about technology and its potential impact on our world?

The Internet has definitely changed the way that we socialize but also the ways that we interact with what we believe race is. We're not only consumers of the Internet. We're also producers. We actually can create different ways to discuss race. We can share different representations of race. And to me that's really exciting because we are able to reduce the distance and the time and the speed to creating those representations online instead of relying on publishers for books, or producers and distributors for movies. We can overlook that and use the Internet as the way to produce and broadcast and share those representations. And that means we can change old stereotypes and make new representations of what we believe it is to be of color, or to be white. It's a brand new medium in terms of how far we can get this kind of message. I'm excited by the possibilities.  

***

Published November 13

Nathan Kaiser is a lawyer studying AI and Asia.

Please introduce yourself and tell us about a research question you’re excited to address this year.

I'm a fellow at the Berkman Klein Center. I'm very happy to be here. I'm originally Swiss but spent the past years in Asia. I'm looking at AI, always from a China and maybe a larger Asian angle. The research question is -- it's partially copy/paste from the 10 or 15 year old question, "What about the Internet in China and outside?" And now the question is, "What about AI in China and outside of China?" There's a lot of stuff to be looked at.

Why should people care about this issue?

AI will have as big an impact on society as the Internet did. And society always means me, you, and the family, and everybody around us. Just as with the Internet years ago, it will happen over time. It would not be wise to say the Internet is not for me, or to say nowadays AI is not for me, because it's going to be around you anyway. And so from a personal point of view, or a company point of view, or even a family point of view, I think it makes sense to start looking around and see what's going on. Does it help you? Does it hurt you? Should you use it? Should you not use it? Then once it's clear that you should use it, how do you use it? What are the tools, what are the risks for employees, risks for companies, risks for kids?

What scares you the most about technology and its potential impact on our world?

I'm always worried about the people who are not able to enjoy a technology. That scares me because it creates an even larger gap. You don't only have rich people and poor people. You have an additional divide between using technology and not using technology. Being able, and having the money, to use technology will make the rich richer and the poor poorer. That's something that scares me because it creates tension, and we've seen that over the past 10 or 20 years.

***

Published November 6, 2017

Joanne K. Cheung is an artist and designer studying at the Harvard Graduate School of Design.

Tell us about a research question you're excited to address this year and why it matters to you.

This year I’m developing an analytical framework for looking at public space -- physical space, all the stuff around us -- and discourse on the internet. My background is in the fine arts, so I’ve always cared about how to communicate something, how something appears to someone not from my own discipline. A lot of this came from, well, I guess two things. One is the very jarring experience of the past election, and realizing that, geographically, the country that I live in is very different from what I thought it was. Also, this summer I became an American citizen, so learning everything about the democratic process was really interesting, and I wanted to understand how my own discipline intersected with the political process.

Why should people care about this issue?

There is no opting out of existing in this system. And now that everyone is so deeply connected, the other side of that is that we’re all more alienated from the subjects of our actions. Whether those actions are intentional, unintentional, or accidental, I think making those connections visible is really important now.

What excites (or scares) you most about technology and its potential impact on our world?

I always go back to that William Gibson quote, “the future is already here, it’s just not evenly distributed,” but then I go back and think about my own discipline, which is dealing with land and buildings, and I was thinking, well, the future doesn’t distribute itself, has land ever been evenly distributed? I can’t think of something that has. A lot of that comes down to human agency, it comes down to decisions humans make. I think it’s not technology doing the work, it’s people doing the work, and so, maybe that gives me some worry, it scares me because I want to define who those humans are, but it also gives me a little bit of hope because I’m a human, we all are, so there is potential for making change and making a difference.

***

Published November 6, 2017

Emad Khazraee is a sociotechnical information scientist and an assistant professor in the School of Information (iSchool) at Kent State University.

Tell us about a research question you're excited to address this year and why it matters to you.

Broadly speaking, I’m interested in how human collectives use information technology to achieve their collective goals. I look at two levels. At one level, I look at very large collectives and how they use information technology for social transformation, for example, how activists use information technology to challenge authorities. At another level, I look at very tightly connected communities, which I call communities of practice, and how they use information technologies to produce knowledge. At the Berkman Klein Center, I am looking to understand how we can theorize the dynamics of the evolution of the tools and methods that activists use to challenge authorities. On a personal side, I’m Iranian, and I have seen a lot of transformations happening in Iranian society in recent years. We’ve seen a very young, educated population use information technology to advance the state of society.

Why should people care about this issue?

We are living in an era in which the pace of technological change and advancement is so high that some people have become anxious about the impact of technology on our society. It’s very important to see whether it helps us to improve our society or not. I think that’s why it matters for the average person to see, in many contexts, such as oppressive environments, whether the use of information technology can be a force shifting the balance towards a more just and progressive society, or whether it might give oppressive governments more tools to repress and restrict human freedom.

***

Published October 27, 2017

Desmond Upton Patton, PhD, MSW is an Assistant Professor of Social Work at Columbia University and a faculty affiliate of the Data Science Institute and the Social Intervention Group at Columbia University.

Tell us about a research question you're excited to address this year and why it matters to you.
This year I'm really trying to understand how communication on social media leads to offline violence. So I'm studying a Twitter dataset of young people in Chicago to better understand how things like grief and trauma and love and happiness all play out on Twitter and the relationship between that communication and offline gun violence. 

I started my research process in Chicago and I have been just completely troubled by the amount of violence that happens in the city. And one of the ways in which that violence happens or occurs is through social media communication. And so I want to be a part of the process of ending violence through learning how young people communicate online.  

***

Published October 27, 2017

Jenn Halen is a doctoral candidate in Political Science at the University of Minnesota and a former National Science Foundation Graduate Research Fellow.

Tell us about a research question you're excited to address this year and why it matters to you.
I’m working on the ethics and governance of artificial intelligence project here at Berkman Klein. There are a lot of questions as to how exactly incorporating this new technology into different social environments is going to affect people, and I think one of the most important things is getting the perspectives of the people who are actually going to be impacted. So, I’m looking forward to participating in some early educational initiatives and some discussions that we can post online in blog posts, to help people feel more familiar with this subject and more comfortable, because it can be really intimidating.

Why should people care about this issue?
Right now, early versions of machine learning and artificial intelligence applications are being used in institutions ranging from the judicial system to financial institutions, and they’re really going to impact everyone. I think it’s important for people to talk about how these tools are being implemented and what the consequences are for them, and that we should have an open discussion, and people can’t do that if they’re unfamiliar with the technology or why it’s being employed. I think everyone needs to have at least a basic familiarity with these things, because in ten years there’s not going to be an institution that doesn’t use them in some way.

How did you become interested in this topic?
I grew up in a pretty low-income community that didn’t have a lot of access to these technologies, and so I was very new to even using a computer when I got to college. It was something that was hard for me initially, but I started really getting interested in it, partially because I’m a huge sci-fi fan now, and I think that sci-fi and fiction really open up your eyes to both the opportunities and the potential costs of using advanced technologies. I wanted to be part of the conversation about how we would actually approach a future where these things were possible, and to make sure that we would use them in a way that would benefit us, not these scarier, more dystopian views of what could happen.

What excites you most about technology and its potential impact on our world?
Software is so scalable that we can offer more resources and more information to so many more people at a lower cost. We’re also at a time when we have more information than we’ve ever had in history, so things like machine learning and artificial intelligence can really help open up the answers we can get from all of that data, including some very non-intuitive answers that people just have not been able to find themselves.

What scares you most?
I think the thing that scares me most is that artificial intelligence software is going to be employed in institutions and around populations that don’t understand both what it has to offer and its limitations. It will just be taken as objective fact, or a scientific opinion that you can’t question, when it’s important to realize that this is something crafted by humans, that can be fallible, that can be employed in different ways and have different outcomes. My biggest fear is that we won’t question it, and that these things will just be deployed without any kind of public dialogue or pushback if they have negative consequences.