#FellowFriday! Get to know the 2017-2018 Fellows

This series of short video interviews highlights the new 2017-2018 Berkman Klein fellows. Check back every week for new additions!

 

published November 20, 2017

Jie Qi is hacking the patent system to make innovation more equitable and impactful.

Tell us about a research question you’re excited to address this year and why it matters to you.
I'm really excited about exploring open innovation, specifically around patents, and how we can hack the patent system to support sharing of inventions rather than closing them off. The reason I care deeply about this is that as a maker, an entrepreneur, and an individual who is not a giant company, I'm interested in exploring alternative ways to create and make an impact with my inventions. For example, one of the inventions I created as part of my PhD research is the idea of stickers that are also electronics. We took flexible printed circuit boards, the kind found in our cell phones and laptops, and added conductive glue to the bottom of them, so that when you take the sticker, which is a circuit board, and stick it down onto conductive ink or conductive tape, you actually build circuits. It feels like you're playing with stickers and tape and pens, and that is a creative, crafty way to learn electronics.

What excites you and scares you the most about technology and its potential impact on our world?
Technology is very powerful. It's a tool, which means it can do wonderful things and it can do scary things; in itself it is not bad. However, with the many forces at play in the world, I can see people or institutions with means getting control of these technologies and using them in negative ways that perhaps the original inventors, or perhaps none of us, ever imagined. What excites me, as someone who creates technology and teaches people electronics and programming, is that technology is extremely powerful and allows you to take the things in your imagination and make them real. As an educator, when I see people learn something new and create something they might not have imagined they could, it's extremely empowering. For me, technology is a way to make you see that the impossible is possible.

***

published November 20, 2017

Kathy Pham is bridging the gaps between software engineers and policymakers.

Tell us about a research question you’re excited to address this year and why it matters to you.
Having worked both at a large tech company and within the federal government, I constantly think about how we build products that are responsible and ethical and take our users into account. Another focus is the intersection of government and technology. How do we get policy folks interested in, and understanding, technology, and how do we get technologists, whether they're engineers, product managers, or designers, interested in public service or working in the federal government? In my early days as a software engineer, the conversations around users and the user experience of something, or even the broader social impact of what we build, weren't always there.

What are some ideas for addressing this topic?
One of the things that has come up here at Berkman is attacking it at the curriculum level: really teaching our computer scientists and engineers how to think critically about the long-term, or even short-term, effects of what we build. Think about some of the implications of collecting data. Think about what happens when the data is stored long term. Think about how something can be misused or used in ways we never intended. What can we do in the policy space that makes sense? It gets tricky, because we get into the free speech realm: we don't want to restrict the ability to build products or people's freedom of speech on different platforms, but what is the responsibility of tech companies in looking out for their users?

What excites you the most about technology and its potential impact on our world?
How can we use technology to provide better government services for people, people who can't go in person to different government service locations to get care, whether they're veterans or others who need services? How can we use technology to really make their lives a lot better? I'm very excited to think about the different ways technology can be used to provide care services to our most vulnerable populations and the people who need help the most.

***

published November 13, 2017

Jenny Korn is examining new and evolving representations of race and identity, both online and off.

Tell us about a research question you’re excited to address this year and why it matters to you.

I'll be looking at the way people talk about race and gender, both online and in person. I've pretty much always been interested in issues of race because I'm a woman of color who grew up in Alabama. I was reminded of my race, in both positive and negative ways, from a very early age and ever since.

Why should people care about this issue?

My hope is that talking about race more openly eventually promotes a more just society. Because if we can't talk about race, then we definitely can't talk about racism. So we have to get to the point where talking about race doesn't feel uncomfortable or forced, but rather feels the same as saying what your gender or sexual orientation is, said naturally and comfortably so that all of us can discuss what it means to everybody across different levels of society.

What excites you the most about technology and its potential impact on our world?

The Internet has definitely changed the way we socialize, but also the ways we interact with what we believe race is. We're not only consumers of the Internet; we're also producers. We can actually create different ways to discuss race. We can share different representations of race. And to me that's really exciting, because we are able to reduce the distance and the time it takes to create those representations online, instead of relying on publishers for books or producers and distributors for movies. We can bypass that and use the Internet as the way to produce, broadcast, and share those representations. And that means we can change old stereotypes and make new representations of what we believe it is to be of color, or to be white. It's a brand new medium in terms of how far we can carry this kind of message. I'm excited by the possibilities.

***

published November 13, 2017

Nathan Kaiser is a lawyer studying AI and Asia.

Please introduce yourself and tell us about a research question you’re excited to address this year.

I'm a fellow at the Berkman Klein Center. I'm very happy to be here. I'm originally Swiss but have spent the past years in Asia. I'm looking at AI, always from a Chinese, and maybe a larger Asian, angle. The research question is partially copied and pasted from the question of 10 or 15 years ago, "What about the Internet in China and outside?" And now the question is, "What about AI in China and outside of China?" There's a lot of stuff to be looked at.

Why should people care about this issue?

AI will have as big an impact on society as the Internet did, and society always means me, you, the family, and everybody around us. Just as it would not have been wise years ago to say the Internet is not for me, it is not wise nowadays to say AI is not for me, because it's going to be around you anyway. So from a personal point of view, or a company point of view, or even a family point of view, I think it makes sense to start looking around and see what's going on. Does it help you? Does it hurt you? Should you use it? Should you not use it? And once it's clear that you should use it, how do you use it? What are the tools, and what are the risks for employees, for companies, for kids?

What scares you the most about technology and its potential impact on our world?

I'm always worried about the people who are not able to enjoy a technology. That scares me because it creates an even larger gap. You don't only have rich people and poor people; you have an additional divide between using technology and not using technology. Being able, and having the money, to use technology will make the rich richer and the poor poorer. That's something that scares me because it creates tension, and we've seen that over the past 10 or 20 years.

***

published November 6, 2017

Joanne K. Cheung is an artist and designer studying at the Harvard Graduate School of Design.

Tell us about a research question you're excited to address this year and why it matters to you.

This year I'm developing an analytical framework for looking at public space, so physical space, all the stuff around us, and discourse on the internet. My background is in the fine arts, so I've always cared about how to communicate something, how something appears to someone not from my own discipline. A lot of this came from, well, I guess two things. One is the very jarring experience of the past election and realizing that, geographically, the country I live in is very different from what I thought it was. Also, this summer I became an American citizen, so learning everything about the democratic process was really interesting, and I wanted to understand how my own discipline intersects with the political process.

Why should people care about this issue?

There is no opting out of existing in this system and I think now that everyone is so deeply connected... the other side of that is we’re all more alienated from the subjects of our actions, so whether they’re intentional, unintentional, or accidental, I think making those connections visible is really important now. 

What excites (or scares) you most about technology and its potential impact on our world?

I always go back to that William Gibson quote, "the future is already here, it's just not evenly distributed," but then I think about my own discipline, which deals with land and buildings, and I wonder: the future doesn't distribute itself, and has land ever been evenly distributed? I can't think of anything that has. A lot of that comes down to human agency, to the decisions humans make. I think it's not technology doing the work, it's people doing the work. Maybe that gives me some worry, it scares me because I want to define who those humans are, but it also gives me a little bit of hope because I'm a human, we all are, so there is potential for making change and making a difference.

***

published November 6, 2017

Emad Khazraee is a sociotechnical information scientist and an assistant professor in the School of Information (iSchool) at Kent State University.

Tell us about a research question you're excited to address this year and why it matters to you.

Broadly speaking, I'm interested in how human collectives use information technology to achieve their collective goals. I look at two levels. At one level, I look at very large collectives and how they use information technology for social transformation, for example, how activists use information technology to challenge authorities. At the other level, I look at very tightly connected communities, which I call communities of practice, and how they use information technologies to produce knowledge. At the Berkman Klein Center, I am looking to understand how we can theorize the dynamics of the evolution of the tools and methods that activists use to challenge authorities. On a personal side, I'm Iranian, and I have seen a lot of transformations happening in Iranian society in recent years. We've seen a very young, educated population use information technology to advance the state of society.

Why should people care about this issue?

We are living in an era in which the pace of technological change and advancement is so high that some people have become anxious about the impact of technology on our society. It's very important to see whether it helps us improve our society or not. I think that is why it matters for the average person: to see, in many contexts such as oppressive environments, whether the use of information technology can be a force shifting the balance toward a more just and progressive society, or whether it might give oppressive governments more tools to repress and restrict human freedom.

***

published October 27, 2017

Desmond Upton Patton, PhD, MSW is an Assistant Professor of Social Work at Columbia University and a faculty affiliate of the Data Science Institute and the Social Intervention Group at Columbia University.

Tell us about a research question you're excited to address this year and why it matters to you.
This year I'm really trying to understand how communication on social media leads to offline violence. So I'm studying a Twitter dataset of young people in Chicago to better understand how things like grief and trauma and love and happiness all play out on Twitter and the relationship between that communication and offline gun violence. 

I started my research process in Chicago, and I have been just completely troubled by the amount of violence that happens in the city. One of the ways in which that violence occurs is through social media communication. So I want to be a part of the process of ending violence by learning how young people communicate online.

***

published October 27, 2017

Jenn Halen is a doctoral candidate in Political Science at the University of Minnesota and a former National Science Foundation Graduate Research Fellow.

Tell us about a research question you're excited to address this year and why it matters to you.
I'm working on the Ethics and Governance of Artificial Intelligence project here at Berkman Klein. There are a lot of questions as to how exactly incorporating this new technology into different social environments is really going to affect people, and I think one of the most important things is getting the perspectives of the people who are actually going to be impacted. So, I'm looking forward to participating in some early educational initiatives and some discussions that we can post online in blog posts and the like, to help people feel more familiar with this subject and more comfortable, because it can be really intimidating.

Why should people care about this issue?
Right now this technology, or early versions of machine learning and artificial intelligence applications, is being used in institutions ranging from the judicial system to financial institutions, and it's really going to impact everyone. I think it's important for people to talk about how these systems are being implemented and what the consequences are for them, and that we should have an open discussion; people can't do that if they're unfamiliar with the technology or why it's being employed. I think everyone needs to have at least a basic familiarity with these things, because in ten years there's not going to be an institution that doesn't use them in some way.

How did you become interested in this topic?
I grew up in a pretty low-income community that didn't initially have a lot of access to these technologies, so I was very new to even using a computer when I got to college. It's something that was hard for me at first, but that I started really getting interested in, partially because I'm a huge sci-fi fan now, and I think that sci-fi and fiction really open your eyes to both the opportunities and the potential costs of using different advanced technologies. I wanted to be part of the conversation about how we would actually approach a future where these things were possible, and to make sure that we would use them in a way that would benefit us rather than lead to the scarier, more dystopian visions of what could happen.

What excites you most about technology and its potential impact on our world?
Software is so scalable that we can offer more resources and more information to so many more people at a lower cost. We're also at a time when we have more information than we've ever had in history, so things like machine learning and artificial intelligence can really help open up the answers we can get from all of that data, including some very non-intuitive answers that people just have not been able to find themselves.

What scares you most?
I think the thing that scares me most is that artificial intelligence software is going to be deployed in institutions and around populations that don't understand both what it has to offer and its limitations. It will just be taken as objective fact or a scientific opinion that you can't question, when it's important to realize that this is something crafted by humans, that can be fallible, that can be employed in different ways and have different outcomes. My biggest fear is that we won't question it, and that these systems will just be deployed without any kind of public dialogue or pushback if they have negative consequences.

Last updated November 20, 2017