
Get To Know 23-24 BKC Fellow: Huili Chen

Huili Chen is a 2023-2024 Berkman Klein Center Fellow and a Ph.D. graduate of the Massachusetts Institute of Technology Media Lab, where she conducted multi-disciplinary research on human-AI interaction spanning computer science, psychology, and peace studies. Her fascination with the intersection of humans and intelligent machines began with curiosity about how the two behave and influence each other. Dr. Chen’s work responds to current approaches to developing AI systems that she considers too technology-centric: they emphasize efficiency and transactional information exchange without sufficiently integrating human factors and the richness of human relationships. She aims to reframe the field of human-AI interaction with a conceptual shift: rather than putting humans in service of AI, how can we put AI in service of human flourishing? This paradigm views AI not in isolation, but as part of a coupled human-AI system situated within broader social contexts.

What originally inspired you to explore the idea of AI as a "social catalyst" that fosters human connections?

[In] the current way of studying, developing, and examining AI systems, the focus is too much on the AI rather than the humans. Interaction gets reduced to automated information exchange between humans and AI, without human factors really being integrated. This focus on information-driven, task-oriented conversational communication between AI and humans is not really what genuine human communication is like. It doesn't reflect the depth and richness of human communication, and I don't think it's a way towards using technology to foster human flourishing. In my research, I have been pondering how to shift the way we design, develop, and evaluate the interactions between humans and AI: to make a conceptual paradigm shift from developing “human-in-the-loop” AI systems to promoting “AI-in-the-loop” human flourishing, putting AI into the human ecosystem instead of putting humans into AI technologies. So, it's a mindset shift!

To be specific, I think the moment I became intrigued by designing this social catalyst AI was about four or five years ago, when there was growing public concern that AI agents might undermine human-human relationships. This concern was especially prevalent when it came to vulnerable populations like children. There were a number of articles by psychologists, social scientists, and education practitioners arguing that social robots could replace the relationships children build with their parents or with their friends. Their argument was that the relationship children build with robots is not a genuine relationship because it's an artificial friendship or artificial emotional bond. I don’t fully agree with that, but to some degree the argument has its legitimacy. I drew inspiration from it and thought, “Oh, what if we could design an intelligent machine that performs the role of social catalyst, to foster human-human connection rather than undermine it?”

One way to do that is to directly involve multiple humans in the interaction. Also, if you look at how humans interact with each other, not every human-human relationship is constructive. For example, in education or in psychology, we know that not every parent knows how to interact with their child or how to foster their child's language or emotional development. So, what if we could have a robot there to facilitate reciprocal parent-child interaction? That's been the motivation!

You’ve talked about using AI to facilitate human-human relationships, and I'm interested in how you see technologies like generative AI playing that role. Have you encountered criticism that your approach puts too much emphasis on AI, or that it dilutes the genuineness of human relationships? How do you respond to that?

I have been surrounded by these concerns. I think when we talk about putting too much emphasis on AI systems, we are treating the AI in isolation from humans, human norms, human values, and human society. But it's critical to realize that human-AI interaction is a coupled system. Humans and AIs are not two isolated entities; they mutually influence each other, and this coupled system is further situated within a much larger social and cultural context. Whether we are the public or researchers, we need to be aware of all these layers that AI systems are embedded in. It's social, it's relational; it's not just an engineering problem. If too much focus is on the AI and we frame AI only as an engineering problem or question, that could be quite troublesome.

You've discussed high-level paradigm shifts in how we conceptualize human-AI interaction. Can you give some examples of tangible design decisions or evaluation metrics that could help achieve this reframing centered on human well-being, specifically from your multi-disciplinary perspective?

The overarching goal of my work, as I mentioned, is to make this conceptual paradigm shift from “human-in-the-loop” systems to “AI-in-the-loop” human flourishing. I also talk about human-AI interaction as a coupled system situated within a larger social and cultural context. What this entails is that it's not an engineering problem; it requires an understanding of human psychology along with an emphasis on interaction design. If we think about humans, AI, and the interaction between them, we can decouple each component and see how different disciplinary knowledge and methods are interwoven into each.

For the human part of human-AI interaction, we need to bring in psychology and social science in two ways. One is scientific investigation; the other is theory-inspired interaction and technology design. For the scientific investigation, developing a technology is important, but we also need to study the impact of that technology on humans and on human dynamics, such as how parents talk to their children. That's one example.

So, how do we study this? We use methods from psychology, focusing on the mechanism behind the technology's influence. We do that with human-subject randomized controlled trials to evaluate the technology and single out factors. That's the scientific investigation side. For theory-inspired interaction and technology design, we integrate psychological theories, such as flow theory or human-human mimicry theories, into the design and evaluation of AI assistants. Just think about ChatGPT [and] the way we communicate with it: the end goal of that communication is still pretty much information exchange. But how do we redefine the goal of the interaction? We could use flow theory to do so. In that case, the goal of the interaction is to scaffold users so that they stay in the flow zone; their skill level would increase, and they would feel emotionally engaged and satisfied. That's how we could potentially integrate flow theory into the goal of AI. That's the human part, right?
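To make the flow-theory framing concrete, here is a minimal, hypothetical sketch in Python (illustrative only, not drawn from Dr. Chen's systems): an interaction loop that adjusts a task's challenge to track a user's estimated skill, keeping the user in the flow zone between boredom and anxiety. All names and thresholds are invented for illustration.

```python
# Hypothetical flow-theory-inspired loop: keep the task's challenge
# matched to the user's estimated skill (the "flow zone" between
# boredom, too easy, and anxiety, too hard).

def update_skill(skill: float, success: bool, rate: float = 0.1) -> float:
    """Nudge the skill estimate up after a success, down after a failure."""
    return min(1.0, skill + rate) if success else max(0.0, skill - rate)

def next_challenge(skill: float, challenge: float, band: float = 0.15) -> float:
    """Clamp the challenge into a band around skill: the flow zone."""
    if challenge > skill + band:   # too hard -> anxiety: ease off
        return skill + band
    if challenge < skill - band:   # too easy -> boredom: raise the bar
        return skill - band
    return challenge               # already in the flow zone

# Toy interaction loop: the user succeeds when the challenge is within
# reach of their skill (a fixed slack stands in for a real learner model).
skill, challenge, slack = 0.3, 0.6, 0.2
for turn in range(8):
    success = challenge <= skill + slack
    skill = update_skill(skill, success)
    challenge = next_challenge(skill, challenge)
    print(f"turn {turn}: skill={skill:.2f} challenge={challenge:.2f}")
```

In this toy run, the skill estimate climbs while the challenge tracks it from inside the band; a real system would replace the fixed slack and simple updates with a learner model and behavioral signals of engagement.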

[That is] why we need the social sciences. Then we have the AI or robot part, which is more straightforward: in the end, we still need the systems. They need some sort of social and affective capabilities to support autonomous or semi-autonomous interaction with humans. Computational models are needed to understand human dynamics and behaviors, and also to guide the robot's behavior, that is, how the robot should interact with humans. That's the AI part.

Then, in human-AI interaction, we have the interaction part. To me, “interaction” is the bridge between the social science/psychology part and the AI/engineering part. Interaction design asks questions such as: How is the technology designed? Designed by whom? For whom? In what context? How is the technology being used, or even misused, by humans? The deeper question is who imagined the technology to be valuable or necessary in the first place: the scientist, or the people who use the technology? That's why interaction design methods, such as qualitative analysis and human-centered design, are critical in the framework; we use those methodologies to really understand the technology's design from the perspective of the humans who use it. That's how the three disciplinary pillars are positioned in the framework I propose in my work.

The framework you have just discussed is quite insightful; how do you see it informing your time at BKC and your goals while you're here?

I think it's two levels. By the way, I have been intrigued by the Berkman Klein Center for many years. I think between 2017 and 2020 there was a collaboration between BKC and the [MIT] Media Lab, the Ethics and Governance of AI Initiative, and that's how I got to know about BKC. It's a very trans- and interdisciplinary place, with people coming from various stages inside and outside academia from all over the world. I think this transdisciplinary, cross-sectoral culture will support what I hope to accomplish during my year there, which is twofold. The first is academic: I want to conduct exploratory research, more exploratory than my current work, focused on contextualizing human-AI interaction within broader social and cultural frameworks. Whose notions of intelligence and progress are shaping AI? What types of relationships and exchanges are seen as valuable? These questions are never objective, and the answers are never neutral; they are socially grounded in ideological history and shaped by race, gender, and other sociocultural lenses. This is why I hope to dive deeper, to sketch out a new landscape of human-AI interaction that takes these perspectives into account, and to communicate it in a way that is understandable to AI researchers, designers, and the public. I think that's why BKC is really the ideal place to do that!

The second is that we also have practitioners coming in; they are not just academics, [they] are people who focus on problem solving, people working in regulation, law, and business. I'm hoping my research can be linked to real-world implications and translated for a broader audience. That's the research side. On the personal side, it's similar: I'm hoping to explore ideas and try out new things, especially at this stage of my life.

Interviewer

Mohsin Yousufi is a Ph.D. student in Digital Media at the Georgia Institute of Technology. During the summer of 2023, he interned with the Berkman Klein Center’s Institute for Rebooting Social Media.