Aparna Balagopalan & Greg M. Epstein Q&A:

The Berkman Klein Center is thrilled to welcome two new Affiliates whose work stretches from the future of healthcare AI to the evolving role of technology in our lives. Aparna Balagopalan brings fresh insight into fair, interpretable, and responsible AI in healthcare, while Greg M. Epstein, humanist chaplain at Harvard and MIT, invites us to rethink technology’s role in our moral imaginations. We sat down with both for a quick Q&A about what drives their research.


Which community or communities do you hope to most impact with your work?  

Aparna Balagopalan (AB): Much of my prior work on improving AI reliability has been interdisciplinary, carried out in collaboration with experts in machine learning, healthcare, and law. I hope my work can continue to contribute to efforts in such multidisciplinary spaces.

Greg M. Epstein (GE): I’m very gratified that my fall 2024 book Tech Agnostic seems to be making an impact in the tech world, in the sense of highlighting the need for tech and AI leaders to think about religion — and especially about how the tech and AI worlds are themselves behaving much like religion at the moment. In other words, Silicon Valley tech and AI are making a broad and deep impact on many of the ways in which we understand what it means to be human today. I’d like to deepen such conversations within tech circles. And I’m also increasingly interested in talking with religious communities (and explicitly secular/nonreligious/humanist communities like my own) about tech — in particular about how the world of tech and AI is increasingly encroaching on religion’s territory of thinking about and taking dramatic action to shape the nature of humanity.

What’s something about your work that you think would surprise people?

AB: Perhaps how often techniques for building trustworthy AI systems, both in my work and in the ML space more broadly, are tied to understanding human processes rather than to model optimization: for example, improving data-related design choices or strengthening human-AI collaboration loops. Humans truly play a pivotal role, even within the scope of automation.

GE: Well, I am an atheist who does public work on the ethics and communal practices of atheism and agnosticism, and who advises people on ethical and existential concerns in a professional role usually reserved for religious clergy. Over the past several years I’ve become an avid researcher and writer on the relationship between technology and humanity. I’m not sure we need to get more surprising than that!

Why is your work important right now? 

AB: Given the wide-ranging impact of generative AI, I strongly believe it is important to understand why models fail and to adapt decision-making processes to address those failure modes. For example, consider an online content moderation system automated using LLMs. Verifying whether outputs from such a system are right for the right reasons becomes critical given the scale at which predictions are made. I'm excited to push forward on work in this direction.

GE: There’s no question that Silicon Valley and other communities working on and around AI are significantly altering the human condition and changing our sense of what it means to be human, right? But there are huge and important questions about how much change is occurring, and whether the changes are tending towards the positive or the negative. I think we haven’t grappled nearly enough as a society with the fact that tech has become a moral issue, even one of the defining moral issues of our time.

Aparna, what questions do you think researchers and practitioners should be asking at the data collection stage—especially in healthcare contexts—that are not being asked today?

AB: I think we need to carefully consider whether we want to replicate historical decision-making patterns when building models in the healthcare space, and curate data with this question in mind. Additionally, I think that centering the voices of patients, clinicians, and other experts in the data collection stages is crucial.

In your work on fair ranking and interpretability, where do you see the biggest disconnect between what technical tools measure and what people actually need to trust a system?

AB: Fairness, interpretability, and trust are complex constructs with varying definitions and dimensions, especially in a global context. Metrics and tools can flag glaring issues, but AI systems often evolve continuously and are deployed in differing contexts. Technical tools may miss such nuance and lead to miscalibrated trust. For example, techniques proposed to improve interpretability in general-purpose ML models may be insufficient in healthcare, where high transparency is required given the safety-critical nature of decisions.

Thus, domain specificity, continuous qualitative and quantitative evaluations, and collecting diverse human feedback become requirements rather than nice-to-haves to bridge such gaps.

Greg, as a Humanist chaplain working closely with students and researchers, what themes or concerns about technology come up most often in your conversations with young people?

GE: Dramatic technological change — especially when compounded by climate change and what I think of as changes in our individual and collective values (around religious belief, sexuality and gender, democracy and economics) — is creating a future that stands to look different from anything in previous human societies. Young people are actively trying to figure out: where does my life fit into a world that will be different, by the time I'm older, from anything I might be able to predict or even imagine today? Who do I want to be, in the face of so much unpredictability? How do I want to try to live? What do I want to take part in building? That's what most of the conversations I'm having come down to.

Your newest book suggests we need new “ethical authorities” for a technological age. What would a more responsible or human-centered authority structure in tech actually look like in practice?

GE: I don't think there's any one model. We need to hear from thoughtful religious leaders, secular philosophers, artists, psychologists and social workers, political and labor activists, and many others. Certainly, as I made a great effort to explore in Tech Agnostic, we need to recognize that many of the top voices and leaders in human-centered tech are women or people of color, or both. But overall we need to cultivate lots of broadly distributed, interconnected, emotionally intelligent leadership rather than any one particular (type of) savior. As MLK said in his Letter from Birmingham Jail, we are "caught in an inescapable network of mutuality, tied in a single garment of destiny."
