Prof. Upol Ehsan makes AI systems responsible and explainable so that people who aren't at the table don't end up on the menu. He is a Computer Science Professor at Northeastern University and was a Berkman Klein Fellow at Harvard University, where he remains a Faculty Associate.

He earned his PhD at Georgia Tech, where his work invented the field of Human-centered Explainable AI. His award-winning work has been internationally recognized at top-tier venues like ACM CHI, AAAI, and HCII, and has received national and international press coverage in outlets like MIT Tech Review, CNN, and VentureBeat. He has served on multiple international government-level AI task forces spanning the US, EU, and South Asia. His work has been used by the United Nations, NIST, the Mozilla Foundation, and the Center for Democracy and Society to govern AI systems. He has worked at Microsoft Research (FATE), IBM Research, and Google. His work is generously supported by the NSF, DARPA, A2I, IBM, and the World Bank, along with prestigious fellowships like the Prime Minister's Innovator Award.

He led the editorial efforts for the inaugural ACM TiiS Journal on HCXAI and has spearheaded the organization of the longest-running workshop series at CHI (the Human-centered XAI, or HCXAI, workshops) every year since 2021. Outside research, he has led high-performing teams in management consulting and serves on the boards of startups. He is also an advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor.

Community

Khoury College of Computer Sciences

Khoury professor earns Harvard Berkman Klein fellowship to research harms of dead AI systems

Northeastern's Khoury College shares the news of Upol Ehsan's BKC Fellowship.

Jul 1, 2025
Channel 24 Bangladesh

The AI Storm is Coming - Is Bangladesh Ready?

Fellow Upol Ehsan discusses his own academic background and surveys the challenges of defining AI and setting the parameters for its application in the Global South.

Jan 30, 2025
arXiv

Seamful XAI: Operationalizing Seamful Design in Explainable AI

Can AI errors help users?

Mar 5, 2024