Inside the Black Box

New tools reveal how AI models “see” users and raise questions about transparency

Researchers from Harvard’s Insight and Interaction Lab built an interpretability dashboard that shows a chatbot’s internal assumptions about a user — such as age, gender, class, and race — making visible what usually remains hidden inside generative AI systems. The project spotlights how these inferred models shape responses, sparking discussion about bias, privacy, and whether greater transparency should be required of AI developers.

“If we talk to another human, we in no way expect them to come in as a tabula rasa. And I think we should treat AI the same way — that it’s something that has a point of view.”
