AI Triad: A Dialogue Across Differences

Fall Speaker Series

Artificial intelligence isn’t just a technology—it’s a battleground of competing values, incentives, and worldviews.

Accelerationists see AI as a force for human progress, unlocking new frontiers of innovation and economic growth. Safetyists warn that its rapid development could unleash existential risks that outpace our capacity for control. Skeptics take issue with salvation and doomsday narratives, instead grappling with how AI’s deployment amplifies existing social and economic inequalities. Too often, these camps speak past one another. Yet to chart a responsible path forward, their dialogue is essential.

This conversation brings together Jason Crawford (Roots of Progress), Amba Kak (AI Now Institute), and Brian McGrail (Center for AI Safety Action Fund) to explore the fault lines and shared assumptions among these major schools of thought, what we at the Berkman Klein Center call the AI Triad. Together, they examine where accelerationists, safetyists, and skeptics most productively disagree, and where their goals may unexpectedly align.

Moderated by BKC Faculty Director Jonathan Zittrain, this event is part of our ongoing effort to foster dialogue across divided intellectual communities and to surface the moral, technological, and empirical premises driving the AI debate.

Speakers

Jason Crawford

Jason Crawford is the founder of the Roots of Progress Institute, a nonprofit dedicated to building a culture of progress for the 21st century. He is the author of The Techno-Humanist Manifesto, forthcoming from MIT Press, and the host of the Progress Conference. Previously, he spent 18 years as a software engineer, engineering manager, and startup founder.

Amba Kak

Amba Kak has spent the last fifteen years designing and advocating for technology policy in the public interest, from net neutrality to privacy to algorithmic accountability, across government, industry, and civil society, and in many parts of the world. She brings this experience to her current role co-leading AI Now, a US-based research institute, where she leads work on diagnosing and developing actionable policy recommendations to address concerns about artificial intelligence and concentrated power. Amba recently completed her term as Senior Advisor on AI at the Federal Trade Commission. Prior to AI Now, she was Global Policy Advisor at Mozilla and served as legal advisor to India’s telecommunications regulator (TRAI) on net neutrality rules.

Brian McGrail

Brian McGrail is Policy Lead and Senior Counsel for the Center for AI Safety Action Fund, a nonpartisan advocacy organization dedicated to mitigating national security risks from advanced AI. He previously served as Senior Advisor to the Deputy Secretary of Commerce, covering the intersection of critical technologies and national security. Before that, Brian clerked for two federal judges and litigated in the private sector. He graduated from Yale Law School (Truman Scholar), Oxford University (Rhodes Scholar), and Williams College.

Past Event
Date: Wednesday, November 5, 2025
Time: 12:30 PM - 1:30 PM ET
Location: 1557 Massachusetts Ave., Multipurpose Room, 5th Floor, Cambridge, MA 02138, US
