Enabling Accountable Technical Oversight of Generative AI

JOIN US FOR A VIRTUAL CONVERSATION WITH JULIA ANGWIN, BRANDON SILVERMAN, AND RUMMAN CHOWDHURY FOR THE THIRD IN A SERIES ON ACCOUNTABLE TECHNICAL OVERSIGHT OF GENERATIVE AI

 

Julia Angwin, investigative journalist and New York Times contributing Opinion writer, joins Brandon Silverman, policy expert on data sharing and transparency and founder of CrowdTangle, for a conversation on enabling effective and accountable technical oversight of generative AI, moderated by Dr. Rumman Chowdhury, Responsible AI Fellow at the Berkman Klein Center. 

Generative AI is at a tipping point of adoption and impact, much like that of machine learning and AI years ago. But this time the stakes, regulatory environment, potential social, political, and economic effects of these technologies, and the exponential pace of adoption and evolution are different. There is legitimate concern about the social fabric of trustworthiness and democracy as models are incorporated into search engines, customer support, and other critical tools used by the public. Yet, these technologies also offer tremendous potential.

Technical interventions, such as data access and external assessments and audits, are a critical part of enabling effective and accountable oversight of generative AI. However, the current conversation often focuses solely on passing laws and regulations. While regulatory efforts are important, other sectors have a critical role to play in holding generative AI systems and the actors that create or use them accountable. How might we consider governance of generative AI across a spectrum? 

This conversation explores how expertise and capacities across society are already being leveraged, and might be amplified in the future, to enable a variety of kinds of technical oversight, particularly highlighting how journalism and civil society contribute to oversight through investigative reporting and external assessments.

This is the third in a series of virtual fireside chats exploring accountable technical oversight of generative AI. The first session was on “How is generative AI changing the landscape of AI harms?” and the second was on “Balancing Transparency and Security in Open Research on Generative AI.” Additionally, learn more about the Berkman Klein Center’s project on Responsible Generative AI for Accountable Technical Oversight, which is hosting these events.

About the speakers:

Dr. Rumman Chowdhury is a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University and currently runs Parity Consulting and the Parity Responsible Innovation Fund. She is also a Research Affiliate at the Minderoo Centre for Technology and Democracy at the University of Cambridge and a visiting researcher at the NYU Tandon School of Engineering. Previously, Dr. Chowdhury was the Director of the META (ML Ethics, Transparency, and Accountability) team at Twitter, leading a team of applied researchers and engineers to identify and mitigate algorithmic harms on the platform.

Julia Angwin is an award-winning investigative journalist and New York Times contributing Opinion writer. She founded The Markup, a nonprofit newsroom that investigates the impacts of technology on society, and is Entrepreneur in Residence at Columbia Journalism School’s Brown Institute. Julia was previously a senior reporter at the independent news organization ProPublica, where she led an investigative team that was a Finalist for a Pulitzer Prize in Explanatory Reporting in 2017 and won a Gerald Loeb Award in 2018. From 2000 to 2013, she was a reporter at The Wall Street Journal, where she led a privacy investigative team that was a Finalist for a Pulitzer Prize in Explanatory Reporting in 2011 and won a Gerald Loeb Award in 2010. In 2003, she was on a team of reporters at The Wall Street Journal that was awarded the Pulitzer Prize in Explanatory Reporting for coverage of corporate corruption. She is also the author of “Dragnet Nation: A Quest for Privacy, Security and Freedom in a World of Relentless Surveillance” (Times Books, 2014) and “Stealing MySpace: The Battle to Control the Most Popular Website in America” (Random House, March 2009).

Brandon Silverman is a policy expert on data sharing and transparency on social media, as well as an entrepreneur. He was formerly the CEO and Co-Founder of CrowdTangle, a social analytics tool acquired by Facebook in 2016 that the New York Times called “perhaps the greatest transparency tool in the history of social media.” He left CrowdTangle in 2020 and now speaks frequently about the role that transparency can play in helping build a better internet, including testifying before the US Senate and Australian Parliament, advising the European Commission and a wide variety of NGOs and advocacy groups, and providing regular commentary for media outlets ranging from the New York Times to CNN to the BBC. He is currently a Founding Fellow at the Integrity Institute, the Knight Research Fellow at George Washington University’s Institute for Data, Democracy and Politics, an advisor to New_Public, and a Member of the U.S. Speaker Bureau. He lives in Oakland with his wife and three young kids.

Past Event
Date: Monday, May 22, 2023
Time: 12:45 PM - 1:30 PM ET
Location: Berkman Klein Center for Internet & Society (Virtual), Cambridge, MA 02138 US
