Join Dr. Rumman Chowdhury, Responsible AI Fellow at the Berkman Klein Center, and Reva Schwartz, a research scientist at the National Institute of Standards and Technology (NIST) and the Principal Investigator on Bias in Artificial Intelligence for NIST's Trustworthy and Responsible AI program, for a discussion on the changing landscape of harms due to generative AI.
Generative AI is at a tipping point of adoption and impact, much as machine learning and AI were years ago. But this time the stakes are different: the regulatory environment, the potential social, political, and economic effects of these technologies, and the exponential pace of adoption and evolution have all changed. There is legitimate concern about the social fabric of trustworthiness and democracy as models are incorporated into search engines, customer support, and other critical tools used by the public. Yet these technologies also offer tremendous potential. In the United States, the Information Technology Laboratory (ITL) at NIST has led the development of the AI Risk Management Framework, a voluntary resource for organizations designing or using AI systems to manage risks and promote trustworthy and responsible AI.
Dr. Rumman Chowdhury and Reva Schwartz will discuss what, if anything, has changed about known algorithmic harms — such as bias and discrimination by algorithms, the creation of mis- and disinformation, and labor automation — with generative AI. Has generative AI introduced any new harms? How might policymakers consider and address these harms?
Dr. Rumman Chowdhury is a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University and currently runs Parity Consulting and the Parity Responsible Innovation Fund. She is also a Research Affiliate at the Minderoo Centre for Technology and Democracy at the University of Cambridge and a visiting researcher at the NYU Tandon School of Engineering. Previously, Dr. Chowdhury was the Director of the META (ML Ethics, Transparency, and Accountability) team at Twitter, where she led a team of applied researchers and engineers to identify and mitigate algorithmic harms on the platform.
Reva Schwartz is a research scientist in the Information Technology Laboratory (ITL) at the National Institute of Standards and Technology (NIST), where she serves as Principal Investigator on Bias in Artificial Intelligence for NIST's Trustworthy and Responsible AI program. Her research focuses on evaluating the trustworthiness of AI systems, studying their impacts, and advancing understanding of socio-technical systems within computational environments. She has advised federal agencies on how experts interact with automation to make sense of information in high-stakes settings.