Join Bruce Schneier, security researcher and fellow at the Berkman Klein Center for Internet and Society at Harvard University, for a conversation on balancing security and transparency in open research on generative AI, moderated by Sue Hendrickson, Executive Director of the Berkman Klein Center.
In releasing new generative AI models, companies are blurring the line between 'research' and 'product' while struggling to balance transparency and security. Because these models are trained on public data, there are calls to operate them openly and as a public resource. Yet open-source models, however aspirational, may reduce the ability to curb malicious actors. To enable responsible transparency, models of 'open research' must contend with security concerns. AI research organizations and companies weigh transparency against safety and risk differently, resulting in a variety of approaches to access to models, training data, and other components.
This conversation asks: How should companies weigh the tradeoffs between transparency and security when releasing their models and underlying training information? How should model builders and stakeholders balance the societal need to better understand these technologies against the security risks that might come from sharing training data or code? How do the risks shift as model availability becomes more decentralized?
Sue Hendrickson, Executive Director of the Berkman Klein Center, is a leading legal and policy strategist focused on cutting-edge technology and innovation. Her experience with complex legal, commercial, and public policy issues spans three decades of technology expansion. She has forged effective interdisciplinary and global alliances that enable leading international and civil society organizations, technology companies, investors, and philanthropists to embrace the promise and mitigate the risks of emerging technologies, from the early days of AOL to today's advanced AI.
Bruce Schneier is an internationally renowned security technologist, called a "security guru" by The Economist. He is the New York Times best-selling author of 14 books, including Click Here to Kill Everybody, as well as hundreds of articles, essays, and academic papers. His influential newsletter Crypto-Gram and blog Schneier on Security are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University, a Lecturer in Public Policy at the Harvard Kennedy School, a board member of the Electronic Frontier Foundation and AccessNow, and an advisory board member of EPIC and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.