Responsible Generative AI: Accountable Technical Oversight
Generative AI is at a tipping point of adoption and impact, much as machine learning and AI were years ago. But this time is different: the stakes, the regulatory environment, the potential social, political, and economic effects of these technologies, and the exponential pace of adoption and evolution have all changed. Now is the time to ask: How should regulators and companies enable meaningful transparency from generative AI technologies to ensure accountability to society?
Drawing on years of work by the center and its community on the governance of AI technologies, the Berkman Klein Center is exploring mechanisms for enabling accountable technical oversight of generative AI. Critical topics to be explored include: new developments in harms and their impacts, the balance between transparency and security in open research, and how to enable meaningful technical oversight within the nascent regulatory landscape. This work will surface and synthesize key themes and questions that regulators and independent technical auditors should understand and be prepared to address.
Why now?
First, these new technologies have enormous power both to magnify existing harms and to create new ones in ways we cannot yet imagine. For example, generative AI can help produce effective materials for propaganda campaigns at far greater speed, scale, and accuracy than was previously possible. And, absent empirical guardrails, models often return confidently incorrect answers, even citing “hallucinated” papers or other reference materials that do not exist. There is legitimate concern for the social fabric of trust and democracy as models are incorporated into search engines, customer support, and other critical tools used by the public.
Yet, these technologies are not without potential. There is a promising early focus on improving accessibility technologies through image-to-text generation. The potential for personalized education could be transformative. And the use of AI to improve design efficiencies could aid sustainability efforts. So how can we capture these benefits while mitigating the harms? What governance systems are needed? What technical capabilities must be developed for those governance systems to ensure the technology is accountable to societal needs? How can we protect both the end users of these AI systems and the workers tasked with making these systems safe?
Second, the pace of growth and evolution of these models is rapid. In releasing new models, companies are blurring the line between ‘research’ and ‘product’ while struggling to balance transparency and security. Given that these models are trained on public data, there are calls to operate them openly and as a public resource. Yet open-source models, while aspirational, may limit the ability to curb malicious actors. To enable responsible transparency, models of ‘open research’ must contend with concerns about malicious use.
Third, lawmakers are grappling with how to regulate these systems in a nascent AI regulatory environment. Many emerging and proposed regulations rely on technical auditing and data access as key transparency mechanisms to ensure accountability. Yet, for these mechanisms to serve as effective accountability tools, we need a clearer understanding of what (if anything) has changed with the leaps forward in generative AI, where the tradeoffs between transparency and security lie, and what technical capabilities are needed outside of industry to implement effective technical oversight.
The Berkman Klein Center is generating new insights, convening experts, and engaging policymakers with evidence-based solutions in the midst of this rapid advancement.