Having our cake and eating it too

How to develop AI competitively without falling victim to collective action problems

It has been argued that competitive pressures could cause AI developers to cut corners on the safety of their systems. If this is true, however, why don't we see this dynamic play out more often in other private markets?

In this talk, Amanda Askell, a research scientist in ethics and policy at OpenAI, outlines the standard incentives to produce safe products: market incentives, liability law, and regulation. Askell argues that if these incentives are too weak, because of information asymmetries or other factors, competitive pressure could cause firms to invest in safety below the socially optimal level.

In such circumstances, responsible AI development becomes a kind of collective action problem. Askell develops a conceptual framework to help identify levers for improving the prospects of cooperation in this kind of problem.

Lunch will be served.

Presenter Bio

Amanda Askell is a research scientist in ethics and policy at OpenAI, where she works on topics like responsible AI development, cooperation, and safety via debate. Before joining OpenAI she completed a PhD at NYU with a thesis on infinite ethics. She also holds a BPhil in philosophy from the University of Oxford and has written articles for Vox, The Guardian, and Quartz.

This event is supported by the Ethics and Governance of Artificial Intelligence Initiative at the Berkman Klein Center for Internet & Society. In conjunction with the MIT Media Lab, the Initiative is developing activities, research, and tools to ensure that fast-advancing AI serves the public good.
Past Event
Apr 29, 2019
Time: 12:00 PM - 1:00 PM ET
Location: Harvard Law School, Wasserstein Hall, Milstein East C (Second Floor), Cambridge, MA 02138 US