
What are the OECD Principles on AI?

On May 22, 2019, 42 countries adopted the OECD’s Principles on AI. To help shape these principles, the OECD convened an expert group on AI governance to draft a set of recommendations. Ryan Budish, Assistant Director for Research at Berkman Klein, and Urs Gasser, Executive Director, participated in this expert group alongside representatives of 20 governments as well as leaders from the business, labour, civil society, academic, and science communities. The expert group’s proposals were reviewed by the OECD and developed into the OECD AI Principles ultimately adopted in May.

The OECD AI Principles identify five complementary values-based principles for the responsible stewardship of trustworthy AI:

  1. AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

  2. AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.

  3. There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

  4. AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed.

  5. Organisations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Learn more at the OECD
