In AI We Trust?

Urs Gasser shares some thoughts on how we learn to trust new technologies in a time of rapid change:

We are witnessing a wave of AI-based technologies that make their way out of the labs into industry- and user-facing applications, and we know from history that trust is an important factor that shapes the adoption of new technology. Given today’s quicksilver AI environment, it seems fair to ask: Do we already have the necessary trust in AI, and if not, how do we create it?

Read Urs Gasser's Medium post

You might also like


Projects & Tools

Past

AI: Transparency and Explainability

There are many ways to hold AI systems accountable. We focus on issues related to obtaining human-intelligible and human-actionable information.


Publications

Publication
Nov 20, 2017

A Layered Model for AI Governance

AI-based systems are often "black boxes," resulting in massive information asymmetries between the developers of such systems and consumers and policymakers.