Fairness and AI

Sandra Wachter on why fairness cannot be automated

How European Union non-discrimination laws are interpreted and enforced varies by context and by each member state's definitions of key terms, such as "gender" or "religion." These laws become even more challenging to apply when discrimination, whether direct or indirect, stems not from an individual or an organization but from an algorithm's training data. In some cases, people may not even be aware they have been discriminated against because the algorithms involved are "black boxes."

Sandra Wachter, a Faculty Associate at the Berkman Klein Center; Visiting Professor at Harvard Law School; and Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, Robotics and Internet Regulation at the Oxford Internet Institute (OII), University of Oxford, joined BKC's virtual luncheon series to discuss these issues and why fairness cannot be automated.

Read about the talk, including Wachter's answers to participants' questions, on the BKC Medium Collection.

This event is part of the AI Policy Practice.
