Prof. Dr. Sandra Wachter is Professor of Technology and Regulation at the Oxford Internet Institute, University of Oxford, where she leads and coordinates the Governance of Emerging Technologies (GET) Research Programme, which investigates the legal and ethical implications of AI, Big Data, and robotics, as well as Internet and platform regulation. Professor Wachter also serves as a policy advisor for governments, companies, and NGOs around the world on regulatory and ethical questions concerning emerging technologies.
Professor Wachter publishes widely in top journals, including Science and Nature, and focuses on legal, ethical, and technical aspects of AI and inferential analytics, explainable AI, algorithmic bias, platform regulation, profiling, and emotion- and facial-recognition software. The societal impact of generative AI and hallucinations in areas such as the future of work, misinformation, the free press, and human rights is also at the heart of her research agenda. Professor Wachter is an affiliate or member of numerous institutions, including the Berkman Klein Center for Internet & Society at Harvard University, the World Economic Forum’s Global Futures Council on Values, Ethics and Innovation, UNESCO, the European Parliament Working Group on AI Liability, the Law Committee of the IEEE, the World Bank’s Task Force on Access to Justice and Technology, the United Kingdom Police Ethics Guidance Group, the British Standards Institution, the Law Faculty at Oxford, the Bonavero Institute of Human Rights, the Oxford Martin School, and Oxford University Press. Previously, Professor Wachter was a Visiting Professor at Harvard Law School. Prior to joining the OII, she studied at the University of Oxford and the Law Faculty of the University of Vienna. She has also worked at the Alan Turing Institute, the Royal Academy of Engineering, and the Austrian Ministry of Health.
Professor Wachter has been the subject of numerous media profiles, including by the Financial Times, Wired, Nature, TechCrunch, Der Spiegel, and Business Insider. Her work has been prominently featured in several documentaries, including pieces by Wired, Reuters, and the BBC, and has been extensively covered by The New York Times, Time Magazine, Reuters, Forbes, Fortune, CNN, Harvard Business Review, The Guardian, the BBC, The Telegraph, CNBC, CBC, the Huffington Post, The Washington Post, Science, Nature, MIT Tech Review, New Scientist, HBO, The Sunday Times, and Vice Magazine.
Professor Wachter has received numerous awards, including the Alexander von Humboldt Foundation Research Award (2025), which grants €3.5 million in funding; the O2RB Excellence in Impact Award (2018 and 2021); the Computer Weekly Women in UK Tech Award (2021); the Privacy Law Scholars Conference (PLSC) Award (2019) for her paper "A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI"; and the CognitionX Outstanding Achievements & Research Contributions in AI Ethics Award (2017 and 2023) for her contributions to AI governance. Professor Wachter’s work on opening the ‘AI black box’ to increase accountability, transparency, and explainability has had wide practical impact.
Her explainability tool, Counterfactual Explanations, has been implemented by major tech companies such as Google, Accenture, IBM, Microsoft, Arthur, and Vodafone. Professor Wachter’s work on combating bias has shown that the majority (13 of 20) of popular bias tests and tools do not live up to the standards of EU non-discrimination law. In response, she developed a bias test (‘Conditional Demographic Disparity’, or CDD) that meets EU and UK standards; Amazon and IBM have since adopted her work and implemented it in their cloud services. In 2024, CDD was used to uncover systemic bias in education in the Netherlands, and the Dutch Minister for Education, Culture and Science apologised for indirect discrimination and is now working to improve the algorithmic system in question.
Wachter’s paper "The Unfairness of Fair Machine Learning" revealed the harmful impact of enforcing many ‘group fairness’ measures in practice by making everyone worse off, rather than helping disadvantaged groups. The NHS and the Medicines and Healthcare products Regulatory Agency (MHRA) is now using these findings internally to revise practices for licensing medical devices to ensure equal and safe access to medical care. Her work on generative AI and hallucinations explores if LLMs have a legal duty to tell the truth and focuses on tools to reduce hallucinations, inaccurate and harmful outputs or what she termed ‘careless speech’ to prevent the erosion of knowledge, facts, and shared history and to curb misinformation.