
Artificial Intelligence and the Law

The Initiative on Artificial Intelligence and the Law (IAIL) is a Harvard Law School initiative based at the Berkman Klein Center. Directed by Oren Bar-Gill and Cass Sunstein, the initiative focuses on new challenges and opportunities for the law created by the rise of artificial intelligence.

While AI can make enforcement and adjudication more effective, potentially reduce discrimination, and make the drafting of contracts, briefs, laws, regulations, and court opinions faster and less costly, it also has serious implications for broad societal issues such as consumer protection; investor protection; false advertising; privacy; misinformation; and discrimination and civil rights.

The initiative will sponsor and promote new work on these topics by both faculty and students, hold conferences and symposia, and issue preliminary reports on emerging topics. A book by Bar-Gill and Sunstein on algorithms and consumer protection, developed at Harvard Law School, is slated to be one of the early products.

Advisory Board

The IAIL is overseen by an advisory board consisting of law school faculty including Chris Bavitz, Yochai Benkler, John Coates, Benjamin Eidelson, Noah Feldman, Lawrence Lessig, Martha Minow, Ruth Okediji, Holger Spamann, David Wilkins, Crystal Yang, and Jonathan Zittrain.

Mailing List

Subscribe to the IAIL mailing list to receive updates about the initiative.

You can unsubscribe at any time by clicking the link in the footer of any email you receive from us, or by contacting us at iail-announce-request@eon.law.harvard.edu.

Faculty Projects

Bar-Gill, Oren and Sunstein, Cass: Algorithmic Harm in Consumer Markets

Sellers and service providers are increasingly using machine learning algorithms to set prices for and target products to individual consumers. While the use of algorithms can benefit consumers in some cases, it might harm them in others. We identify the conditions for algorithmic harm in consumer markets and explore legal policy responses that could mitigate such harm.

Bavitz, Chris, Risk Assessment Tools Database

This project, at the Berkman Klein Center, tracks use of actuarial risk assessment tools in the criminal legal system in the United States (including in the context of pretrial release decisions).

Bavitz, Chris, Promises and Perils of Artificial Intelligence for Creators

This project explores the promises and perils of artificial intelligence (in particular, generative AI tools) for creators, including impacts on the value of creative labor.

Lessig, Lawrence, AGI Circuit Breakers

How should we regulate to protect against misaligned AGI, and how can we ensure we retain control if capabilities exceed expectations?


Minow, Martha and Lévesque, Maroussia, Artificial Intelligence in the Judiciary

AI systems are increasingly used in adjudication and legal practice. We combine legal and technical expertise to map the substantive, procedural, and institutional implications and to suggest appropriate policy responses, with a particular focus on the use of, and policies about, AI systems in courts.

Spamann, Holger, Automated Detection of Judicial Reasons
Co-author: Stammbach, Dominik

We automate the detection of reason types in judicial opinions using a combination of rules and trained classifiers. This will allow the comparison of reasoning across time, space, and the judicial hierarchy. For now, we are working with English and German decisions; eventually, we want to expand to other jurisdictions including, of course, the U.S.

Spamann, Holger, Legal Text Deduplication / Simplification
Co-authors: Stuart Shieber, Elena Glassman, Corinna Coupette, Mirac Suzgun

Statutory language is often highly duplicative. This means the information density is low, but perhaps more importantly, it makes it hard for humans to comprehend the statute's text. We develop algorithms and GUIs that can shorten statutory text and improve comprehensibility with the help of humans-in-the-loop.

Yang, Crystal: Why Humans Override Algorithms

We are interested in understanding why humans override algorithmic recommendations and the welfare consequences of these overrides. We study patterns of overrides empirically and also plan to utilize surveys and qualitative interviews to better understand this phenomenon.

Zittrain, Jonathan, Adversarial Attacks on Medical Machine Learning
Co-authors: Samuel G. Finlayson, John D. Bowers, Joichi Ito, Andrew L. Beam, and Isaac S. Kohane

With public and academic attention increasingly focused on the role of machine learning in the health information economy, understanding these systems’ vulnerabilities becomes more critical. Even a small, carefully designed change in how inputs are presented to a system can completely alter its output, leading to wrong conclusions and dire consequences. These so-called adversarial attacks have expanded from a niche topic investigated by computer scientists to a conversation affecting the national healthcare system, with potential financial, legal, technical, and ethical implications. Far from discouraging continued innovation with medical machine learning, we call for engagement by experts across industries in pursuit of the efficient, broadly available, and effective health care that machine learning will enable.

Zittrain, Jonathan, Intellectual Debt

Machine learning exemplifies the concept of “intellectual debt,” wherein we gain insights we can use immediately and promise we’ll figure out the details later. Know now, understand tomorrow. Sometimes we pay off the debt quickly; sometimes, as with how aspirin lowers a fever, it takes a century; and sometimes we never pay it off at all. Loans can offer great leverage, be they money or ideas. Indebtedness also carries risks. For intellectual debt, these risks can be quite profound, both because we are borrowing as a society, and because new technologies of artificial intelligence – specifically, machine learning – are bringing the old model of drug discovery to a seemingly unlimited number of new areas of inquiry. Humanity’s intellectual credit line is undergoing an extraordinary, unasked-for bump up in its limit.

Prior Work

Victoria Angelova, Will Dobbie, Crystal Yang, Algorithmic Recommendations and Human Discretion (2022).

Oren Bar-Gill, Cass Sunstein, Inbal Talgam-Cohen, Algorithmic Harm in Consumer Markets, Harvard Public Law Working Paper No. 23-05 (2023).

Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, Cass Sunstein, Discrimination in the Age of Algorithms, Journal of Legal Analysis (2018).

Lawrence Lessig, The First Amendment Does Not Protect Replicants, Harvard Public Law Working Paper No. 21-34 (2021).

Cass Sunstein, The Use of Algorithms in Society, Review of Austrian Economics (2023).

Crystal Yang, Will Dobbie, Equal Protection Under Algorithms: A New Statistical and Legal Framework, Michigan Law Review (2020).


Our Work

Nov 1, 2023 @ 9:30 AM

Regulating AI: A Sisyphean Task?

A Breakfast Talk @ the Berkman Klein Center

Join us on November 1st for a breakfast talk about the future of AI regulation from a transatlantic perspective at the Berkman Klein Center for Internet and…

Oct 17, 2023 @ 12:00 PM

Artificial Intelligence and the Law

Lightning Talks with Experts

Join us for a series of lightning talks to kick off the Initiative on Artificial Intelligence and the Law...

News
Jul 17, 2023

Harvard Law School and Berkman Klein Center Announce New Initiative on Artificial Intelligence and the Law

Directed by Oren Bar-Gill and Cass Sunstein, the initiative will focus on new challenges and opportunities for the law created by the rise of AI...


People

Team

Oren Bar-Gill

Faculty Associate

Cass Sunstein

Faculty Associate

Holger Spamann

Faculty Associate

Crystal Yang

Faculty Associate

Larry Lessig

Faculty Associate


Related Projects & Tools

Responsible Generative AI: Accountable Technical Oversight

Generative AI is at a tipping point of adoption and impact, much like that of machine learning and AI years ago. But this time the stakes, regulatory environment, potential social…