Government institutions around the globe are beginning to explore decision automation in a variety of contexts: determining eligibility for services, evaluating where to deploy health inspectors and law enforcement personnel, and defining boundaries around voting districts. Use cases for technologies that incorporate AI or machine learning will expand as governments and companies amass larger quantities of data and analytical tools become more powerful.
Our work on algorithms and justice (a) explores ways in which government institutions incorporate artificial intelligence, algorithms, and machine learning technologies into their decisionmaking; and (b) in collaboration with the Global Governance track, examines ways in which the development and deployment of these technologies by both public and private actors impact the rights of individuals and efforts to achieve social justice. Our aim is to help companies that create such tools, state actors that procure and deploy them, and the citizens they affect to understand how those tools work. We seek to ensure that algorithmic applications are developed and used with an eye toward improving fairness and efficacy without sacrificing the values of accountability and transparency.
We engage directly with government officials (including through the Center’s AGTech Forum) to facilitate learning and idea-sharing around emerging AI issues. We are developing a database of risk assessment tools currently in use to enable comparisons and empower stakeholders to make informed decisions. We write and publish on topics ranging from the interpretability and explainability of algorithmic decisions, to the human rights implications of artificial intelligence, to best practices for governments considering adoption of these technologies. And we build courses, educational materials, clinical projects, and teaching curricula to help train future generations of lawyers to respond to the legal, policy, regulatory, ethical, and social challenges presented by AI.