AI: Autonomous Vehicles
Americans spend over 290 hours per year driving, making it one of the most common (and dangerous) points of human-machine interaction. As vehicles become increasingly automated, it is imperative not only to improve these human-machine interactions but also to anticipate what automation means for the future of labor; to explore how these vehicles will push the limits of existing governance frameworks as they cross geographic boundaries; to understand how autonomous vehicles may reinforce existing biases through their inability to drive through unmapped areas, such as the poorest neighborhoods in the Global South; and to identify the forms of transparency needed to build new accident liability regimes. The challenges are clear, but solutions have so far proven elusive due to a lack of coordination among the many stakeholders.
The Berkman Klein Center and the MIT Media Lab are working with automotive companies, regulators, engineers, ethicists, and civil society organizations to collectively develop solutions to these challenges. Collaborating with the Institute for Advanced Study in Toulouse (IAST), we recently convened a Symposium on Trust and Ethics of Autonomous Vehicles (STEAV) that brought together scientists, engineers, lawmakers, and car manufacturers to discuss ethical considerations and trust in autonomous vehicles and to make progress toward answering the hard questions outlined above. Additionally, the Cyberlaw Clinic at Harvard Law School works with law students to provide AI-related legal guidance to nonprofits and public interest-oriented startups facing unique AI legal challenges, such as navigating the complex regulatory landscape and unusual liability risks that accompany autonomous vehicles.