Clearly Complex

A report from the Symposium on Trust and Ethics of Autonomous Vehicles (STEAV)

Sometimes clarity means embracing complexity. That seemed to be one overall implication of the Symposium on Trust and Ethics of Autonomous Vehicles (STEAV) convened at Harvard and the Massachusetts Institute of Technology. A second implication makes the first more urgent: autonomous vehicles — self-driving cars and trucks — are going to transform just about every aspect of our world.

The symposium at the end of May 2018 brought together more than seventy experts from around the world in fields as diverse as computer science, law and policy, philosophical ethics, city planning, and the insurance industry, to think about how we can shape the deployment of autonomous vehicles (AVs) so that our world is made more livable and fair. The event, kicked off by MIT Media Lab Director Joi Ito, was organized by Harvard’s Berkman Klein Center for Internet & Society, the MIT Media Lab, and the Institute for Advanced Study in Toulouse, as part of the Ethics and Governance of AI project.

Over the course of one and a half days, even relatively simple issues turned out to be not just complicated but shot through with uncertainties. This is true far beyond the canonical example, the Trolley Problem, which, according to an opening presentation by the Media Lab’s Iyad Rahwan, generates very different results across cultures, judging from the forty million responses submitted to the Moral Machine site. Later in the conference, Patrick Lin, a philosopher at Cal Poly, offered a robust defense of the Trolley Problem as a thought experiment that works much like a lab experiment: a constrained, simplified case that lets us see the effects of the variables. It is an artificial simplification that nevertheless gives rise to endless discussion. (More from Patrick here.)

Indeed, there seemed to be no simple answers to any of the many questions around the role and governance of AVs. For example, it’s not only that we don’t agree about how safe AVs should be before we permit their deployment, but, in the words of Laura Fraade-Blanar of the RAND Corporation, “The more precisely we want to know how safe a technology is, the more risk we have to accept in deploying it.” It would take five billion miles of road testing to give us statistical confidence that AVs could have a twenty percent lower fatal crash rate than human-driven vehicles; Google’s experimental AVs had logged about one-thousandth of that as of February 2018. Driving five thousand AVs twelve thousand miles each, for a total of sixty million miles, could give us statistically significant information about whether the rate of crashes for AVs was at least ten percent lower than for human-driven vehicles, but would yield no significant insights about the difference in fatalities, unless that difference turns out to be over eighty percent.
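To see where figures of this kind come from, here is a minimal back-of-the-envelope sketch, not the RAND analysis itself: it treats fatal crashes as a Poisson process, assumes a human baseline of roughly 1.1 fatalities per 100 million vehicle miles, and picks conventional confidence and power levels, all of which are assumptions rather than figures presented at the symposium.

```python
# A back-of-the-envelope sketch (not the RAND methodology itself) of how many
# miles an AV fleet must log before its fatal-crash rate can be statistically
# distinguished from the human baseline. All parameters are assumptions.
from math import sqrt
from scipy.stats import norm

def miles_needed(baseline_rate, relative_reduction, alpha=0.05, power=0.80):
    """Miles required for a one-sided test that the AV fatality rate is lower
    than the human baseline, treating fatal crashes as a Poisson process and
    using a normal approximation to the Poisson counts."""
    r0 = baseline_rate                              # human fatalities per mile
    r1 = baseline_rate * (1 - relative_reduction)   # hypothesized AV rate
    z_alpha, z_beta = norm.ppf(1 - alpha), norm.ppf(power)
    return ((z_alpha * sqrt(r0) + z_beta * sqrt(r1)) / (r0 - r1)) ** 2

# Assumed baseline: roughly 1.1 fatalities per 100 million vehicle miles.
miles = miles_needed(baseline_rate=1.1e-8, relative_reduction=0.20)
print(f"~{miles / 1e9:.0f} billion miles")
```

Under these particular assumptions the answer comes out on the order of ten billion miles; the exact figure shifts with the baseline rate, the confidence level, and the statistical power demanded, which is why estimates such as the five-billion-mile figure depend so heavily on the assumptions behind them.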

But of course it is yet more complex. To compare the risks and benefits posed by AVs and traditional cars, it would be helpful to have equivalent data. AVs record just about everything, generating up to a gigabyte of data per minute, so we’ll know every time one runs over a curb or swerves to avoid an obstacle. Humans, by contrast, usually don’t report such small incidents, or even fender-benders too minor to be worth alerting our insurance companies about. And then what do we compare AVs to? To all cars on the road, including thirty-year-old Corvairs that are on their last legs (or wheels), or to new cars in the AV’s price range?

So the comparative safety of AVs and traditional cars cannot be resolved without making non-trivial and non-obvious decisions about what we want to count as safe. Without that information, policy-makers and regulators will have difficulty knowing when to phase in AVs and how to assess their performance.

Then those decisions will have to be sold to citizens. According to data presented in a session, forty-seven percent of United States residents have already decided that AVs will not be safe. Chaiwoo Lee of the MIT AgeLab reported that men are more likely to accept AVs as a safe form of transportation, while the elderly are less comfortable with cars that do all the driving themselves — although the correlation with age may be indirect. She also noted that the acceptance of AVs may well depend on whether they will be on our streets as vehicles owned by individuals or as fleets of cars that one pays to ride, as with Uber or Lyft. And she reminded the symposium how drastically the public’s perception of risk can be altered by events: 40,000 people died in vehicle-related accidents in 2017 in the U.S., but a single highly publicized death in an AV could turn the public against the program, perhaps in part because, as one of the panelists said, quoting Kate Darling, “…people will feel far more emotional about an accident caused by a machine rather than a person.”

While avoiding fatalities was at the top of the list for everyone who raised her voice at the conference — this was, by the way, a fully gender-balanced event — that’s not the only value in the mix. For example, Noah Goodall of the Virginia Transportation Research Council suggested that the recent death of a pedestrian walking her bicycle, struck by an Uber AV in Arizona, conceivably could have resulted from how the AV was designed to balance values: if an AV brakes for every false positive generated by its collision detection system, the ride will be needlessly jerky and uncomfortable — an incentive to trade some safety for comfort.

The conference also heard from Christoph Lütge from Technische Universität München about the process by which the German government produced a code of ethics for governing AVs [pdf] that is gaining traction internationally. The document provides a more complete ordering of values, including putting the prevention of harm to humans above protecting property and animals, valuing all human lives equally regardless of categories such as age and race, and possibly “socially and ethically” mandating the use of AVs if they turn out to save lives.

Many countries are paying attention to Germany’s code of AV ethics, but universal agreement is unlikely, if only because of the sorts of cultural differences the Moral Machine site has documented, such as attitudes towards the elderly and even toward cats and dogs. This led to frequent discussion among the participants about how nations and even cities might try to harmonize their varying AV ethical codes based upon their local values. “Are we going to have left-side and right-side rules of driving, except this time for ethics?” asked one participant.

Aida Joaquin Acosta, who is affiliated with the Berkman Klein Center and is working with the Spanish government on how to govern AVs, was more hopeful. She noted that government policies can shape technology, and described some of the serious and well-funded European initiatives underway that could bring broad agreement and harmonization of AV policies.

In a conference lively with debate and disagreement, everyone agreed on at least one point: AVs are going to have some downsides. Even if AVs entirely eliminated the ninety-three percent of accidents caused by human error, that would still leave seven percent of crashes. So AVs are highly likely to be involved in fatal accidents. Further, a switch to AVs could throw millions of truck drivers and cab drivers out of work, with a disproportionate impact on the populations that traditionally have seized upon those employment opportunities. AVs may also seriously degrade city life (imagine driverless cars hurtling down your streets at 150 mph), exacerbate the disparities between urban and rural populations, and magnify the inequality of access to transportation in poorer parts of a city. While many hope that electric AVs will reduce the environmental impact of transportation, some of the conference participants fear that easy access to extremely low-cost travel could increase miles driven enough to outweigh the savings an all-electric system would bring. In fact, at least one participant believes that electric cars’ reliance on batteries simply shifts the release of carbon from the burning of gasoline to the mining of the minerals used in batteries.

Another liability was discussed extensively: privacy. Lauren Smith, policy counsel at the Future of Privacy Forum, said we should think about our cars as being more like smartphones than mechanical objects, and AVs even more so. Cars already collect lots of data about us, our vehicles, and our driving. A fleet of AVs may collect much more, since fleet managers will likely want to monitor the cars’ interiors for cleanliness, vandalism, and even bad smells. We’re likely to become accustomed to that since, as Smith pointed out, our expectations about privacy are already lower when we’re taking public transportation than when we’re in our own cars; many municipalities’ buses have surveillance cameras in them. But even privately owned AVs are unlikely to confine their data collection to the car or its driver. Information about passengers, pedestrians, road conditions, and other vehicles on the road will also be fed into these systems. In fact, some traditional cars are about to be given face-recognition software that will warn drivers when they appear to be tired. About all of this, Smith asks the crucial questions: Who will own or have access to this data? How should it be governed?

The potential inexplicability of the “decisions” AVs make may also vastly complicate the question of governance. These vehicles will be guided by the type of AI called machine learning, whose outputs are based on networks that may include tens of thousands of variables, each connected to many others and each with its own weight that determines when it is triggered. That is very different from the usual way of programming a computer: giving it a relatively simple model of how things interact and rules for determining the outcomes. There may be decisions made by these systems that the human brain simply cannot follow, especially if AVs are enabled to form ad hoc networks with the other AVs on the road so that they can collaboratively come up with actions that maximize the benefits we’ve told them to prefer.

Sandra Wachter, a lawyer and Research Fellow at the Oxford Internet Institute, however, suggested a way to understand AI decisions without having to trace through a system’s millions of connections. Suppose you suspect gender may have played a role in a machine learning system’s rejection of your loan application. Rather than trying to pick apart the AI’s overwhelmingly complex network of variables — none of which may directly express your gender — Wachter suggests an investigator could repeatedly re-submit the application with various small changes to see which variables, if changed, would have swung the decision in your favor. Such a procedure could pinpoint the important factors that led the AI to reject your application. (See this paper for more information.)
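As a rough illustration of that probing procedure (a simplified sketch, not Wachter’s actual method; the model, feature names, and values below are hypothetical), one could vary one feature at a time and record which single changes flip the outcome:

```python
# A simplified sketch of the probing idea described above: re-submit a rejected
# application with small changes and record which single-feature changes flip
# the decision. Not Wachter et al.'s exact algorithm; the model, features, and
# values here are hypothetical.

def single_feature_counterfactuals(predict, applicant, candidate_values):
    """Return the single-feature changes that turn a rejection into an approval.

    predict          -- black-box function: feature dict -> "approved"/"rejected"
    applicant        -- the original (rejected) application as a feature dict
    candidate_values -- for each feature, the alternative values worth trying
    """
    flips = []
    for feature, alternatives in candidate_values.items():
        for value in alternatives:
            probe = dict(applicant, **{feature: value})  # one change per probe
            if predict(probe) == "approved":
                flips.append((feature, value))
    return flips

# Hypothetical usage:
# flips = single_feature_counterfactuals(
#     loan_model.predict,
#     applicant={"income": 40_000, "debt": 12_000, "age": 29},
#     candidate_values={"income": [45_000, 50_000], "debt": [8_000, 5_000]},
# )
# If flips == [("income", 50_000)], income looks decisive, not gender.
```

If a modest change to income flips the decision, income rather than gender looks like the decisive factor; if no plausible single change flips it, the search can be widened to combinations of features.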

No matter how the question of explicability is resolved, it will deeply affect the processes by which governments and regulators come to decisions. Will the process become a “tyranny of statistics,” as one panelist suggested? Or perhaps, as an educator at the symposium proposed, ethical issues will come to the fore in our discussions. Jason Millar, a Stanford philosopher with an engineering background, is running “design thinking” workshops with engineering teams to help them learn how to create technology while keeping ethical outcomes in mind. This begins with reframing the discussion in terms of human values rather than ethical principles, and then thinking about how tech can best achieve those values.

That is an approach implicitly shared by Kristopher Carter at the Boston Mayor’s Office of New Urban Mechanics. He sees human values embedded in every aspect of city design. “We shape our streets. Thereafter, they shape us,” he said, playing on Winston Churchill’s quotation about buildings. For example, some Boston streets so favor vehicular traffic over pedestrians — overlong stoplights, poor placement of crosswalks — that they unintentionally encourage jaywalking. “How do we want AVs to prioritize pedestrians?” he asks. Then there are issues of fairness: Amazon’s machine learning algorithms determined that all of Boston would be eligible for same-day Prime delivery… except for the poorest part of town. Carter used this example to argue — not for the first time at the conference — that a purely results-based ethics is inadequate, for principles of fairness and equity should take precedence over utilitarian benefits.

All of these questions, and the many more discussed at the symposium, make governing AVs complicated, complex, and impossible to resolve perfectly. Yet some institution or confluence of social forces is going to have to provide the rules for the deployment of AVs and decide on the mechanisms of accountability for the harms AVs will inevitably bring about. Urs Gasser, Executive Director of the Berkman Klein Center, noted an overall difference in how European governments and the United States approach governance. The Europeans at the conference seemed generally to favor governments proactively promulgating ethical guidelines and policies, while Earl Weener, a member of the U.S. National Transportation Safety Board, took the successful governance of airline safety as a model: learn from forensic analyses of accidents and build a voluntary industry-government partnership along the lines of the Commercial Aviation Safety Team [pdf] that has proved so effective in making air travel safe.

Overall, STEAV showed the profundity of AI’s challenge to our settled ideas about accountability and morality. Conversations about the values we collectively want to support, the trade-offs we’re willing to make, and whom we hold responsible for the inevitable failures were already difficult; they are now further complicated by the fact that we’re asking automatons to act as proxies for our decision-making processes…and we may not always be able to understand how they came to their conclusions. This requires new thinking not just about what rules and regulations we want to impose on AVs but also about how we think about ethics and responsibility in the first place. For example, the University of Vienna’s Janina Loh analyzed the degree to which an AV could be held responsible for its acts by presenting a new model of a “network of responsibility,” with roles for the agents, actors, and objects of actions. That the AI enabling AVs can lead us to revise the very framework of responsibility is an indication of the profundity of the change we are living through.

Perhaps issues such as the ones discussed at the symposium will prompt more explicit discussions about the need to balance our values — safety vs. speed vs. environmental impact, etc. — and to become more committed to values such as fairness and nondiscrimination that we are not willing to trade for purely pragmatic benefits. Perhaps conversations such as the ones begun at STEAV will lead to clear decisions about the values we want AVs and other AI-driven systems to support, and the most effective laws, rules, norms, and market incentives for achieving our vision.

No matter what, these issues are not going away. Perhaps the single takeaway from STEAV was that it’s better that the conversations continue to deepen than to resolve themselves too early.

 

