Human error causes at least 90 percent of the 5.25 million car accidents that occur in the United States each year. Could driverless cars save lives? Yes, but the road to get there may be long.
“Autonomous vehicles (AVs) are never drunk or tired or inattentive,” says Harvard Business School Assistant Professor Julian De Freitas. “We expect that they will make roads truly much safer.”
In fact, De Freitas and colleagues recently argued in the Proceedings of the National Academy of Sciences that adopting driverless vehicles on a broad scale could improve global health on a level equivalent to the introduction of penicillin or vaccines. “So many people are injured and dying on the roads every year, and we are allowing that to happen,” says De Freitas. “That’s an ethical choice.”
"People want to be able to say, it’s behaving in a way that coheres with my understanding of what common sense driving looks like."
Yet even as car manufacturers, tech companies, and ride-hailing services move forward with plans to deploy AVs, widespread adoption has been slow, largely because AVs face a huge challenge: Many people are uneasy about relinquishing control to machines and their preprogrammed agendas. Consider, for example, the outrage Mercedes caused five years ago when, in an effort to allay the fears of its drivers, it announced that its AVs would be programmed to protect the lives of their occupants, even if it meant sacrificing any pedestrians in their path.
The complex driving calculations these machines have to make on the spur of the moment are at the heart of the discomfort consumers feel about autonomous cars, says De Freitas. It’s akin to the famous philosophical “trolley problem,” in which a subject must decide whether to let a runaway trolley kill five people or flip a switch so that it kills only one.
While such damned-if-you-do, damned-if-you-don’t ethical dilemmas have traditionally been the focus of public debates about AV algorithm design, they are the wrong way to look at the issue, say De Freitas and his co-authors, who include Perceptive Automata CEO Sam Anthony and Luigi Di Lillo of Swiss Reinsurance Company.
To get consumers to sign on to the technology, the industry must build trust by programming the cars to behave with the “common sense” human drivers tend to apply when navigating complex or dangerous driving moments, the research team concludes.
“People want to be able to look at a vehicle and say, it’s behaving in a way that coheres with my understanding of what common sense driving looks like,” De Freitas says.
Can AVs bend road rules?
Making a car mimic the best of human behavior, he and his colleagues propose, depends less on how a vehicle applies road rules to extreme “driverless dilemmas,” and more on how it can make decisions in everyday cases that lie on the edge of hard-and-fast driving rules.
“Human drivers take a test, and we then assume that they are going to use our shared human psychology to reasonably deal with what they encounter beyond the test,” De Freitas explains. “But we can’t just make the same assumptions about AVs because common sense is part of what’s being engineered into their systems in the first place.”
"The solution is not to make AVs quickly accelerate through a crosswalk, as humans often do."
Getting AVs to drive with common sense may partly entail making them aware of what humans expect of other drivers. For example, an AV that slows down near crosswalks could lead a pedestrian to infer that the AV has detected her, even when it hasn’t. Making the AV safer may require mimicking some aspects of human driving while replacing others, for instance, by sending a more explicit signal that the AV is about to stop.
“The solution is not to make AVs quickly accelerate through a crosswalk, as humans often do,” says De Freitas.
More broadly, although human drivers are the most familiar benchmark, we should not blindly treat them as the gold standard, he says. After all, humans often drive unsafely: They may drive a little over the speed limit, fail to come to a complete stop at a stop sign, or speed through a crosswalk instead of stopping when a pedestrian is just starting to step into the road.
The question, De Freitas says, should not be whether AVs should acclimatize to humans or vice versa. Instead, we should be asking: Which system is the safest?
“In some cases, we may need to reconceive entire driving systems rather than just acclimatize drivers. AVs should push everyone to improve,” he says.
Tests and more tests
Training the cars to handle a wide variety of driving scenarios and testing them both in simulation and on real roads is key, De Freitas says. AVs, for example, should be able to handle situations such as navigating around large debris in the road, pulling over to make way for emergency vehicles, and figuring out what to do at an intersection if a traffic light fails.
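To make that concrete, here is a toy Python sketch of what a scenario-based regression check might look like. Everything in it, the observation fields, the maneuver names, and the stand-in policy, is hypothetical and invented for illustration; real AV stacks are validated in full physics simulators and on closed courses, not with a checklist like this.

```python
# Toy sketch of scenario-based testing (all names hypothetical).

def handles_debris(policy):
    # Expect the policy to route around a blocked lane.
    return policy({"lane_blocked": True, "siren": False, "light": "green"}) == "change_lane"

def handles_emergency_vehicle(policy):
    # Expect the policy to pull over when a siren is detected.
    return policy({"lane_blocked": False, "siren": True, "light": "green"}) == "pull_over"

def handles_failed_light(policy):
    # Expect the policy to treat a dead signal as an all-way stop.
    return policy({"lane_blocked": False, "siren": False, "light": "failed"}) == "treat_as_stop_sign"

SCENARIOS = [handles_debris, handles_emergency_vehicle, handles_failed_light]

def toy_policy(obs):
    """A stand-in driving policy mapping an observation to a maneuver."""
    if obs["siren"]:
        return "pull_over"          # emergency vehicles take priority
    if obs["lane_blocked"]:
        return "change_lane"
    if obs["light"] == "failed":
        return "treat_as_stop_sign"
    return "proceed"

for check in SCENARIOS:
    print(check.__name__, "PASS" if check(toy_policy) else "FAIL")
```

A failure on any scenario would flag the policy for further simulation and road testing before deployment.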
In testing new AVs, De Freitas and his colleagues propose that companies should adopt a standard they refer to as SPRUCE—teaching AVs to drive in a manner that is:
- Safe—“does not harm others or put others at unreasonable risk of harm;”
- Predictable—"AV’s maneuvers can be anticipated from past behavior;”
- Reasonable—"does not offend notions of logic or justice;”
- Uniform—“treats seemingly like situations alike;”
- Comfortable—“physically and psychologically smooth;” and
- Explainable—“fits in an accessible narrative of cause and effect.”
For instance, while an AV might find a highly efficient new way to navigate around an obstacle, it may be safer overall to sacrifice some of that efficiency in favor of a more predictable maneuver that better fits what other drivers understand and expect.
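One can imagine encoding SPRUCE as a simple review rubric for candidate maneuvers, as in the Python sketch below. The Maneuver fields and every numeric threshold are invented for illustration; the researchers state the criteria qualitatively, not as numbers.

```python
# Hedged sketch: SPRUCE as a rubric over a candidate maneuver.
from dataclasses import dataclass

@dataclass
class Maneuver:
    collision_risk: float      # estimated probability of harming someone
    surprise: float            # deviation from the AV's past behavior (0-1)
    rule_violations: int       # count of hard road rules broken
    precedent_gap: float       # difference from how like situations were handled
    peak_jerk: float           # physical smoothness proxy, m/s^3
    has_causal_story: bool     # can the maneuver be narrated as cause and effect?

def spruce_report(m: Maneuver) -> dict:
    return {
        "Safe":        m.collision_risk < 0.01,
        "Predictable": m.surprise < 0.2,
        "Reasonable":  m.rule_violations == 0,
        "Uniform":     m.precedent_gap < 0.1,
        "Comfortable": m.peak_jerk < 2.0,
        "Explainable": m.has_causal_story,
    }

# An efficient but novel swerve: safe, yet it fails Predictable, Uniform,
# and Explainable, which is why the more boring maneuver may win.
print(spruce_report(Maneuver(0.005, 0.6, 0, 0.3, 1.5, False)))
```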
“We also shouldn’t assume there’s always a binary right or wrong answer,” says De Freitas, who has been studying the interplay between moral judgment and attention to one’s surroundings for the past seven years. “It’s more like a hierarchy of preferences. For instance, you might put safety first, following road rules second, and helping the flow of traffic third. But you also need to know when to violate these rules and by how much in order to drive in ways that are more practical.”
As an example, if a truck is parked in the middle of the road for long enough, the AV may need to briefly and safely violate a road rule, such as by crossing a solid white line, in order to maintain the flow of traffic.
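A minimal sketch of that hierarchy of preferences might weight safety far above road rules, and road rules above traffic flow, so that a small, safe rule violation can still beat blocking traffic. The weights, options, and numbers below are all hypothetical.

```python
# Hypothetical weighted-priority cost: safety >> rules >> flow.
SAFETY, RULES, FLOW = 100_000, 100, 1   # descending priority weights

def cost(option):
    return (SAFETY * option["collision_risk"]
            + RULES * option["rule_violation"]   # severity, 0-1
            + FLOW * option["seconds_blocked"])

options = [
    {"name": "wait behind the parked truck",
     "collision_risk": 0.0, "rule_violation": 0.0, "seconds_blocked": 300},
    {"name": "briefly cross the solid white line",
     "collision_risk": 0.001, "rule_violation": 0.3, "seconds_blocked": 10},
]

print(min(options, key=cost)["name"])
# -> "briefly cross the solid white line" (cost 140 vs. 300): the small,
# safe violation wins. Raise its collision_risk to 0.01 and waiting wins,
# because safety dominates the hierarchy.
```

The design choice here is deliberate: the weights trade off rather than forming an absolute ordering, which is what lets the planner decide when to bend a rule and by how much.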
Transparency will build trust
Ultimately, getting to commonsense AVs will involve more of a gradual process than a single testing event. Even after an AV is deployed, its software can be continuously improved via over-the-air updates. The process will also include adjusting to, and even setting, consumer expectations as people get more used to seeing AVs on the road, and start adjusting their own driving habits to anticipate how these driverless vehicles will react.
“It’s not like you can program a vehicle with an ideal set of instructions that is going to solve every problem,” says De Freitas, who joined the Marketing Unit at HBS in July after earning his doctorate in psychology at Harvard University. “It’s a more gradual process of integrating into this commonsense driving world.”
Beyond testing, it’s also important for AV manufacturers to be up front about any safety challenges they face and be transparent about what isn’t working.
“If companies can legitimately convince consumers that they are transparent, vigilant, and continuously improving, then consumers will reasonably trust them,” the researchers write.
About the Author
Michael Blanding is a writer based in the Boston area.
Do you think that autonomous vehicles will make roads safer? Why or why not?