Will Self-Driving Cars Feel Safe?

As self-driving cars begin to share the road with traditional cars, a critical question emerges: “How safe is safe enough?” This question is often framed in terms of objective estimates of risk, based on eliminating the crashes that people experience today. An equally important question may be: “Will people feel safer?” Objective demonstrations of safety based on current crash types are insufficient because:

  • Automation doesn’t simply replace people and eliminate human error; it changes their role and creates new failure modes
  • Even if self-driving cars are substantially safer than human-driven cars, people might feel less safe
  • Trust in self-driving cars depends on more than statistics

Automation changes, but does not eliminate, human error

Rapidly advancing vehicle automation promises substantial safety benefits. The often-cited statistic that human error contributes to 94% of crashes suggests that replacing the driver with automation will eliminate these errors and make driving safer. In other industries, this mistaken idea that technology can replace people and thereby eliminate their errors has been termed the substitution myth (Sarter, Woods, & Billings, 1997). In reality, technology changes the role of people rather than eliminating them, and so it changes, rather than eliminates, human error.

The car of the future might not crash because the driver is drunk or distracted, but it might suffer from deep-learning errors, where stop signs are missed because of a little snow, or from common-mode and network failures that lead to large multi-car crashes. Engineers thinking about safety tend to neglect these yet-to-be-seen failures as they focus on the benefits of technology. Demonstrating that automation can handle the crashes and near-crashes caused by human drivers may say little about the new types of crashes that automated vehicles might experience. It is easy to see the 94% of crashes due to human error, but the on-road exposure of automated vehicles is still essentially zero, so we have not yet had a chance to see the range of automation failures that will occur. Likewise, the millions of crashes drivers skillfully avoid tend to go unobserved and unappreciated by engineers seeking to eliminate driver error. The availability heuristic leads people to neglect abstract possibilities and what has not been experienced: out of sight, out of mind (Tversky & Kahneman, 1973). This feeds an optimism bias that exaggerates expectations and leaves blind spots in design and testing.

Automated vehicles might be safer, but people might feel less safe

Even if self-driving vehicles are safer, people might not feel that they are safer. Flying is far safer than driving, yet few people fear driving and many fear flying. The risk engineers calculate is not the risk people feel, and it is the risk people perceive that guides acceptance of new technology. Risk perception depends on both cognitive appraisal and emotional response, with the emotional response having the greater influence (Loewenstein, Weber, Hsee, & Welch, 2001).

Two dimensions of hazardous situations affect perceived risk: whether the hazard is controllable and limited in its consequences, and whether it is known and observable (Slovic, 1987). Uncontrollable, consequential, and unobservable risks constitute dread risk. Nuclear reactor accidents and terrorist attacks are uncontrolled and unobserved, and so are perceived as dread risk. Automated vehicles have elements of dread risk: they remove control, and aspects of their behavior are not easy to observe. People perceive dread risks as 1,000 times riskier than known and controllable risks (Slovic, 1987), so dread risk can disproportionately affect policy and behavior. If drivers view automated vehicles in terms of dread risk, automated vehicles would need to reduce the rate of fatalities from approximately 35,000 per year to 35 per year for drivers to perceive them as being as safe as manual driving.
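As a back-of-the-envelope sketch of this arithmetic (assuming, as a simplification, that the 1,000-fold perceived-risk multiplier translates directly into a tolerated fatality count):

\[
\text{tolerated fatalities} \approx \frac{35{,}000 \text{ per year}}{1{,}000} = 35 \text{ per year}
\]

That is, a thousandfold improvement in actual safety would be needed just to keep perceived safety on par with manual driving.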

The response to the terrorist attacks of September 11, 2001 demonstrates how dread risk might undermine the promise of self-driving vehicles. After the attacks, people avoided flying and drove more, which increased motor vehicle fatalities by a number comparable to the death toll of the attacks themselves (Blalock, Kadiyali, & Simon, 2009; Gigerenzer, 2004). Applied to self-driving vehicles, failing to mitigate dread risk could substantially undermine the potential of vehicle automation to improve traffic safety. Even if automated vehicles dramatically reduce crash rates, they will likely shift the perception of risk from something controlled and known to something closer to dread risk, potentially undermining people’s willingness to use them (Lee & Kolodge, 2018).

Trust in automation depends on stories, not statistics

Further widening the gap between how people and engineers think about risk is the tendency of people to focus on instances rather than aggregates: people think in stories rather than statistics. Automated vehicles are likely to fail in surprising, and possibly inscrutable, ways, often in situations that a person could have handled easily. Such “easy errors” tend to receive disproportionate attention and undermine trust in automation (Madhavan, Wiegmann, & Lacson, 2006). The stories people construct around these errors can spread and undermine public trust in vehicle automation (Lee & See, 2004). Most drivers feel that they drive more safely than the average driver, and this optimism bias further undermines their appreciation for the capabilities of the automation (Svenson, 1981). Just as engineers tend not to appreciate how well drivers avoid the crashes that never happen, riders in self-driving cars will likely fail to appreciate the crashes the car avoided.

References

Blalock, G., Kadiyali, V., & Simon, D. H. (2009). Driving fatalities after 9/11: A hidden cost of terrorism. Applied Economics, 41(14), 1717–1729.

Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15(4), 286–287.

Lee, J. D., & Kolodge, K. (2018). Understanding attitudes towards self-driving vehicles: Quantitative analysis of qualitative data. Proceedings of the Human Factors and Ergonomics Society Annual Meeting.

Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.

Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127(2), 267–286.

Madhavan, P., Wiegmann, D. A., & Lacson, F. C. (2006). Automation failures on tasks easily performed by operators undermine trust in automated aids. Human Factors, 48(2), 241–256.

Sarter, N. B., Woods, D. D., & Billings, C. E. (1997). Automation surprises. In G. Salvendy (Ed.), Handbook of Human Factors and Ergonomics (2nd ed., pp. 1926–1943). New York: Wiley.

Slovic, P. (1987). Perception of risk. Science, 236(4799), 280–285.

Svenson, O. (1981). Are we all less risky and more skillful than our fellow drivers? Acta Psychologica, 47, 143–148.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency. Cognitive Psychology, 5, 207–232.
