Who Should Bear the Risk When Self-Driving Vehicles Crash?

Abstract

The moral importance of liability to harm has so far been ignored in the lively debate about what self-driving vehicles should be programmed to do when an accident is inevitable. But liability matters a great deal to the just distribution of risk of harm. While morality sometimes requires simply minimizing relevant harms, this is not so when one party is liable to harm in virtue of voluntarily engaging in an activity that foreseeably creates a risky situation, while having reasonable alternatives. On plausible assumptions, merely choosing to use a self-driving vehicle typically gives rise to a degree of liability, so that such vehicles should be programmed to shift the risk from bystanders to users, other things being equal. Insofar as such vehicles cannot be programmed to take all the factors affecting liability into account, there is a pro tanto moral reason not to introduce them, or to restrict their use.