Innocence over Utilitarianism: Heightened Moral Standards for Robots in Rescue Dilemmas

Abstract

With rapid developments in robotics and artificial intelligence, the prospect of automating rescue operations and protecting trained professionals from life-threatening risk is becoming increasingly viable. What moral standards do people expect rescue robots to enforce? Previous research has emphasized the notion that robots are expected to conform to specifically utilitarian standards. In a series of seven experiments (total N = 3,752) and one public survey (N ≈ 19,000), we compared people’s evaluations of human and robotic rescue agents in the context of boating accidents, while manipulating the victims’ negligence. Relative to human lifeguards, robots of various kinds are expected to save innocent lives, even when doing so entails sacrificing a larger number of negligent individuals (Studies 1-2b). This finding was replicated in a large-scale web survey (Study 3) and reversed when the victims were matched in their degree of negligence (Study 4). In sum, robots are not merely expected to be more utilitarian, but rather are held to higher moral standards altogether.
