
    The Role of Accounts and Apologies in Mitigating Blame toward Human and Machine Agents

    Would you trust a machine to make life-or-death decisions about your health and safety? Machines today are capable of achieving much more than they could 30 years ago—and the same will be said for machines that exist 30 years from now. The rise of intelligence in machines has led humans to entrust them with ever-increasing responsibility. With this has arisen the question of whether machines should be given responsibility equal to that of humans—or whether humans will ever perceive machines as accountable for such responsibility. For example, if an intelligent machine accidentally harms a person, should it be blamed for its mistake? Should it be trusted to continue interacting with humans? Furthermore, how does the assignment of moral blame and trustworthiness to machines compare with such assignment to humans who harm others? I answer these questions by exploring differences in the moral blame and trustworthiness attributed to human and machine agents who make harmful moral mistakes. Additionally, I examine whether knowledge of the reason for the harmful incident, the type of reason given, and the presence of an apology affect perceptions of the parties involved. To fill the gaps in understanding between topics in moral psychology, cognitive psychology, and artificial intelligence, valuable information from each of these fields has been combined to guide the research study presented herein.

    Mindful Explanations: Prevalence and Impact of Mind Attribution in XAI Research

    When users perceive AI systems as mindful, independent agents, they hold them responsible instead of the AI experts who created and designed these systems. So far, it has not been studied whether explanations support this shift in responsibility through the use of mind-attributing verbs like "to think". To better understand the prevalence of mind-attributing explanations, we analyse AI explanations in 3,533 explainable AI (XAI) research articles from the Semantic Scholar Open Research Corpus (S2ORC). Using methods from semantic shift detection, we identify three dominant types of mind attribution: (1) metaphorical (e.g. "to learn" or "to predict"), (2) awareness (e.g. "to consider"), and (3) agency (e.g. "to make decisions"). We then analyse the impact of mind-attributing explanations on awareness and responsibility in a vignette-based experiment with 199 participants. We find that participants who were given a mind-attributing explanation were more likely to rate the AI system as aware of the harm it caused. Moreover, the mind-attributing explanation had a responsibility-concealing effect: considering the AI experts' involvement led to reduced ratings of AI responsibility for participants who were given a non-mind-attributing or no explanation. In contrast, participants who read the mind-attributing explanation still held the AI system responsible despite considering the AI experts' involvement. Taken together, our work underlines the need to carefully phrase explanations about AI systems in scientific writing to reduce mind attribution and clearly communicate human responsibility. Comment: 21 pages, 6 figures, to be published in PACM HCI (CSCW '24).
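    As a rough illustration of the kind of corpus pass described in this abstract, the sketch below flags mind-attributing verbs in sentences about an AI system. It is a hypothetical keyword-based approximation, not the authors' semantic-shift-detection pipeline; the verb lists, category labels, and example sentences are assumptions based only on the examples quoted above.

    # Hypothetical sketch: flag mind-attributing verbs in sentences about an AI system.
    # Keyword matching stands in for the semantic shift detection used in the study.
    import re
    from collections import Counter

    # Small, assumed verb lists for the three categories named in the abstract.
    MIND_VERBS = {
        "metaphorical": {"learn", "learns", "learned", "predict", "predicts", "predicted"},
        "awareness": {"consider", "considers", "considered", "think", "thinks", "thought"},
        "agency": {"decide", "decides", "decided", "choose", "chooses", "chose"},
    }

    def classify_sentence(sentence: str) -> Counter:
        """Count the mind-attribution categories whose verbs appear in the sentence."""
        tokens = set(re.findall(r"[a-z']+", sentence.lower()))
        return Counter(category for category, verbs in MIND_VERBS.items() if tokens & verbs)

    if __name__ == "__main__":
        examples = [  # invented example sentences, for illustration only
            "The model thinks the image contains a tumour.",
            "The system decided to reject the loan application.",
            "The classifier was trained on labelled data.",
        ]
        for sentence in examples:
            hits = classify_sentence(sentence)
            print(sentence, "->", dict(hits) if hits else "no mind attribution")

    A corpus study at the scale reported above would of course require lemmatisation and contextual disambiguation, which simple keyword matching does not provide.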

    Responses to Catastrophic AGI Risk: A Survey

    Many researchers have argued that humanity will create artificial general intelligence (AGI) within the next twenty to one hundred years. It has been suggested that AGI may inflict serious damage to human well-being on a global scale ('catastrophic risk'). After summarizing the arguments for why AGI may pose such a risk, we review the field's proposed responses to AGI risk. We consider societal proposals, proposals for external constraints on AGI behaviors, and proposals for creating AGIs that are safe due to their internal design.

    Remedies for Robots

    What happens when artificially intelligent robots misbehave? The question is not just hypothetical. As robotics and artificial intelligence systems increasingly integrate into our society, they will do bad things. We seek to explore what remedies the law can and should provide once a robot has caused harm. Remedies are sometimes designed to make plaintiffs whole by restoring them to the condition they would have been in “but for” the wrong. But they can also contain elements of moral judgment, punishment, and deterrence. In other instances, the law may order defendants to do (or stop doing) something unlawful or harmful. Each of these goals of remedies law, however, runs into difficulties when the bad actor in question is neither a person nor a corporation but a robot. We might order a robot—or, more realistically, the designer or owner of the robot—to pay for the damages it causes. But it turns out to be much harder for a judge to “order” a robot, rather than a human, to engage in or refrain from certain conduct. Robots can’t directly obey court orders not written in computer code. And bridging the translation gap between natural language and code is often harder than we might expect. This is particularly true of modern artificial intelligence techniques that empower machines to learn and modify their decision-making over time. If we don’t know how the robot “thinks,” we won’t know how to tell it to behave in a way likely to cause it to do what we actually want it to do. Moreover, if the ultimate goal of a legal remedy is to encourage good behavior or discourage bad behavior, punishing owners or designers for the behavior of their robots may not always make sense—if only for the simple reason that they didn’t act wrongfully in any meaningful way. The same problem affects injunctive relief. Courts are used to ordering people and companies to do (or stop doing) certain things, with a penalty of contempt of court for noncompliance. But ordering a robot to abstain from certain behavior won’t be trivial in many cases. And ordering it to take affirmative acts may prove even more problematic. In this Article, we begin to think about how we might design a system of remedies for robots. Robots will require us to rethink many of our current doctrines. They also offer important insights into the law of remedies we already apply to people and corporations.

    Responsibility and AI: Council of Europe Study DGI(2019)05


    Machine Medical Ethics

    In medical settings, machines are in close proximity with human beings: with patients who are in vulnerable states of health or who have disabilities of various kinds, with the very young or very old, and with medical professionals. Machines in these contexts are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, and respect for human dignity and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines be modeled? Is a capacity for empathy and emotion detection necessary? What about consciousness? The essays in this collection, by researchers from both the humanities and the sciences, describe theoretical and experimental approaches to adding medical ethics to a machine, the design features necessary to achieve this, philosophical and practical questions concerning justice, rights, decision-making, and responsibility, and ways to accurately model the essential physician-machine-patient relationships. This collection is the first book to address these 21st-century concerns.

    Technologies on the stand: Legal and ethical questions in neuroscience and robotics


    Metaphors Matter: Top-Down Effects on Anthropomorphism

    Anthropomorphism, or the attribution of human mental states and characteristics to non-human entities, has been widely demonstrated to be cued automatically by certain bottom-up appearance and behavioral features in machines. In this thesis, I argue that the potential for top-down effects to influence anthropomorphism has so far been underexplored. I motivate and then report the results of a new empirical study suggesting that top-down linguistic cues, including anthropomorphic metaphors, personal pronouns, and other grammatical constructions, increase anthropomorphism of a robot. As robots and other machines become more integrated into human society and our daily lives, a more thorough understanding of the process of anthropomorphism becomes increasingly critical: the cues that cause it, the human behaviors it elicits, the underlying mechanisms in human cognition, and the implications of our influenced thought, talk, and treatment of robots for our social and ethical frameworks. In these respects, as I argue in this thesis and as the results of the new empirical study suggest, top-down effects matter.