
    Trusting the Moral Judgments of a Robot: Perceived Moral Competence and Humanlikeness of a GPT-3 Enabled AI

    Advancements in computing power and foundational modeling have enabled artificial intelligence (AI) to respond to moral queries with surprising accuracy. This raises the question of whether we trust AI to influence human moral decision-making, which has so far been a uniquely human activity. We explored how a machine agent trained to respond to moral queries (Delphi, Jiang et al., 2021) is perceived by human questioners. Participants were tasked with querying the agent with the goal of figuring out whether the agent, presented as a humanlike robot or a web client, was morally competent and could be trusted. Participants rated the moral competence and perceived morality of both agents as high, yet found the agent lacking because it could not provide justifications for its moral judgments. While both agents were also rated highly on trustworthiness, participants had little intention to rely on such an agent in the future. This work presents an important first evaluation of a morally competent algorithm integrated with a humanlike platform that could advance the development of moral robot advisors.

    Ethics 4.0: ethical dilemmas in healthcare mediated by social robots

    This study examined people's moral judgments and trait perceptions toward a healthcare agent's response to a patient who refuses to take medication. A sample of 524 participants was randomly assigned to one of eight vignettes in which the type of healthcare agent (human vs. robot), the health message framing (emphasizing the health losses of not taking vs. the health gains of taking the medication), and the ethical decision (respecting the patient's autonomy vs. beneficence/nonmaleficence) were manipulated to investigate their effects on moral judgments (acceptance and responsibility) and trait perceptions (warmth, competence, trustworthiness). The results indicated that moral acceptance was higher when the agents respected the patient's autonomy than when the agents prioritized beneficence/nonmaleficence. Moral responsibility and perceived warmth were higher for the human agent than for the robot, and the agent who respected the patient's autonomy was perceived as warmer, but less competent and trustworthy, than the agent who decided for the patient's beneficence/nonmaleficence. Agents who prioritized beneficence/nonmaleficence and framed the health gains were also perceived as more trustworthy. Our findings contribute to the understanding of moral judgments in the healthcare domain mediated by both human and artificial healthcare agents.

    Are robots morally culpable? The role of intentionality and anthropomorphism

    Culpability for one’s actions arguably hinges on one’s intentions: a negative outcome is judged more harshly when done purposely versus accidentally (Zelazo, Helwig, & Lau, 1996). However, do children similarly apply this rule to a robot? And is this affected by their propensity to anthropomorphize? To investigate these questions, we tested 3- and 5-year-olds’ inferences of the intentions and culpability of two agents (human and robot) and whether their judgments were influenced by their general tendency to anthropomorphize. Participants (current N=63; 46% female) in two age groups (3 years: n=32, M=3.60 years, SD=.58; 5 years: n=31, M=5.55 years, SD=.33) were randomly assigned to condition: human, robot (socially contingent or non-contingent), or control. In the Dumbbell Task (Meltzoff, 1995), participants observed a video of either a human or a robot (socially contingent or non-contingent) attempting to pull apart a wooden dumbbell (i.e., an intended-but-failed action). The participant was then given the dumbbell. If children understood the agent as intentional (i.e., the agent was trying to pull the dumbbell apart), they should complete the intended-but-failed action (pull the dumbbell apart). Children who observed the robot or human agent’s intended-but-failed action were significantly more likely to pull the dumbbell apart than controls who did not observe the intended-but-failed action (ps<.05); completion did not differ by age group (p=.55), gender (p=.83), or robot or human conditions (ps>.86). In the Tower Task, participants viewed a video of the human or robot observing a person building a block tower, after which the human or robot agent knocked over the tower in a manner that could be construed as accidental or intentional. Participants judged the agent’s action in terms of acceptability, punishment, and intentionality (‘on accident’ or ‘on purpose’). ‘Culpability scores’ were calculated as the difference between acceptability and punishment judgments (higher culpability scores indicated that the act was less acceptable and more deserving of punishment). Children who thought the agent intentionally (versus accidentally) knocked over the tower viewed the act as less acceptable (M=1.36 vs. M=1.86, t(59)=2.13, p=.04), more deserving of punishment (M=3.28 vs. M=2.51, t(59)=-2.40, p=.02), and had higher culpability scores (M=1.88 vs. M=0.66, t(57)=2.61, p=.01). Children viewed the human as more culpable than the robot, as evidenced by higher culpability scores (p=.04). Finally, participants were administered the Individual Differences in Anthropomorphism Questionnaire-Child Form (Severson & Lemm, 2016). Children who scored higher on anthropomorphism viewed the robot, but not the human, as more deserving of punishment (r=.51, p=.01) and more culpable (r=.39, p=.01). Anthropomorphism was not linked to inferences of intentionality on the Dumbbell Task. Taken together, children inferred that a robot has intentions to the same degree as a human, and interpretations of intentionality were linked to moral culpability. Yet children viewed the robot as less culpable than a human. Importantly, children with greater tendencies to anthropomorphize were more likely to view the robot as morally culpable for its actions. These results provide converging evidence that children ascribe mental states to robots, consistent with previous research. In addition, the results provide evidence on how children’s tendencies to anthropomorphize contribute to their judgments about robots’ moral responsibility.
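
    The culpability score described above is a simple difference measure; judging from the reported group means (e.g., 3.28 − 1.36 ≈ 1.88), it corresponds to the punishment rating minus the acceptability rating. The Python sketch below illustrates that scoring with hypothetical ratings and variable names; it is not the authors' analysis code.

        # Minimal sketch (not the study's code): culpability as punishment minus
        # acceptability, consistent with the reported group means. Ratings and
        # variable names here are hypothetical, for illustration only.
        from dataclasses import dataclass
        from statistics import mean

        @dataclass
        class Judgment:
            acceptability: float  # higher = act seen as more acceptable
            punishment: float     # higher = act seen as more deserving of punishment

        def culpability(j: Judgment) -> float:
            # Higher score = less acceptable and more deserving of punishment.
            return j.punishment - j.acceptability

        intentional = [Judgment(1.0, 3.0), Judgment(2.0, 4.0), Judgment(1.0, 4.0)]
        accidental = [Judgment(2.0, 2.0), Judgment(2.0, 3.0), Judgment(1.0, 2.0)]

        print(mean(culpability(j) for j in intentional))  # ~2.33
        print(mean(culpability(j) for j in accidental))   # ~0.67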

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these similarities, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law, so there is little that is philosophically challenging in having robots be some of these agents, excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as unambiguous rules that are inherently uncontroversial in each application, and I consider the prospects for robotizing law on each conception, as well as the prospects for robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.

    Robots, Autonomy, and Responsibility

    We study whether robots can satisfy the conditions for agents fit to be held responsible in a normative sense, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. On the basis of Alfred R. Mele’s history-sensitive account of autonomy and responsibility, it can be argued that even if robots were to have all the capacities usually required for moral agency, their history as products of engineering would undermine their autonomy and thus their responsibility.

    And the Robot Asked "What do you say I am?" Can Artificial Intelligence Help Theologians and Scientists Understand Free Moral Agency?

    Concepts of human beings as free and morally responsible agents are shared culturally by scientists and Christian theologians. Accomplishments of the "artificial intelligence" (AI) branch of computer science now suggest the possibility of an advanced robot mimicking behaviors associated with free and morally responsible agency. The author analyzes some specific features theology has expected of such agency, inquiring whether appropriate AI resources are available for incorporating those features in robots. Setting aside the question of whether such extraordinary robots will ever be constructed, the analysis indicates that they could be, furnishing useful new scientific resources for understanding moral agency.

    Philosophical Signposts for Artificial Moral Agent Frameworks

    This article focuses on a particular issue under machine ethics, namely the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that accounts for the nature of Artificial Moral Agents may benefit from considering certain philosophical ideas, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, these philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.