
    Ethics 4.0: ethical dilemmas in healthcare mediated by social robots

    This study examined people's moral judgments of, and trait perceptions toward, a healthcare agent's response to a patient who refuses to take medication. A sample of 524 participants was randomly assigned to one of eight vignettes in which the type of healthcare agent (human vs. robot), the health message framing (emphasizing the health losses of not taking vs. the health gains of taking the medication), and the ethical decision (respect for autonomy vs. beneficence/nonmaleficence) were manipulated to investigate their effects on moral judgments (acceptance and responsibility) and trait perceptions (warmth, competence, trustworthiness). The results indicated that moral acceptance was higher when the agents respected the patient's autonomy than when they prioritized beneficence/nonmaleficence. Moral responsibility and perceived warmth were higher for the human agent than for the robot, and the agent who respected the patient's autonomy was perceived as warmer, but less competent and trustworthy, than the agent who decided in favour of the patient's beneficence/nonmaleficence. Agents who prioritized beneficence/nonmaleficence and framed the health gains were also perceived as more trustworthy. Our findings contribute to the understanding of moral judgments in the healthcare domain as mediated by both human and artificial healthcare agents.
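
    As a rough illustration of the 2 x 2 x 2 between-subjects design described above (agent x message framing x ethical decision, yielding eight vignettes), the sketch below shows how participants could be randomly assigned to conditions. It is a minimal sketch, not the authors' materials; the condition labels and function names are hypothetical.

```python
import random

# Factors of the 2 x 2 x 2 between-subjects vignette design described above.
AGENTS = ["human", "robot"]
FRAMINGS = ["health losses of not taking", "health gains of taking"]
DECISIONS = ["respect autonomy", "beneficence/nonmaleficence"]

# The eight vignette conditions are the Cartesian product of the three factors.
CONDITIONS = [(a, f, d) for a in AGENTS for f in FRAMINGS for d in DECISIONS]

def assign_vignettes(n_participants, seed=0):
    """Randomly assign each participant to one of the eight vignette conditions."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]

if __name__ == "__main__":
    assignments = assign_vignettes(524)  # sample size reported in the abstract
    print(len(CONDITIONS), assignments[0])
```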

    Are robots morally culpable? The role of intentionality and anthropomorphism

    Culpability for one’s actions arguably hinges on one’s intentions: a negative outcome is judged more harshly when done purposely rather than accidentally (Zelazo, Helwig, & Lau, 1996). However, do children apply this rule to a robot in the same way? And is this affected by their propensity to anthropomorphize? To investigate these questions, we tested 3- and 5-year-olds’ inferences about the intentions and culpability of two agents (human and robot) and whether their judgments were influenced by their general tendency to anthropomorphize. Participants (current N=63; 46% female) in two age groups (3 years: n=32, M=3.60 years, SD=.58; 5 years: n=31, M=5.55 years, SD=.33) were randomly assigned to condition: human, robot (socially contingent or non-contingent), or control. In the Dumbbell Task (Meltzoff, 1995), participants watched a video of either a human or a robot (socially contingent or non-contingent) attempting to pull apart a wooden dumbbell (i.e., an intended-but-failed action). The participant was then given the dumbbell. If children understood the agent as intentional (i.e., the agent was trying to pull the dumbbell apart), they should complete the intended-but-failed action (pull the dumbbell apart). Children who observed the robot or human agent’s intended-but-failed action were significantly more likely to pull the dumbbell apart than controls who did not observe the intended-but-failed action; completion did not differ by age group (p=.55), gender (p=.83), or across the robot and human conditions (ps>.86). In the Tower Task, participants viewed a video of the human or robot watching a person build a block tower, after which the human or robot agent knocked over the tower in a manner that could be construed as accidental or intentional. Participants judged the agent’s action in terms of acceptability, punishment, and intentionality (‘on accident’ or ‘on purpose’). ‘Culpability scores’ were calculated as the difference between acceptability and punishment judgments (higher culpability scores indicated lower acceptability and greater deserved punishment). Children who thought the agent intentionally (versus accidentally) knocked over the tower viewed the act as less acceptable (M=1.36 vs. M=1.86, t(59)=2.13, p=.04), as more deserving of punishment (M=3.28 vs. M=2.51, t(59)=-2.40, p=.02), and assigned it higher culpability scores (M=1.88 vs. M=0.66, t(57)=2.61, p=.01). Children viewed the human as more culpable than the robot, as evidenced by higher culpability scores (p=.04). Finally, participants were administered the Individual Differences in Anthropomorphism Questionnaire-Child Form (Severson & Lemm, 2016). Children who scored higher on anthropomorphism viewed the robot, but not the human, as more deserving of punishment (r=.51, p=.01) and more culpable (r=.39, p=.01). Anthropomorphism was not linked to inferences of intentionality on the Dumbbell Task. Taken together, children inferred that a robot has intentions to the same degree as a human, and interpretations of intentionality were linked to moral culpability. Yet children viewed the robot as less culpable than the human. Importantly, children with greater tendencies to anthropomorphize were more likely to view the robot as morally culpable for its actions. These results provide converging evidence that children ascribe mental states to robots, consistent with previous research. In addition, the results show how children’s tendencies to anthropomorphize contribute to their judgments about robots’ moral responsibility.
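
    A note on the ‘culpability scores’ above: the abstract defines them as the difference between acceptability and punishment judgments, with higher scores meaning lower acceptability and greater deserved punishment. One plausible reading, consistent with the reported means (e.g., 3.28 − 1.36 ≈ 1.88 in the intentional condition), is punishment minus acceptability; the sketch below assumes that reading and treats the rating scales as hypothetical.

```python
def culpability_score(acceptability, punishment):
    """Difference-based culpability score (assumed: punishment minus acceptability).

    Higher punishment ratings and lower acceptability ratings yield a higher
    score; the exact scale anchors are not given in the abstract, so the
    values below are used purely for illustration.
    """
    return punishment - acceptability

# Condition means reported in the abstract, plugged in for illustration:
print(culpability_score(acceptability=1.36, punishment=3.28))  # ~1.92 (reported M=1.88)
print(culpability_score(acceptability=1.86, punishment=2.51))  # ~0.65 (reported M=0.66)
```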

    Ethics 4.0: moral dilemmas in healthcare mediated by social robots (Ética 4.0: dilemas morais nos cuidados de saĂșde mediados por robĂŽs sociais)

    Artificial intelligence and social robots in healthcare open a new interdisciplinary field of research. In this study, we examined people's moral judgments about a healthcare agent's reaction to a patient who refuses a medication. For this purpose, we developed a moral dilemma that varied the type of healthcare agent (human vs. robot), the decision (respect for autonomy vs. beneficence/non-maleficence), and the argumentation (health benefit vs. health harm). We assessed the decision's moral acceptability, the agent's moral responsibility, and her traits of warmth, competence, and trustworthiness, as rated by 524 participants (350 women; 316 Brazilian, 179 Portuguese; 18-77 years old) randomly assigned to 8 vignettes in a between-subjects design administered through an online survey. Moral acceptability judgments were higher for the decision to respect patient autonomy, with similar evidence for both agents. Moral responsibility and perceived warmth were higher for the human agent than for the robot, and there were no differences in the agents' perceived competence and trustworthiness. Agents who respected autonomy were perceived as much warmer, with a larger effect size than for the other attributes, but as less competent and trustworthy than agents who decided for beneficence/non-maleficence. Agents who prioritized beneficence/non-maleficence and argued in terms of the health benefit were perceived as more trustworthy than agents in the other combinations of decision and argumentation. This research contributes to the understanding of moral judgments in the context of healthcare mediated by both human and artificial agents.

    Ethics of Artificial Intelligence

    Artificial intelligence (AI) is a digital technology that will be of major importance for the development of humanity in the near future. AI has raised fundamental questions about what we should do with such systems, what the systems themselves should do, what risks they involve, and how we can control these. After the background to the field (1), this article introduces the main debates (2), first on ethical issues that arise with AI systems as objects, i.e. tools made and used by humans; here, the main sections are privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy & responsibility (2.6) and the singularity (2.7). Then we look at AI systems as subjects, i.e. when ethics is for the AI systems themselves, in machine ethics (2.8) and artificial moral agency (2.9). Finally, we look at future developments and the concept of AI (3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how this plays out with current technologies, and finally consider what policy consequences may be drawn.

    Do Cute Nursing Robots Get a Free Pass?: Exploring How a Robot’s Appearance Influences Human Judgments on Forced Medication Decision

    Objectives. The effects of social robots' appearance on humans have scarcely been studied, even though social robots are becoming increasingly common in professional contexts, especially in healthcare. The aim of this thesis is first to examine differences in the moral condemnation of forced medication carried out by a human versus a robot nurse. A follow-up study then examines whether the nursing robot's appearance and cuteness affect the moral condemnation of forced medication. The nursing robots are examined on a cuteness scale; cuteness was chosen as a meaningful variable because a large share of the social robots used across different fields are markedly cute. Previous research indicates that attractive appearance and cuteness influence an observer's evaluations and decision-making. This thesis examines these effects of cuteness and appearance in the context of social robots. Methods. Two experimental datasets were collected for the thesis, one from the Helsinki metropolitan area and one through an online study, with 135 and 214 participants respectively. Participants read a short vignette that ended either with the nurse forcibly medicating the patient or with the nurse respecting the patient's own wishes. The decision of the story's agent (human nurse or robot) was assessed with a set of questions (how offensive or inhumane the decision was, whether the decision was right or necessary), rated on a graded Likert scale ('Strongly disagree' to 'Strongly agree'). The first study examined the forced-medication decision of a human versus a robot nurse in four randomized experimental conditions. The second study examined only the nursing robot; the vignette was accompanied by a picture of the robot's face, which varied along the scale 'not cute, moderately cute, cute'. The vignettes and robot variations formed six experimental conditions, randomized with respect to order and participant assignment. Results and conclusions. The first study found that moral judgments of robot and human nurses differ from each other, except when the nursing robot follows the patient's own wishes. In the second study, no direct effect of appearance on moral judgments was found, although some of the results were promising and of interest for further research.

    Moral psychology of nursing robots: Exploring the role of robots in dilemmas of patient autonomy

    Artificial intelligences (AIs) are widely used in tasks ranging from transportation to healthcare and the military, but it is not yet known how people prefer them to act in ethically difficult situations. In five studies (an anthropological field study, n = 30, and four experiments, total n = 2150), we presented people with vignettes in which a human or an advanced robot nurse is ordered by a doctor to forcefully medicate an unwilling patient. Participants were more accepting of a human nurse's than a robot nurse's forceful medication of the patient, and more accepting of (human or robot) nurses who respected patient autonomy than of those who followed the orders to forcefully medicate (Study 2). The findings were robust against the perceived competence of the robot (Study 3), moral luck (whether the patient lived or died afterwards; Study 4), and command chain effects (fully automated supervision or not; Study 5). Thus, people prefer robots capable of disobeying orders in favour of abstract moral principles such as valuing personal autonomy. Our studies fit into a new era of research in which moral psychological phenomena no longer reflect only interactions between people, but also between people and autonomous AIs.

    On Automating the Doctrine of Double Effect

    The doctrine of double effect (DDE) is a long-studied ethical principle that governs when actions that have both positive and negative effects are to be allowed. The goal in this paper is to automate DDE. We briefly present DDE, and use a first-order modal logic, the deontic cognitive event calculus, as our framework to formalize the doctrine. We present formalizations of increasingly stronger versions of the principle, including what is known as the doctrine of triple effect. We then use our framework to successfully simulate scenarios that have been used to test for the presence of the principle in human subjects. Our framework can be used in two different modes: one can use it to build DDE-compliant autonomous systems from scratch, or one can use it to verify that a given AI system is DDE-compliant by applying a DDE layer to an existing system or model. For the latter mode, the underlying AI system can be built using any architecture (planners, deep neural networks, Bayesian networks, knowledge-representation systems, or a hybrid); as long as the system exposes a few parameters in its model, such verification is possible. The role of the DDE layer here is akin to a (dynamic or static) software verifier that examines existing software modules. Finally, we end by presenting initial work on how one can apply our DDE layer to the STRIPS-style planning model and to a modified POMDP model. This is preliminary work to illustrate the feasibility of the second mode, and we hope that our initial sketches can be useful for other researchers in incorporating DDE in their own frameworks. Comment: 26th International Joint Conference on Artificial Intelligence 2017; Special Track on AI & Autonomy
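
    The paper formalizes DDE in the deontic cognitive event calculus; the sketch below is not that formalism but a much simpler, hypothetical illustration of the kind of check a DDE "layer" might run over an action model that exposes a few parameters: the act type must be permissible in itself, no harm may appear among the intended effects (as end or means), and the good effects must outweigh the harmful side effects. All class, field, and function names here are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """Toy action model exposing the few parameters a DDE check would need.
    Hypothetical illustration only; not the paper's event-calculus formalism."""
    name: str
    permissible_in_itself: bool                   # the act type is not itself forbidden
    intended_effects: set = field(default_factory=set)
    side_effects: set = field(default_factory=set)
    utility: dict = field(default_factory=dict)   # effect -> signed utility

def dde_compliant(action, harms):
    """Check the classical DDE conditions on the toy action model:
    1. the action itself is permissible;
    2. no harm is intended, neither as an end nor as a means;
    3. the good effects outweigh the harmful side effects (net utility > 0)."""
    if not action.permissible_in_itself:
        return False
    if action.intended_effects & harms:           # harm appears among intended effects
        return False
    net = sum(action.utility.get(e, 0) for e in action.intended_effects | action.side_effects)
    return net > 0

# Example: diverting a runaway trolley -- the death is foreseen, not intended.
divert = Action(
    name="divert_trolley",
    permissible_in_itself=True,
    intended_effects={"five_saved"},
    side_effects={"one_killed"},
    utility={"five_saved": 5, "one_killed": -1},
)
print(dde_compliant(divert, harms={"one_killed"}))  # True under this toy model
```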

    GOD ON TRIAL: ARE OUR MORAL JUDGMENTS DIFFERENT BASED ON WHETHER WE ARE JUDGING GOD OR HUMANS?

    Past work in moral psychology has demonstrated that individuals’ judgments of other humans in hypothetical moral scenarios can be influenced by variables such as intentionality, causality, and controllability. However, while empirical studies suggest that individuals similarly hold nonhuman agents such as robots morally accountable for their actions to the extent that they are perceived to possess humanlike attributes important for moral judgments, research is scant when God is introduced as a nonhuman agent. On the one hand, it is proposed that because people anthropomorphize God, our moral intuitions about humans and God tend to show similar effects; in this case, both humans and God should be morally blamed when they are perceived to have engaged in a moral transgression. On the other hand, opinion polls suggest that the public at large generally agrees that belief in God(s) is necessary for one to be moral; by extension, our moral intuitions about God and humans should diverge significantly. The two perspectives offer different predictions about how people morally judge God and humans. This study attempts to test both perspectives by examining whether moral judgments of God show patterns similar to moral judgments of a human (anthropomorphic perspective) or whether judgments are biased toward God even when an immoral deed has occurred (Divine Command perspective). A 2 (Target: human vs. God) x 2 (Morality of scenario: moral vs. immoral) x 3 (Scenario: sexual assault vs. robbery vs. murder) mixed design was used to examine both hypotheses. Exploratory variables (i.e., the Morality Founded on Divine Authority (MFDA) scale, religiosity, and gender) were also included to test for potential moderation effects. Initial results suggest that people’s moral intuitions about humans and God do diverge, and this effect was moderated only by the MFDA scale. Limitations, implications, and possible alternative explanations are discussed.

    Hazardous machinery: The assignment of agency and blame to robots versus non-autonomous machines

    Autonomous robots increasingly perform functions that are potentially hazardous and could cause injury to people (e.g., autonomous driving). When this happens, questions will arise regarding responsibility, although autonomy complicates this issue – insofar as robots seem to control their own behaviour, where would blame be assigned? Across three experiments, we examined whether robots involved in harm are assigned agency and, consequently, blamed. In Studies 1 and 2, people assigned more agency to machines involved in accidents when they were described as ‘autonomous robots’ (vs. ‘machines’), and in turn blamed them more, across a variety of contexts. In Study 2, robots and machines were assigned similar experience, and we found no evidence for a role of experience in blaming robots over machines. In Study 3, people assigned more agency and blame to a more (vs. less) sophisticated military robot involved in a civilian fatality. Humans who were responsible for the robots' safe operation, however, were blamed similarly whether the harm involved a robot (vs. a machine; Study 1), or a more (vs. less; Study 3) sophisticated robot. These findings suggest that people spontaneously conceptualise robots' autonomy via humanlike agency and, consequently, consider them blameworthy agents.
    • 

    corecore