8 research outputs found

    Customer Responses to (Im)Moral Behavior of Service Robots - Online Experiments in a Retail Setting

    Service robots play an increasingly important role in the service sector. Drawing on moral psychology research, moral foundations theory, and the computers-as-social-actors (CASA) paradigm, this experimental study, comprising four online experiments, examines the extent to which the moral or immoral behavior of a service robot affects customer responses during a service interaction. This study contributes to design science by defining, conceptualizing, and operationalizing the morality of service robots and by developing a corresponding vignette as a basis for manipulating (im)moral robotic behavior in a retail setting. To investigate possible effects of the robot’s appearance, we tested our hypotheses with two different robots, i.e., a humanoid robot and an android robot. Results from the online experiments indicate that the (im)moral behavior of service robots at the customer interface has a significant effect on customers’ trust in and ethical concerns towards the robot.

    Can a robot lie?

    The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for non-beneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants which explores the following three questions: (i) Are ordinary people willing to ascribe intentions to deceive to artificial agents? (ii) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (iii) Do they blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently attract.

    Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany

    Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study based on Sparrow’s (2007) famous example of an autonomous weapon system committing a war crime, conducted with participants from the US, Japan, and Germany. We find that (i) people manifest a considerable willingness to hold autonomous systems morally responsible, (ii) they partially exculpate human agents who interact with such systems, and, more generally, (iii) the possibility of normative responsibility gaps is indeed at odds with people’s pronounced retributivist inclinations. We discuss what these results mean for potential implications of the retribution gap and for other positions in the responsibility gap literature.

    Moral Uncanny Valley revisited – how human expectations of robot morality based on robot appearance moderate the perceived morality of robot decisions in high conflict moral dilemmas

    In recent years a new sub-field of moral psychology has emerged: the moral psychology of AI and robotics. In this field there are several outstanding questions about how robot appearance and other perceived properties of robots influence the way their decisions are evaluated. Researchers have observed that robot decisions are not treated identically to human decisions, even when their antecedents and consequences are identical. To study this moral judgment asymmetry effect further, two studies with a series of high-conflict moral dilemmas were conducted. Study 1, which used photorealistic full-body imagery, revealed that utilitarian decisions by human or non-creepy (i.e., nice-looking) robotic agents were condemned less than those of “creepy” (i.e., unease-inducing) robots, whereas “creepy” robots received higher moral approval when making deontological decisions. Furthermore, an exploratory analysis demonstrated that the creepiest robot did not cause moral surprise or disappointment when making utilitarian decisions. However, Study 2 showed that mere symbolic representation of the agent’s face did not trigger the Moral Uncanny Valley (where decisions of creepy robots are perceived negatively), suggesting that the effect depends on the photorealistic appearance of the agent. These results are in tension with some previous findings in the robot moral judgment literature. Future research should focus on creating standardized stimuli for studying moral decisions involving robots and on elucidating the complex interactions between agent appearance, decision type, and pre-decision expectations. This work deepens our understanding of the relationship between a decision-making agent’s appearance and the moral judgment of its decisions. The findings have significant implications for the design and implementation of autonomous agents in morally charged situations.

    Research on the influence and mechanism of human–vehicle moral matching on trust in autonomous vehicles

    Introduction: Autonomous vehicles can have social attributes and make ethical decisions during driving. In this study, we investigated the impact of human-vehicle moral matching on trust in autonomous vehicles and its underlying mechanism. Methods: A 2×2 experiment involving 200 participants was conducted. Results: The data analysis shows that individuals with a utilitarian moral orientation place greater trust in autonomous vehicles than individuals with a deontological orientation. Perceived value and perceived risk play a double-edged role in people’s trust in autonomous vehicles: people’s moral type has a positive impact on trust through perceived value and a negative impact through perceived risk. Vehicle moral type moderates the impact of human moral type on trust through perceived value and perceived risk. Discussion: The conclusion shows that heterogeneous moral matching (utilitarian people paired with deontological vehicles) has a more positive effect on trust than homogeneous moral matching (people and vehicles both deontological or both utilitarian), which is consistent with the assumption that individuals have selfish preferences. The results of this study provide theoretical expansion for the fields of human-vehicle interaction and AI social attributes and offer exploratory suggestions for the functional design of autonomous vehicles.

    GOD ON TRIAL: ARE OUR MORAL JUDGMENTS DIFFERENT BASED ON WHETHER WE ARE JUDGING GOD OR HUMANS?

    Past work in moral psychology has demonstrated that individuals’ judgments of other humans in hypothetical moral scenarios can be influenced by variables such as intentionality, causality, and controllability. However, while empirical studies suggest that individuals similarly hold nonhuman agents such as robots morally accountable for their actions to the extent that they are perceived to possess humanlike attributes important for moral judgments, research is scant when God is introduced as a nonhuman agent. On the one hand, it has been proposed that because people anthropomorphize God, our moral intuitions about humans and God should show similar effects; in this case, both humans and God should be morally blamed when they are perceived to have engaged in a moral transgression. On the other hand, opinion polls suggest that the public at large generally agrees that belief in God(s) is necessary for one to be moral; by extension, our moral intuitions about God and humans should diverge significantly. The two perspectives thus offer different predictions about how people morally judge God and humans. This study tests both perspectives by examining whether moral judgments of God show patterns similar to moral judgments of a human (anthropomorphic perspective) or whether judgments are biased toward God even when an immoral deed has occurred (Divine Command perspective). A 2 (Target: human vs. God) x 2 (Morality of scenario: moral vs. immoral) x 3 (Scenario: sexual assault vs. robbery vs. murder) mixed-model design was used to examine both hypotheses. Exploratory variables (i.e., the Morality Founded on Divine Authority (MFDA) scale, religiosity, and gender) were also included to test for potential moderation effects. Initial results suggest that people’s moral intuitions about humans and God do diverge, and this effect was moderated only by the MFDA scale. Limitations, implications, and possible alternative explanations are discussed.

    “Sorry, It Was My Fault”: Repairing Trust in Human-Robot Interactions

    Robots have been playing an increasingly important role in human life, but their performance is still far from perfect. Based on extant literature in interpersonal, organizational, and human-machine communication, the current study develops a three-fold categorization of technical failures commonly observed in human-robot interactions from the interactants’ end (i.e., logic, semantic, and syntax failures), investigating it together with four trust repair strategies: internal-attribution apology, external-attribution apology, denial, and no repair. The 743 observations collected through an online experiment reveal nuances in participants’ perceived division between competence- and integrity-based trust violations, given the ontological differences between humans and machines. The findings also suggest that prior propositions about trust repair derived from attribution theory explain only part of the variance, alongside significant main effects of failure type and repair method on HRI-based trust.

    Ética 4.0: dilemas morais nos cuidados de saúde mediados por robôs sociais

    Artificial Intelligence and social robots in healthcare bring a new interdisciplinary field of research. In this study, we examined people’s moral judgments about a healthcare agent’s reaction to a patient who refuses a medication. For this purpose, we developed a moral dilemma that varied according to the type of healthcare agent (human vs. robot), decision (respect for autonomy vs. beneficence/non-maleficence), and argumentation (health benefit vs. health harm). We assessed the decision’s moral acceptability, the agent’s moral responsibility, and her traits of warmth, competence, and trustworthiness, as rated by 524 participants (350 women; 316 Brazilian, 179 Portuguese; 18-77 years old) randomized across 8 vignettes in a between-subjects design administered through an online survey. Moral acceptability judgments were higher for the decision to respect patient autonomy, with similar evidence for both agents. Moral responsibility and perceived warmth were higher for the human agent than for the robot, and there were no differences in the agents’ perceived competence and trustworthiness. Agents who respected autonomy were perceived as much warmer, with a larger effect size than for the other attributes, but as less competent and trustworthy than agents who decided in favor of beneficence/non-maleficence. Agents who prioritized beneficence/non-maleficence and argued about the health benefit were perceived as more trustworthy than in the other combinations of decision and argumentation. This research contributes to the understanding of moral judgments in the context of healthcare mediated by both human and artificial agents.