36 research outputs found
Anthropomorphizing Robots: The Effect of Framing in Human-Robot Collaboration
Anthropomorphic framing of social robots is widely believed to facilitate human-robot interaction. In two subsequent studies, the impact of anthropomorphic framing was examined regarding the subjective perception of a robot and the willingness to donate money for this robot. In both experiments, participants received either an anthropomorphic or a functional description of a humanoid NAO robot prior to a cooperative task. Afterwards, the robot’s perceived humanlikeness and the willingness to “save” the robot from malfunctioning (donation behavior) were assessed. Surprisingly, the first study revealed a negative effect of anthropomorphic framing on the willingness to donate. This negative effect disappeared if the robot’s functional value for task fulfillment was additionally made explicit (Study 2). In both studies, no effect of anthropomorphic framing on the humanlike perception of the robot was found. However, the behavioral results support the relevance of functional awareness in social human-robot interaction.
The Effect of Anthropomorphism and Failure Comprehensibility on Human-Robot Trust
The application of anthropomorphic features to robots is generally considered to be beneficial for human-robot interaction. Although previous research has mainly focused on social robots, the phenomenon is gaining increasing attention in industrial human-robot interaction as well. In this study, the impact of the anthropomorphic design of a collaborative industrial robot on the dynamics of trust is examined. Participants interacted with a robot that was either anthropomorphically or technically designed and experienced either a comprehensible or an incomprehensible fault of the robot. Unexpectedly, the robot was perceived as less reliable in the anthropomorphic condition. Additionally, trust increased after faultless experience and decreased after failure experience, independently of the type of error. Even though the manipulation of the design did not result in a different perception of the robot’s anthropomorphism, it still influenced the formation of trust. The results emphasize that anthropomorphism is no universal remedy for increasing trust, but highly context dependent.
Peer reviewed.
The Influence of Distance and Lateral Offset of Follow Me Robots on User Perception
Robots that are designed to work in close proximity to humans are required to move and act in a way that ensures social acceptance by their users. Hence, a robot's proximal behavior toward a human is a main concern, especially in human-robot interaction that relies on relatively close proximity. This study investigated how the distance and lateral offset of “Follow Me” robots influence how they are perceived by humans. To this end, a Follow Me robot was built and tested in a user study for a number of subjective variables. A total of 18 participants interacted with the robot, with the robot's lateral offset and distance varied in a within-subject design. After each interaction, participants were asked to rate the movement of the robot on the dimensions of comfort, expectancy conformity, human likeness, safety, trust, and unobtrusiveness. Results show that users generally prefer robot following distances in the social space, without a lateral offset. However, we found a main influence of affinity for technology: participants with a high affinity for technology preferred closer following distances than participants with low affinity for technology. The results of this study show the importance of user-adaptiveness in human-robot interaction.
DFG, 414044773, Open Access Publizieren 2019-2020 / Technische Universität Berlin.
Feeling with a robot—the role of anthropomorphism by design and the tendency to anthropomorphize in human-robot interaction
The implementation of anthropomorphic features in regard to appearance and framing is widely supposed to increase empathy towards robots. However, recent research has mainly used tasks that are rather atypical for daily human-robot interactions, such as sacrificing or destroying robots. The scope of the current study was to investigate the influence of anthropomorphism by design on empathy and empathic behavior in a more realistic, collaborative scenario. In this online experiment, participants collaborated either with an anthropomorphic or a technical-looking robot and received either an anthropomorphic or a technical description of the respective robot. After task completion, we investigated situational empathy by displaying a choice scenario in which participants needed to decide whether they wanted to act empathically towards the robot (sign a petition or a guestbook for the robot) or non-empathically (leave the experiment). Subsequently, the perception of and empathy towards the robot were assessed. The results revealed no significant influence of anthropomorphism on empathy or participants’ empathic behavior. However, an exploratory follow-up analysis indicates that the individual tendency to anthropomorphize might be crucial for empathy. This result strongly supports the importance of considering individual differences in human-robot interaction. Based on the exploratory analysis, we propose six items to be further investigated as an empathy questionnaire in HRI.
Determinants of Laypersons’ Trust in Medical Decision Aids: Randomized Controlled Trial
Background: Symptom checker apps are patient-facing decision support systems aimed at providing advice to laypersons on whether, where, and how to seek health care (disposition advice). Such advice can improve laypersons' self-assessment and ultimately improve medical outcomes. Past research has mainly focused on the accuracy of symptom checker apps' suggestions. To support decision-making, such apps need to provide not only accurate but also trustworthy advice. To date, only a few studies have addressed the question of the extent to which laypersons trust symptom checker app advice or the factors that moderate their trust. Studies on general decision support systems have shown that framing automated systems (anthropomorphically or by emphasizing expertise), for example, by using icons symbolizing artificial intelligence (AI), affects users' trust.
Objective: This study aims to identify the factors influencing laypersons' trust in the advice provided by symptom checker apps. Primarily, we investigated whether designs using anthropomorphic framing or framing the app as an AI increase users' trust compared with no such framing.
Methods: Through a web-based survey, we recruited 494 US residents with no professional medical training. The participants had to first appraise the urgency of a fictitious patient description (case vignette). Subsequently, a decision aid (mock symptom checker app) provided disposition advice contradicting the participants' appraisal, and they had to subsequently reappraise the vignette. Participants were randomized into 3 groups: 2 experimental groups using visual framing (anthropomorphic, 160/494, 32.4%, vs AI, 161/494, 32.6%) and a neutral group without such framing (173/494, 35%).
Results: Most participants (384/494, 77.7%) followed the decision aid's advice, regardless of its urgency level. Neither anthropomorphic framing (odds ratio 1.120, 95% CI 0.664-1.897) nor framing as AI (odds ratio 0.942, 95% CI 0.565-1.570) increased behavioral or subjective trust (P=.99) compared with the no-frame condition. Even participants who were extremely certain of their own decisions (ie, 100% certain) commonly changed them in favor of the symptom checker's advice (19/34, 56%). Propensity to trust and eHealth literacy were associated with increased subjective trust in the symptom checker (propensity to trust b=0.25; eHealth literacy b=0.2), whereas sociodemographic variables showed no such link with either subjective or behavioral trust.
Conclusions: Contrary to our expectation, neither the anthropomorphic framing nor the emphasis on AI increased trust in symptom checker advice compared with that of a neutral control condition. However, independent of the interface, most participants trusted the mock app's advice, even when they were very certain of their own assessment. Thus, the question arises as to whether laypersons use such symptom checkers as substitutes rather than as aids in their own decision-making. With trust in symptom checkers already high at baseline, the benefit of symptom checkers depends on interface designs that enable users to adequately calibrate their trust levels during usage.
Why context matters: The influence of application domain on preferred degree of anthropomorphism and gender attribution in human–robot interaction
The application of anthropomorphic design features is widely believed to facilitate human–robot interaction. However, the preference for robots’ anthropomorphism is highly context sensitive, as different application domains induce different expectations towards robots. In this study, the influence of application domain on the preferred degree of anthropomorphism is examined. Moreover, as anthropomorphic design can reinforce existing gender stereotypes of different work domains, gender associations were also investigated. Participants received different context descriptions and subsequently selected and named one robot out of a set of differently anthropomorphic robots in an online survey. The results indicate that lower degrees of anthropomorphism are preferred in the industrial domain and higher degrees of anthropomorphism in the social domain, whereas no clear preference was found in the service domain. Unexpectedly, mainly functional names were ascribed to the robots, and when human names were chosen, male names were given more frequently than female names, even in the social domain. The results support the assumption that the preferred degree of anthropomorphism depends on the context. Hence, the sociability of a domain might determine to what extent anthropomorphic design features are suitable. Furthermore, the results indicate that robots are overall associated more with functionality than with gender (and if gendered, then masculine). Therefore, the design features of robots should emphasize functionality rather than specific gendered anthropomorphic attributes, to avoid stereotypes and not further reinforce the association of masculinity with technology.
Neville, Percy, and York, 1461-1485: a study in the subordination of the North
Typescript. Digitized by Kansas Correctional Industries.