53 research outputs found

    Anthropomorphizing Robots: The Effect of Framing in Human-Robot Collaboration

    Anthropomorphic framing of social robots is widely believed to facilitate human-robot interaction. In two consecutive studies, the impact of anthropomorphic framing was examined with regard to the subjective perception of a robot and the willingness to donate money for this robot. In both experiments, participants received either an anthropomorphic or a functional description of a humanoid NAO robot prior to a cooperative task. Afterwards, the robot’s perceived humanlikeness and the willingness to “save” the robot from malfunctioning (donation behavior) were assessed. Surprisingly, the first study revealed a negative effect of anthropomorphic framing on the willingness to donate. This negative effect disappeared when the robot’s functional value for task fulfillment was additionally made explicit (Study 2). In both studies, no effect of anthropomorphic framing on the humanlike perception of the robot was found. However, the behavioral results support the relevance of functional awareness in social human-robot interaction.

    The Effect of Anthropomorphism and Failure Comprehensibility on Human-Robot Trust

    The application of anthropomorphic features to robots is generally considered to be beneficial for human-robot interaction. Although previous research has mainly focused on social robots, the phenomenon is gaining increasing attention in industrial human-robot interaction as well. In this study, the impact of the anthropomorphic design of a collaborative industrial robot on the dynamics of trust is examined. Participants interacted with a robot that was either anthropomorphically or technically designed and experienced either a comprehensible or an incomprehensible fault of the robot. Unexpectedly, the robot was perceived as less reliable in the anthropomorphic condition. Additionally, trust increased after faultless experience and decreased after failure experience, independent of the type of error. Even though the manipulation of the design did not result in a different perception of the robot’s anthropomorphism, it still influenced the formation of trust. The results emphasize that anthropomorphism is not a universal remedy for increasing trust but is highly context dependent.

    The effect of risk on trust attitude and trust behavior in interaction with information and decision automation

    Situational risk has been postulated to be one of the most important contextual factors affecting an operator’s trust in automation. Experimentally, however, it has received little attention and has been directly manipulated even less. To close this gap, this study used a virtual reality multi-task environment in which the main task entailed making a diagnosis by assessing different parameters. Risk was manipulated via the altitude at which the task was set, including the possibility of virtually falling in case of a mistake. Participants were aided either by information or by decision automation. Results revealed that trust attitude toward the automation was not affected by risk. While trust attitude was initially lower for the decision automation, it was equally high in both groups at the end of the experiment, after experiencing reliable support. Trust behavior was significantly higher and increased during the experiment for the group supported by decision automation, in the form of less automation verification behavior. However, this detrimental effect was distinctly attenuated under high risk. This implies that the negative consequences of decision automation in the real world might have been overestimated by studies not incorporating risk.
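
    The trust-behavior measure above is operationalized through verification behavior: the less operators cross-check the automation’s recommendations, the more they behaviorally rely on it. The following is a minimal sketch of how such a score could be computed from trial logs; the field names are our own assumptions, as the study’s actual logging format is not given.

```python
# Sketch of a behavioral trust score based on verification behavior.
# Field names (block, verified) are hypothetical, not the study's format.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Trial:
    block: int      # experimental block, to track reliance over time
    verified: bool  # did the operator cross-check the automation's output?

def behavioral_trust(trials: list[Trial]) -> float:
    """Trust behavior as 1 - verification rate: fewer checks = more reliance."""
    return 1.0 - mean(1.0 if t.verified else 0.0 for t in trials)

def trust_by_block(trials: list[Trial]) -> dict[int, float]:
    """Per-block scores, e.g. to see reliance grow during the experiment."""
    blocks = sorted({t.block for t in trials})
    return {b: behavioral_trust([t for t in trials if t.block == b]) for b in blocks}

# An operator who verifies less in later blocks shows rising trust behavior:
log = [Trial(1, True), Trial(1, True), Trial(2, True),
       Trial(2, False), Trial(3, False), Trial(3, False)]
print(trust_by_block(log))  # {1: 0.0, 2: 0.5, 3: 1.0}
```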

    A New Experimental Paradigm to Manipulate Risk in Human-Automation Research

    Objective: Two studies serve as a manipulation check of a new experimental multi-task paradigm that can be applied to human-automation research (Virtual Reality Testbed for Risk and Automation Studies; ViRTRAS), in which a subjectively experienceable risk can be manipulated as part of a virtual reality environment. Background: Risk has been postulated as an important contextual factor affecting human-automation interaction. However, experimental evidence is scarce due to the difficulty of operationalizing risk in an ethical way. In the new paradigm, risk is varied by the altitude at which participants carry out the task, including the possibility of virtually falling in case of a mistake. Method: Key components of the paradigm were used to investigate participants’ risk perception at a low (0.5 m) and a high (70 m) altitude, using subjective self-reports and objective behavioral measures. Results: In the high-altitude condition, risk perception was significantly higher, with medium to large effect sizes. In addition, results of the behavioral measures reveal that participants habituated with length of exposure; however, this habituation occurred similarly in both altitude conditions. Conclusion: The manipulation checks were successful. The new paradigm is a promising tool for automation research: it incorporates the contextual factor of risk and creates a situation that is more comparable to what real-life operators experience, while meeting the same requirements as other multi-task environments in human-automation research. Application: The new paradigm provides the basis for varying the contextual factor of risk in human-automation research, which has previously been either neglected or operationalized in an arguably inferior way.
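
    To make the effect-size language of the Results concrete: the manipulation check amounts to comparing risk-perception ratings between the 0.5 m and 70 m conditions, typically via Cohen’s d. Below is a self-contained sketch with made-up ratings (not the study’s data).

```python
# Cohen's d with pooled standard deviation for two independent groups.
# The ratings below are invented for illustration only.
from statistics import mean, stdev

def cohens_d(low: list[float], high: list[float]) -> float:
    n1, n2 = len(low), len(high)
    s_pooled = (((n1 - 1) * stdev(low) ** 2 + (n2 - 1) * stdev(high) ** 2)
                / (n1 + n2 - 2)) ** 0.5
    return (mean(high) - mean(low)) / s_pooled

low_alt = [2, 3, 4, 2, 3]    # hypothetical risk ratings at 0.5 m
high_alt = [3, 4, 4, 5, 3]   # hypothetical risk ratings at 70 m
print(round(cohens_d(low_alt, high_alt), 2))  # 1.2, a large effect by convention
```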

    VR Investigation on Caregivers’ Tolerance towards Communication and Processing Failures

    Robots are increasingly used in healthcare to support caregivers in their daily work routines. To ensure an effortless and easy interaction between caregivers and robots, robots are expected to communicate via natural language. However, robotic speech bears a large potential for technical failures, including processing and communication failures. It is therefore necessary to investigate how caregivers perceive and respond to robots with erroneous communication. We recruited thirty caregivers, who interacted with a robot in a virtual reality setting. We investigated whether different kinds of failures are more likely to be forgiven when given technical or human-like justifications. Furthermore, we determined how tolerant caregivers are of a robot constantly returning a processing failure, and whether this depends on the robot’s response pattern (constant vs. variable). Participants were equally forgiving of the two justifications; however, females liked the human-like justification more and males liked the technical one more. Providing justifications with any reasonable content seems sufficient to achieve positive effects. Robots with a constant response pattern were liked more, although both patterns reached the same tolerance threshold from caregivers, which was around seven failed requests. Due to the experimental setup, the tolerance for communication failures was probably inflated and should be adjusted for real-life situations.
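
    The tolerance threshold of around seven failed requests suggests a straightforward behavioral operationalization: count how often a participant re-issues a request to the failing robot before abandoning the interaction. A sketch under that assumption, with an invented event-log format:

```python
# Counts failed requests issued before the user gives up.
# The event names are hypothetical; the paper's logging format is not given.
def tolerance_threshold(events: list[str]) -> int:
    failed = 0
    for event in events:
        if event == "request_failed":
            failed += 1
        elif event == "gave_up":
            break  # participant abandoned the interaction
    return failed

session = ["request_failed"] * 7 + ["gave_up"]
print(tolerance_threshold(session))  # 7, the threshold reported above
```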

    Human Performance Consequences of Automated Decision Aids in States of Sleep Loss

    Objective: The authors investigated how the human performance consequences of automated decision aids are affected by the degree of automation and the operator’s functional state. Background: As research has shown, decision aids may not only improve performance but also lead to new sorts of risks. Whereas knowledge exists about the impact of system characteristics (e.g., reliability) on human performance, little is known about how these performance consequences are moderated by the functional state of operators. Method: Participants performed a simulated supervisory process control task with one of two decision aids providing support for fault identification and management. One session took place during the day, and another took place during the night after a prolonged waking phase of more than 20 hr. Results: Results showed that decision aids can support humans effectively in maintaining high levels of performance, even in states of sleep loss, with more highly automated aids being more effective than less automated ones. Furthermore, participants suffering from sleep loss were found to be more careful in interaction with the aids, that is, less prone to effects of complacency and automation bias. However, cost effects arose, including a decline in secondary-task performance and an increased risk of return-to-manual performance decrements. Conclusion: Automation support can help protect performance after a period of extended wakefulness. In addition, operators suffering from sleep loss seem to compensate for their impaired functional state by reallocating resources and showing more attentive behavior toward possible automation failures. Application: Results of this research can inform the design of automation, especially decision aids.

    Misuse of Automation: The Impact of System Experience on Complacency and Automation Bias in Interaction with Automated Aids

    The study investigates how complacency and automation bias effects in interaction with automated aids are moderated by system experience. Participants performed a supervisory control task supported by an aid for fault identification and management. Groups differed with respect to how long they worked with the aid until an automation failure eventually occurred, and whether this failure was the first or the second one the participants were exposed to. Results show that negative experiences, i.e., automation failures, entail stronger effects on subjective trust in automation, as well as on the level of complacency and automation bias, than positive experiences (correct recommendations of the aid). Furthermore, results suggest that commission errors may be due to three different sorts of effects: (1) a withdrawal of attention in terms of incomplete cross-checks of information, (2) an active discounting of contradictory system information, and (3) an inattentive processing of contradictory information, analogous to a “looking-but-not-seeing” effect.
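
    Complacency and automation bias are conventionally scored via two error types: omission errors (missing a fault the aid failed to flag) and commission errors (following a wrong recommendation). The sketch below classifies hypothetical trial records accordingly; it illustrates the definitions, not the authors’ analysis pipeline.

```python
# Classifies automation-bias error types from a single trial.
# Field names are hypothetical illustrations, not the study's data format.
from dataclasses import dataclass

@dataclass
class AidTrial:
    aid_flagged_fault: bool  # the aid indicated a fault
    fault_present: bool      # ground truth of the system state
    followed_aid: bool       # the operator went along with the aid

def error_type(trial: AidTrial) -> str:
    if trial.followed_aid and trial.aid_flagged_fault and not trial.fault_present:
        return "commission"  # acted on a false alarm
    if trial.followed_aid and not trial.aid_flagged_fault and trial.fault_present:
        return "omission"    # missed a fault the aid also missed
    return "none"

print(error_type(AidTrial(aid_flagged_fault=True,
                          fault_present=False,
                          followed_aid=True)))  # commission
```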

    Humans Can’t Resist Robot Eyes – Reflexive Cueing With Pseudo-Social Stimuli

    Joint attention is a key mechanism for humans to coordinate their social behavior. Whether and how this mechanism can benefit the interaction with pseudo-social partners such as robots is not well understood. To investigate the potential use of robot eyes as pseudo-social cues that ease attentional shifts, we conducted an online study using a modified spatial cueing paradigm. The cue was either a non-social (arrow), a pseudo-social (two versions of an abstract robot eye), or a social stimulus (photographed human eyes) that was presented either paired (e.g., two eyes) or single (e.g., one eye). The latter was varied to separate two assumed triggers of joint attention: the social nature of the stimulus, and the additional spatial information that is conveyed only by paired stimuli. Results support the assumption that pseudo-social stimuli, in our case abstract robot eyes, have the potential to facilitate human-robot interaction, as they trigger reflexive cueing. To our surprise, actual social cues did not evoke reflexive shifts in attention. We suspect that the robot eyes elicited the desired effects because they were human-like enough while at the same time being much easier to perceive than human eyes, due to a design with strong contrasts and clean lines. Moreover, results indicate that for reflexive cueing it does not seem to make a difference whether the stimulus is presented single or paired. This might be a first indicator that joint attention depends more on the stimulus’s social nature or familiarity than on its spatial expressiveness. Overall, the study suggests that using paired abstract robot eyes might be a good design practice for fostering a positive perception of a robot and for facilitating joint attention as a precursor of coordinated behavior.
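
    In a spatial cueing paradigm, reflexive cueing shows up as faster responses to targets at the cued location; the effect is simply the mean reaction-time difference between invalidly and validly cued trials. A minimal sketch with invented numbers (the study’s trial format is not given):

```python
# Cueing effect: mean RT on invalid trials minus mean RT on valid trials.
# Trial records are invented for illustration.
from statistics import mean

def cueing_effect_ms(trials: list[dict]) -> float:
    valid = [t["rt_ms"] for t in trials if t["cue_valid"]]
    invalid = [t["rt_ms"] for t in trials if not t["cue_valid"]]
    return mean(invalid) - mean(valid)

# Hypothetical data for one cue type (e.g. the abstract robot eye):
robot_eye = [
    {"cue_valid": True, "rt_ms": 312}, {"cue_valid": True, "rt_ms": 298},
    {"cue_valid": False, "rt_ms": 341}, {"cue_valid": False, "rt_ms": 355},
]
print(cueing_effect_ms(robot_eye))  # 43.0 ms: a positive, reflexive cueing effect
```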

    Human Performance Consequences of Automated Decision Aids: The Impact of Degree of Automation and System Experience

    Two experiments are reported that investigate to what extent the performance consequences of automated aids depend on the distribution of functions between human and automation and on the experience an operator has with an aid. In the first experiment, the performance consequences of three automated aids for the support of a supervisory control task were compared. The aids differed in degree of automation (DOA). Compared with a manual control condition, primary- and secondary-task performance improved and subjective workload decreased with automation support, with effects dependent on DOA. Performance costs included return-to-manual issues that emerged for the most highly automated aid, as well as effects of complacency and automation bias that emerged independent of DOA. The second experiment specifically addresses how automation bias develops over time and how this development is affected by prior experience with the system. Results show that automation failures entail stronger effects than positive experience (a reliably working aid). Furthermore, results suggest that commission errors in interaction with automated aids can depend on three sorts of automation bias effects: (a) withdrawal of attention in terms of incomplete cross-checking of information, (b) active discounting of contradictory system information, and (c) inattentive processing of contradictory information, analogous to a “looking-but-not-seeing” effect.
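
    The degree-of-automation variable manipulated here is commonly ordered along stages of information processing, as in the widely used Parasuraman, Sheridan, and Wickens taxonomy; the abstract does not spell out the exact levels implemented, so the sketch below is our own framing of that ordering and of why return-to-manual issues concentrate at the high end.

```python
# A common ordering of degrees of automation (our assumption; the exact
# levels used in the experiments are not specified in the abstract).
from enum import IntEnum

class DegreeOfAutomation(IntEnum):
    INFORMATION_ACQUISITION = 1  # aid filters or highlights raw data
    INFORMATION_ANALYSIS = 2     # aid integrates data, e.g. flags a likely fault
    DECISION_SELECTION = 3       # aid recommends a specific action
    ACTION_IMPLEMENTATION = 4    # aid executes the action itself

def operator_stays_in_loop(doa: DegreeOfAutomation) -> bool:
    """Below action implementation the human still executes the response;
    the higher the DOA, the greater the risk of return-to-manual decrements."""
    return doa < DegreeOfAutomation.ACTION_IMPLEMENTATION

print(operator_stays_in_loop(DegreeOfAutomation.DECISION_SELECTION))  # True
```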