    De eerste receptie van Kants filosofie in Nederland [The First Reception of Kant's Philosophy in the Netherlands]

    Anthropomorphizing Robots: The Effect of Framing in Human-Robot Collaboration

    Anthropomorphic framing of social robots is widely believed to facilitate human-robot interaction. In two consecutive studies, the impact of anthropomorphic framing on the subjective perception of a robot and on the willingness to donate money for this robot was examined. In both experiments, participants received either an anthropomorphic or a functional description of a humanoid NAO robot prior to a cooperative task. Afterwards, the robot’s perceived humanlikeness and the willingness to “save” the robot from malfunctioning (donation behavior) were assessed. Surprisingly, the first study revealed a negative effect of anthropomorphic framing on the willingness to donate. This negative effect disappeared when the robot’s functional value for task fulfillment was additionally made explicit (Study 2). In both studies, no effect of anthropomorphic framing on the humanlike perception of the robot was found. However, the behavioral results support the relevance of functional awareness in social human-robot interaction.

    The Effect of Anthropomorphism and Failure Comprehensibility on Human-Robot Trust

    The application of anthropomorphic features to robots is generally considered beneficial for human-robot interaction. Although previous research has mainly focused on social robots, the phenomenon is gaining increasing attention in industrial human-robot interaction as well. In this study, the impact of the anthropomorphic design of a collaborative industrial robot on the dynamics of trust is examined. Participants interacted with a robot that was designed either anthropomorphically or technically, and experienced either a comprehensible or an incomprehensible robot fault. Unexpectedly, the robot was perceived as less reliable in the anthropomorphic condition. Additionally, trust increased after faultless experience and decreased after failure experience, independently of the type of error. Even though the manipulation of the design did not result in a different perception of the robot’s anthropomorphism, it still influenced the formation of trust. The results emphasize that anthropomorphism is no universal remedy for increasing trust, but is highly context-dependent.

    Human Performance Consequences of Automated Decision Aids in States of Sleep Loss

    This publication is freely accessible with the permission of the rights holder under an Alliance or National Licence funded by the DFG (German Research Foundation).
    Objective: The authors investigated how the human performance consequences of automated decision aids are affected by the degree of automation and the operator’s functional state. Background: As research has shown, decision aids may not only improve performance but also lead to new sorts of risks. Whereas knowledge exists about the impact of system characteristics (e.g., reliability) on human performance, little is known about how these performance consequences are moderated by the functional state of operators. Method: Participants performed a simulated supervisory process control task with one of two decision aids providing support for fault identification and management. One session took place during the day, and another took place during the night after a prolonged waking phase of more than 20 hours. Results: Results showed that decision aids can support humans effectively in maintaining high levels of performance, even in states of sleep loss, with more highly automated aids being more effective than less automated ones. Furthermore, participants suffering from sleep loss were found to be more careful in interaction with the aids, that is, less prone to effects of complacency and automation bias. However, costs arose as well, including a decline in secondary-task performance and an increased risk of return-to-manual performance decrements. Conclusion: Automation support can help protect performance after a period of extended wakefulness. In addition, operators suffering from sleep loss seem to compensate for their impaired functional state by reallocating resources and behaving more attentively toward possible automation failures. Application: Results of this research can inform the design of automation, especially decision aids.

    VR Investigation on Caregivers’ Tolerance towards Communication and Processing Failures

    This article was supported by the German Research Foundation (DFG) and the Open Access Publication Fund of Humboldt-Universität zu Berlin.
    Robots are increasingly used in healthcare to support caregivers in their daily work routines. To ensure effortless and easy interaction between caregivers and robots, robots are expected to communicate via natural language. However, robotic speech carries a large potential for technical failures, including processing and communication failures. It is therefore necessary to investigate how caregivers perceive and respond to robots with erroneous communication. We recruited thirty caregivers, who interacted with a robot in a virtual reality setting. We investigated whether different kinds of failures are more likely to be forgiven when accompanied by technical or human-like justifications. Furthermore, we determined how tolerant caregivers are of a robot that repeatedly returns a processing failure and whether this depends on the robot’s response pattern (constant vs. variable). Participants showed the same forgiveness towards the two justifications; however, females liked the human-like justification more and males liked the technical justification more. Providing justifications with any reasonable content seems sufficient to achieve positive effects. Robots with a constant response pattern were liked more, although both patterns reached the same tolerance threshold from caregivers, which was around seven failed requests. Due to the experimental setup, the tolerance for communication failures was probably increased and should be adjusted for real-life situations.

    Misuse of Automation: The Impact of System Experience on Complacency and Automation Bias in Interaction with Automated Aids

    This publication is freely accessible with the permission of the rights holder under an Alliance or National Licence funded by the DFG (German Research Foundation).
    The study investigates how complacency and automation bias effects in interaction with automated aids are moderated by system experience. Participants performed a supervisory control task supported by an aid for fault identification and management. Groups differed with respect to how long they worked with the aid until an automation failure eventually occurred, and whether this failure was the first or second one the participants were exposed to. Results show that negative experiences, i.e., automation failures, entail stronger effects on subjective trust in automation as well as on the level of complacency and automation bias than positive experiences (correct recommendations of the aid). Furthermore, results suggest that commission errors may be due to three different sorts of effects: (1) a withdrawal of attention in terms of incomplete cross-checks of information, (2) an active discounting of contradictory system information, and (3) an inattentive processing of contradictory information analogous to a “looking-but-not-seeing” effect.

    Human Performance Consequences of Automated Decision Aids: The Impact of Degree of Automation and System Experience

    This publication is freely accessible with the permission of the rights holder under an Alliance or National Licence funded by the DFG (German Research Foundation).
    Two experiments are reported that investigate to what extent the performance consequences of automated aids depend on the distribution of functions between human and automation and on the experience an operator has with an aid. In the first experiment, the performance consequences of three automated aids for the support of a supervisory control task were compared. The aids differed in degree of automation (DOA). Compared with a manual control condition, primary- and secondary-task performance improved and subjective workload decreased with automation support, with effects dependent on DOA. Performance costs included return-to-manual performance issues that emerged for the most highly automated aid, as well as effects of complacency and automation bias, which emerged independently of DOA. The second experiment specifically addresses how automation bias develops over time and how this development is affected by prior experience with the system. Results show that automation failures entail stronger effects than positive experience (a reliably working aid). Furthermore, results suggest that commission errors in interaction with automated aids can depend on three sorts of automation bias effects: (a) a withdrawal of attention in terms of incomplete cross-checking of information, (b) an active discounting of contradictory system information, and (c) an inattentive processing of contradictory information analogous to a “looking-but-not-seeing” effect.