
    The effect of embodiment and competence on trust and cooperation in human-agent interaction

    Kulms P, Kopp S. The effect of embodiment and competence on trust and cooperation in human-agent interaction. In: Intelligent Virtual Agents. 2016: 75-84.

    Would You Trust a (Faulty) Robot? Effects of Error, Task Type and Personality on Human-Robot Cooperation and Trust

    How do mistakes made by a robot affect its trustworthiness and acceptance in human-robot collaboration? We investigate how the perception of erroneous robot behavior may influence human interaction choices and the willingness to cooperate with the robot by following a number of its unusual requests. For this purpose, we conducted an experiment in which participants interacted with a home companion robot in one of two experimental conditions: (1) the correct mode or (2) the faulty mode. Our findings reveal that, while significantly affecting subjective perceptions of the robot and assessments of its reliability and trustworthiness, the robot's performance does not seem to substantially influence participants' decisions to (not) comply with its requests. However, our results further suggest that the nature of the task requested by the robot, e.g., whether its effects are revocable as opposed to irrevocable, has a significant impact on participants' willingness to follow its instructions.

    Robot Mindreading and the Problem of Trust

    This paper raises three questions regarding the attribution of beliefs, desires, and intentions to robots. The first one is whether humans in fact engage in robot mindreading. If they do, this raises a second question: does robot mindreading foster trust towards robots? Both of these questions are empirical, and I show that the available evidence is insufficient to answer them. Now, if we assume that the answer to both questions is affirmative, a third and more important question arises: should developers and engineers promote robot mindreading in view of their stated goal of enhancing transparency? My worry here is that by attempting to make robots more mind-readable, they are abandoning the project of understanding automatic decision processes. Features that enhance mind-readability are prone to make the factors that determine automatic decisions even more opaque than they already are. And current strategies to eliminate opacity do not enhance mind-readability. The last part of the paper discusses different ways to analyze this apparent trade-off and suggests that a possible solution must adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.

    Feel, Don't Think: Review of the Application of Neuroscience Methods for Conversational Agent Research

    Conversational agents (CAs) equipped with human-like features (e.g., name, avatar) have been reported to induce the perception of humanness and social presence in users, which can also increase other aspects of users’ affect, cognition, and behavior. However, current research is primarily based on self-reported measurements, leaving the door open for errors related to the self-serving bias, socially desirable responding, negativity bias, and others. In this context, applying neuroscience methods (e.g., EEG or MRI) could provide a means to supplement current research. However, it is unclear to what extent such methods have already been applied and what future directions for their application might be. Against this background, we conducted a comprehensive and transdisciplinary review. Based on our sample of 37 articles, we find an increased interest in the topic after 2017, with neural signals and trust/decision-making as upcoming areas of research, and five separate research clusters describing current research trends.

    Mr. and Mrs. Conversational Agent - Gender Stereotyping in Judge-Advisor Systems and the Role of Egocentric Bias

    Current technological advancements of conversational agents (CAs) promise new potential for human-computer collaboration. Yet, both practitioners and researchers face challenges in designing these information systems so that CAs increase not only in intelligence but also in effectiveness. Drawing on social response theory as well as literature on trust and judge-advisor systems, we examine the roles of gender stereotyping and egocentric bias in cooperative CAs. Specifically, in an online experiment with 87 participants, we investigate the effects of a CA’s gender and a user’s subjective knowledge in two stereotypically male knowledge fields. The results indicate (1) that female (vs. male) CAs and stereotypically female (vs. male) traits increase a user’s perceived competence of CAs and (2) that an increase in a user’s subjective knowledge decreases trusting intentions in CAs. Thus, our contributions provide new and counterintuitive insights that are crucial for the effectiveness of cooperative CAs.

    A Comparison of Avatar-, Video-, and Robot-Mediated Interaction on Users’ Trust in Expertise

    Communication technologies are becoming increasingly diverse in form and functionality. A central concern is the ability to detect whether others are trustworthy. Judgments of trustworthiness rely, in part, on assessments of non-verbal cues, which are affected by media representations. In this research, we compared trust formation across three media representations. We presented 24 participants with advisors represented by two of the three alternate formats: video, avatar, or robot. Unknown to the participants, one was an expert and the other a non-expert. We observed participants’ advice-seeking behavior under risk as an indicator of their trust in the advisor. We found that most participants preferred seeking advice from the expert, but we also found a tendency to seek robot or video advice; avatar advice, in contrast, was sought more rarely. Users’ self-reports support these findings. These results suggest that when users make trust assessments, the physical presence of the robot representation might compensate for the lack of identity cues.

    Comparing Robot and Human guided Personalization: Adaptive Exercise Robots are Perceived as more Competent and Trustworthy

    Schneider S, Kummert F. Comparing Robot and Human guided Personalization: Adaptive Exercise Robots are Perceived as more Competent and Trustworthy. International Journal of Social Robotics. 2020.
    Learning and matching a user's preference is an essential aspect of achieving a productive collaboration in long-term Human-Robot Interaction (HRI). However, there are different techniques for matching the behavior of a robot to a user's preference. The robot can be adaptable, so that a user can change the robot's behavior to their needs, or the robot can be adaptive, autonomously trying to match its behavior to the user's preference. Both types might decrease the gap between a user's preference and the actual system behavior, but the Level of Automation (LoA) differs between them: either the user controls the interaction, or the robot is in control. We present a study on the effects of different LoAs of a Socially Assistive Robot (SAR) on a user's evaluation of the system in an exercising scenario. We implemented an online preference learning system and a user-adaptable system. We conducted a between-subjects study (adaptable robot vs. adaptive robot) with 40 subjects and report our quantitative and qualitative results. The results show that users evaluate the adaptive robot as more competent and warm, and report a higher alliance with it. Moreover, this increased alliance is significantly mediated by the perceived competence of the system. This result provides empirical evidence for the relation between the LoA of a system, the user's perceived competence of the system, and the perceived alliance with it. Additionally, we provide proof-of-concept evidence that the chosen preference learning method, Double Thompson Sampling (DTS), is suitable for online HRI.
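
    The preference learner named in this abstract, Double Thompson Sampling, is a dueling-bandit algorithm: it keeps Beta posteriors over the pairwise win probabilities of candidate behaviors and draws two independent posterior samples, one to pick a candidate best arm and one to pick its strongest challenger. The Python sketch below only illustrates that idea; the class, the simplified first-arm selection step, and the exercise-variant usage are assumptions made for exposition, not the authors' implementation.

    import numpy as np

    class DoubleThompsonSampling:
        """Simplified Double Thompson Sampling for a dueling bandit."""

        def __init__(self, n_arms, seed=0):
            self.n = n_arms
            self.rng = np.random.default_rng(seed)
            # wins[i, j] counts duels in which arm i beat arm j
            self.wins = np.zeros((n_arms, n_arms))

        def _sample_pref_matrix(self):
            # One plausible pairwise-preference matrix, drawn from Beta
            # posteriors over the win probabilities.
            theta = np.full((self.n, self.n), 0.5)
            for i in range(self.n):
                for j in range(self.n):
                    if i != j:
                        theta[i, j] = self.rng.beta(self.wins[i, j] + 1,
                                                    self.wins[j, i] + 1)
            return theta

        def select_duel(self):
            # First sample: candidate winner = arm with the best average
            # sampled win probability (a simplification of DTS's Copeland step).
            first = int(np.argmax(self._sample_pref_matrix().mean(axis=1)))
            # Second, independent sample: strongest challenger to `first`.
            theta = self._sample_pref_matrix()
            theta[first, first] = -np.inf  # exclude a self-duel
            second = int(np.argmax(theta[:, first]))
            return first, second

        def update(self, winner, loser):
            self.wins[winner, loser] += 1

    # Hypothetical usage: the robot proposes two exercise variants and the
    # user's stated preference is recorded as the duel outcome.
    dts = DoubleThompsonSampling(n_arms=4)
    a, b = dts.select_duel()
    dts.update(winner=a, loser=b)  # the user preferred variant a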