
    User Study Exploring the Role of Explanation of Failures by Robots in Human Robot Collaboration Tasks

    Despite great advances in what robots can do, they still experience failures in human-robot collaborative tasks due to the high randomness of unstructured human environments. Moreover, a human's unfamiliarity with a robot and its abilities can cause such failures to repeat. This makes the ability to explain failures very important for a robot. In this work, we describe a user study that incorporated different robotic failures in a human-robot collaboration (HRC) task aimed at filling a shelf. We included different types of failures, and repeated occurrences of such failures, in a prolonged interaction between humans and robots. The failure resolution involved human intervention in the form of human-robot bidirectional handovers. Through such studies, we aim to test different explanation types and explanation progression in the interaction and record human responses.
    Comment: Contributed to "The Imperfectly Relatable Robot: An interdisciplinary workshop on the role of failure in HRI", ACM/IEEE International Conference on Human-Robot Interaction (HRI 2023). Video can be found at: https://sites.google.com/view/hri-failure-ws/teaser-video

    Evaluating the effect of theory of mind on people’s trust in a faulty robot

    The success of human-robot interaction is strongly affected by people's ability to infer others' intentions and behaviours, and by the level of people's trust that others will abide by the same principles and social conventions to achieve a common goal. The ability to understand and reason about other agents' mental states is known as Theory of Mind (ToM). ToM and trust, therefore, are key factors in the positive outcome of human-robot interaction. We believe that a robot endowed with a ToM is able to gain people's trust, even when it may occasionally make errors. In this work, we present a user study in the field in which participants (N=123) interacted with a robot that may or may not have had a ToM, and may or may not have exhibited erroneous behaviour. Our findings indicate that a robot with a ToM is perceived as more reliable, and that participants trusted it more than a robot without a ToM even when it made errors. Finally, ToM proves to be a key driver for tuning people's trust in the robot even when the initial conditions of the interaction changed (i.e., loss and regain of trust in a longer relationship).
    This work has been supported by Italian PON R&I 2014-2020 - REACT-EU (CUP E65F21002920003); by the European Union's Horizon 2020 under ERC Advanced Grant CLOTHILDE (no. 741930); by MCIN/AEI/10.13039/501100011033 and by the "European Union NextGenerationEU/PRTR" under the project ROBIN (PLEC2021-007859) and the project COHERENT (PCI2020-120718-2); by the "European Union NextGenerationEU/PRTR" through CSIC's Thematic Platforms (PTI+ Neuro-Aging); and by the European Union's Horizon 2020 research and innovation programme PRO-CARED (no. 801342) under a Marie SkƂodowska-Curie grant agreement (Tecniospring INDUSTRY) and the Government of Catalonia's Agency for Business Competitiveness (ACCIO).

    Human-agent trust relationships in a real-time collaborative game

    Collaborative virtual agents are often deployed to help users make decisions in real time. For this collaboration to work, users must adequately trust the agents they are interacting with. In my research, I use a game in which human-agent interactions are recorded via a logging system and survey instruments in order to explore this trust relationship. I then study the impact that different agents have on reliance, performance, cognitive load, and trust. I seek to understand which aspects of an agent most influence the development of trust. With this research, I hope to pave the way for trust-aware agents, capable of adapting their behaviours to users in real time.

    Friendly but Faulty: A Pilot Study on the Perceived Trust of Older Adults in a Social Robot

    Efforts to promote ageing-in-place of healthy older adults via cybernetic support are fundamental to avoiding the possible consequences associated with relocation to care facilities, including the loss of social ties and autonomy and feelings of loneliness. This requires an understanding of the key factors that affect the involvement of robots in eldercare and older adults' willingness to embrace robots' domestic use. Trust is argued to be the main foundation of an effective adult-care provider, which might be even more significant when such providers are robots. Establishing and maintaining trust usually involves two main dimensions: 1) the robot's reliability (i.e., performance) and 2) the robot's intrinsic attributes, including its degree of anthropomorphism and benevolence. We conducted a pilot study using a mixed-methods approach to explore the extent to which these dimensions and their interaction influenced older adults' trust in a humanoid social robot. Using two independent variables, type of attitude (warm, cold) and type of conduct (error, no-error), we aimed to investigate whether the older adult participants would trust a purposefully faulty robot more when it exhibited warm behaviour enhanced with non-functional touch than one that did not, and in what way the robot's error affected trust. Lastly, we also investigated the relationship between trust and a proxy variable of actual use of robots (i.e., intention to use robots at home). Given the volatile and context-dependent nature of trust, our close-to-real-world scenario of elder-robot interaction involved the administration of health supplements, in which the severity of a robot error might have greater implications for perceived trust.

    “Sorry, It Was My Fault”: Repairing Trust in Human-Robot Interactions

    Robots have been playing an increasingly important role in human life, but their performance is still far from perfect. Based on extant literature in interpersonal, organizational, and human-machine communication, the current study develops a three-fold categorization of technical failures (i.e., logic, semantic, and syntax failures) commonly observed in human-robot interactions from the interactants' end, investigating it together with four trust repair strategies: internal-attribution apology, external-attribution apology, denial, and no repair. The 743 observations collected through an online experiment reveal some nuances in participants' perceived division between competence- and integrity-based trust violations, given the ontological differences between humans and machines. The findings also suggest that prior propositions about trust repair from the perspective of attribution theory explain only part of the variance, in addition to some significant main effects of failure types and repair methods on HRI-based trust.

    Development of a Social Robot as a Mediator for Intergenerational Gameplay & Development of a Canvas for the Conceptualisation of HRI Game Design

    Intergenerational interaction between grandparents and grandchildren benefits both generations. The use of a social robot to mediate this interaction is a relatively unexplored area of research. Human-Robot Interaction (HRI) research often uses the robot as the point of focus; this thesis puts the focus on the interaction between the generations, using a multi-stage study with a robot mediating the interaction in dyads of grandparents and grandchildren. The research questions guiding this thesis are: 1) How might a robot-mediated game be used to foster intergenerational gameplay? 2) What template can be created to conceptually describe HRI game systems? To answer the first question, the study design includes three stages: 1) a human-mediator stage (exploratory); 2) a Wizard-of-Oz (WoZ) stage, in which a researcher remotely controls the robot; 3) a fully/semi-autonomous stage. A Tangram puzzle game was used to create an enjoyable, collaborative experience. Stage 1 of the study was conducted with four dyads of grandparents (52-74 years of age) and their grandchildren (7-9 years of age). The purpose of Stage 1 was to determine the following: 1) How do grandparent-grandchild dyads perceive their collaboration in the Tangram game? 2) What role do the dyads envision for a social robot in the game? Results showed the dyads perceived high collaboration in the Tangram game and saw the role of the robot as helping them by providing clues during gameplay. The research team felt the game, in conjunction with the proposed setup, worked well for supporting collaboration and decided to use the same game with a similar setup for the next two stages. Although the design and development of the next stage were ready, the COVID-19 pandemic led to the suspension of in-person research. The second part of this thesis research focused on creating the Human-Robot Interaction Game Canvas (HRIGC), a novel way to conceptually model HRI game systems.
    A literature search for systematic ways to capture information to assist in the design of the multi-stage study yielded no appropriate tool, prompting the creation of the HRIGC. The goal of the HRIGC is to help researchers think about, identify, and explore various aspects of designing an HRI game-based system. During the development process, the HRIGC was put through three case studies and two test runs: 1) test run 1 with three researchers in HRI game design; 2) test run 2 with four Human-Computer Interaction (HCI) researchers of different backgrounds. The case studies and test runs showed the HRIGC to be a promising tool for articulating the key aspects of HRI game design in an intuitive manner. Formal validation of the canvas is necessary to confirm this.