19 research outputs found

    FurNav: Development and Preliminary Study of a Robot Direction Giver

    When giving directions to a lost-looking tourist, would you first reference the street names, cardinal directions, landmarks, or simply tell them to walk five hundred metres in one direction then turn left? Depending on the circumstances, one could reasonably make use of any of these direction-giving styles. However, research on direction giving with a robot does not often look at how these different direction styles impact perceptions of the robot's intelligence, nor does it take into account how users' prior dispositions may impact ratings. In this work, we look at generating natural language for two navigation styles using a system created for a Furhat robot, before measuring perceived intelligence and animacy alongside users' prior dispositions to robots in a small preliminary study (N=7). Our results confirm findings from previous work that prior negative attitudes towards robots correlate negatively with propensity to trust robots, and also suggest avenues for future research. For example, more data is needed to explore the link between perceived intelligence and direction style. We end by discussing our plan to run a larger-scale experiment, and how to improve our existing study design.
    Comment: Author Accepted Manuscript, 4 pages, LBR Track, RO-MAN'23, 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), August 2023, Busan, South Korea
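    The contrast between the two styles above (metric distance-and-turn instructions versus landmark-anchored instructions) can be illustrated with a small template-based generation sketch. This is purely illustrative: the route representation, field names, and templates are assumptions made here, not the FurNav system described in the paper.

```python
# Hypothetical sketch of template-based direction generation in two styles.
# The Segment representation and the templates are illustrative assumptions,
# not the system described in the FurNav paper.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Segment:
    distance_m: int                  # how far to travel along this segment
    turn: str                        # "left", "right", or "straight" at its end
    landmark: Optional[str] = None   # salient feature near the turn, if any

def metric_style(route: List[Segment]) -> str:
    """Distance-and-turn directions, e.g. 'walk 500 metres then turn left'."""
    steps = [f"walk {s.distance_m} metres then turn {s.turn}" for s in route]
    return ", ".join(steps) + "."

def landmark_style(route: List[Segment]) -> str:
    """Landmark-anchored directions, falling back to metric phrasing."""
    steps = []
    for s in route:
        if s.landmark:
            steps.append(f"go past the {s.landmark} and turn {s.turn}")
        else:
            steps.append(f"continue for {s.distance_m} metres and turn {s.turn}")
    return ", ".join(steps) + "."

route = [Segment(500, "left", landmark="fountain"), Segment(200, "right")]
print(metric_style(route))    # metric style
print(landmark_style(route))  # landmark style
```

    Keeping each style behind its own generation function is one simple way to vary only the wording of the directions while holding the underlying route constant, which is what a between-style comparison of perceived intelligence requires.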

    Working with troubles and failures in conversation between humans and robots

    In order to carry out human-robot collaborative tasks efficiently, robots have to be able to communicate with their human counterparts. In many applications, speech interfaces are deployed as a way to empower robots with the ability to communicate. Despite the progress made in speech recognition and (multi-modal) dialogue systems, such interfaces continue to be brittle in a number of ways, and the experience of such interfaces failing is commonplace amongst roboticists. Surprisingly, a rigorous and complete analysis of communicative failures is still missing, and the technical literature is skewed towards reporting the success and good performance of speech interfaces. In order to address this blind spot and investigate failures in conversations between humans and robots, an interdisciplinary effort is necessary. This workshop aims to raise awareness of said blind spot and provide a platform for discussing communicative troubles and failures in human-robot interactions and potentially related failures in non-robotic speech interfaces. We aim to bring together researchers studying communication in different fields, to start a scrupulous investigation into communicative failures, to begin working on a taxonomy of such failures, and to enable a preliminary discussion on possible mitigating strategies. This workshop intends to be a venue where participants can freely discuss the failures they have encountered and learn from them positively and constructively.

    Definition, conceptualisation and measurement of trust

    This report documents the program and the outcomes of Dagstuhl Seminar 21381 "Conversational Agent as Trustworthy Autonomous System (Trust-CA)". First, we present the abstracts of the talks delivered by the Seminar's attendees. Then we report on the origin and process of our six breakout (working) groups. For each group, we describe its contributors, goals and key questions, key insights, and future research. The themes of the groups were derived from a pre-Seminar survey, which also led to a list of suggested readings on the topic of trust in conversational agents. The list is included in this report for reference.

    Robot Broken Promise? Repair strategies for mitigating loss of trust for repeated failures

    Trust repair strategies are an important part of human-robot interaction. In this study, we investigate how repeated failures impact users' trust and how we might mitigate their effects. Specifically, we look at different repair strategies in the form of apologies, augmented with additional features such as warnings and promises. Through an online study, we explore these repair strategies for repeated failures in the form of robot incongruence, where there is a mismatch between the verbal and non-verbal information given by the robot. Our results show that such incongruent robot behaviour has a significant overall negative impact on participants' trust. We found that the robot making a promise, and then breaking it, results in a significant decrease in participants' trust when compared to a general apology as a repair strategy. These findings contribute to the research on trust repair strategies and, additionally, shed light on how robot failures, in the form of incongruences, impact participants' trust.

    A Meta-analysis of Vulnerability and Trust in Human-Robot Interaction

    In human-robot interaction studies, trust is often defined as a process whereby a trustor makes themselves vulnerable to a trustee. The role of vulnerability, however, is often overlooked in this process, even though it could play an important role in gaining and maintaining trust between users and robots. To better understand how vulnerability affects human-robot trust, we first reviewed the literature to create a conceptual model of vulnerability with four vulnerability categories. We then performed a meta-analysis, first checking the overall contribution of the included variables to trust. The results showed that, overall, the variables investigated in our sample of studies have a positive impact on trust. We then conducted two multilevel moderator analyses to assess the effect of vulnerability on trust, including: 1) an intercept model that considers the relationship between our vulnerability categories; and 2) a non-intercept model that treats each vulnerability category as an independent predictor. Only model 2 was significant, suggesting that to build trust effectively, research should focus on improving robot performance in situations where the user is unsure how reliable the robot will be. As our vulnerability variable is derived from studies of human-robot interaction and human-human studies of risk, we relate our findings to these domains and make suggestions for future research avenues.
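    The intercept versus non-intercept moderator comparison described above can be sketched, in simplified form, as two weighted meta-regressions. This is an illustrative sketch only: the effect sizes, sampling variances, and category names are invented, and plain inverse-variance weighted least squares stands in for the multilevel models used in the actual analysis.

```python
# Illustrative moderator (meta-regression) sketch: intercept vs. no-intercept models.
# All numbers and category labels below are toy values, not data from the paper.
import pandas as pd
import statsmodels.api as sm

# One effect size per study, its sampling variance, and a coded category.
data = pd.DataFrame({
    "effect":   [0.35, 0.10, 0.42, 0.05, 0.28, 0.15],
    "variance": [0.02, 0.03, 0.01, 0.04, 0.02, 0.03],
    "category": ["performance", "disclosure", "performance",
                 "physical", "disclosure", "physical"],
})
weights = 1.0 / data["variance"]                    # inverse-variance weights
dummies = pd.get_dummies(data["category"], dtype=float)

# Intercept model: category effects are estimated relative to a reference level.
X1 = sm.add_constant(dummies.drop(columns="disclosure"))
intercept_model = sm.WLS(data["effect"], X1, weights=weights).fit()

# No-intercept model: each category dummy gets its own pooled mean effect.
no_intercept_model = sm.WLS(data["effect"], dummies, weights=weights).fit()

print(intercept_model.params)
print(no_intercept_model.params)
```

    In the no-intercept parameterisation each coefficient is that category's pooled effect, whereas in the intercept model the coefficients are differences from the reference category, so the two parameterisations answer slightly different questions. A full multilevel analysis would additionally include random effects for studies contributing multiple effect sizes, which this sketch omits.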
