    The impact of peoples' personal dispositions and personalities on their trust of robots in an emergency scenario

    Humans should be able to trust that they can safely interact with their home companion robot. However, robots can exhibit occasional mechanical, programming or functional errors. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. First, we investigated human users' perceptions of the severity of various categories of potential errors that are likely to be exhibited by a domestic robot. Second, we used an interactive storyboard to evaluate participants' degree of trust in the robot after it performed tasks either correctly, or with 'small' or 'big' errors. Finally, we analysed the correlations between participants' responses regarding their personality, their predisposition to trust other humans, their perceptions of robots, and their interaction with the robot. We conclude that there is a correlation between the magnitude of an error performed by a robot and the corresponding loss of trust by the human towards the robot. Moreover, we observed that some traits of participants' personalities (conscientiousness and agreeableness) and their disposition to trust other humans (benevolence) significantly increased their tendency to trust the robot during an emergency scenario.

    A matter of consequences: Understanding the effects of robot errors on people's trust in HRI

    © John Benjamins Publishing Company. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1075/is.21025.ros
    On reviewing the literature regarding acceptance and trust in human-robot interaction (HRI), there are a number of open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the successful deployment of robots in the wild. These areas are focused on: (1) the robot's abilities and limitations, in particular when it makes errors with consequences of different severity, (2) individual differences, (3) the dynamics of human-robot trust, and (4) the interaction between humans and robots over time. In this paper, we present two very similar studies, one with a virtual robot with human-like abilities, and one with a Care-O-bot 4 robot. In the first study, we created an immersive narrative using an interactive storyboard to collect responses from 154 participants. In the second study, 6 participants had repeated interactions over three weeks with a physical robot. We summarise and discuss the findings of our investigations of the effects of robots' errors on people's trust in robots, with the aim of designing mechanisms that allow robots to recover from a breach of trust. In particular, we observed that robots' errors had a greater impact on people's trust in the robot when the errors were made at the beginning of the interaction and had severe consequences. Our results also provide insights into how the effects of these errors vary according to individuals' personalities, expectations and previous experiences.

    Investigating Human Perceptions of Trust and Social Cues in Robots for Safe Human-Robot Interaction in Human-oriented Environments

    As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. This thesis envisages a future where autonomous robots could be used as home companions to assist and collaborate with their human partners in unstructured environments, without the support of any roboticist or expert. To realise such a vision, it is important to identify the factors (e.g. trust, participants' personalities and backgrounds) that influence people to accept robots as companions and to trust the robots to look after their well-being. I am particularly interested in the possibility of robots using social behaviours and natural communication as a repair mechanism to positively influence humans' sense of trust and companionship towards the robots, especially since trust can change over time due to different factors (e.g. perceived erroneous robot behaviours). In this thesis, I provide guidelines for a robot to regain human trust by adopting certain human-like behaviours.
    Domestic robots can be expected to exhibit occasional mechanical, programming or functional errors, as occurs with any other electrical consumer device. For example, these might include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigational skills due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to have acceptable interactive behaviour when exhibiting and recovering from an error situation. In this context, several open questions need to be addressed regarding both individuals' perceptions of the errors and robots, and the effects of these on people's trust in robots.
    As a first step, I investigated how the severity of the consequences and the timing of a robot's different types of erroneous behaviours during an interaction may have different impacts on users' attitudes towards a domestic robot. I concluded that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of trust of the human in the robot. In particular, people's trust was strongly affected by robot errors that had severe consequences. This led me to investigate whether people's awareness of robots' functionalities may affect their trust in a robot. I found that people's acceptance and trust in the robot may be affected by their knowledge of the robot's capabilities and its limitations differently according to the participants' age and the robot's embodiment.
    In order to deploy robots in the wild, strategies for mitigating and regaining people's trust in robots in case of errors need to be implemented. In the following three studies, I assessed whether a robot with awareness of human social conventions would increase people's trust in the robot. My findings showed that people almost blindly trusted both a social and a non-social robot in scenarios with non-severe error consequences. In contrast, people who interacted with a social robot did not trust its suggestions in a scenario with a higher-risk outcome. Finally, I investigated the effects of robots' errors on people's trust in a robot over time. The findings showed that participants' judgement of a robot is formed during the first stage of their interaction; therefore, people are more inclined to lose trust in a robot if it makes big errors at the beginning of the interaction.
    The findings from the Human-Robot Interaction experiments presented in this thesis will contribute to an advanced understanding of the trust dynamics between humans and robots for a long-lasting and successful collaboration.

    Evaluating people's perceptions of trust in a robot in a repeated interactions study

    Funding: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 642667 (Safety Enables Cooperation in Uncertain Robotic Environments - SECURE). KD acknowledges funding from the Canada 150 Research Chairs Program. © 2020, Springer Nature Switzerland AG. This is a post-peer-review, pre-copyedit version of an article published as: Rossi A., Dautenhahn K., Koay K.L., Walters M.L., Holthaus P. (2020) Evaluating People's Perceptions of Trust in a Robot in a Repeated Interactions Study. In: Wagner A.R. et al. (eds) Social Robotics. ICSR 2020. Lecture Notes in Computer Science, vol 12483. Springer, Cham. https://doi.org/10.1007/978-3-030-62056-1_38
    Trust has been established to be a key factor in fostering human-robot interactions. However, trust can change over time according to different factors, including a breach of trust due to a robot's error. In this exploratory study, we observed people's interactions with a companion robot in a real house, adapted for human-robot interaction experimentation, over three weeks. The interactions happened in six scenarios in which a robot performed different tasks under two different conditions. Each condition included fourteen tasks performed by the robot, either correctly, or with errors with severe consequences on the first or last day of interaction. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their trust in the robot. We evaluated participants' trust in the robot by observing their decision to trust the robot during the emergency scenario, and by collecting their views through questionnaires. We concluded that there is a correlation between the timing of an error with severe consequences performed by the robot and the corresponding loss of trust of the human in the robot. In particular, people's trust is strongly influenced by the initial mental model they form of the robot.

    Getting to know Pepper: Effects of people’s awareness of a robot’s capabilities on their trust in the robot

    © 2018 Association for Computing Machinery
    This work investigates how human awareness about a social robot's capabilities is related to trusting this robot to handle different tasks. We present a user study that relates knowledge on different quality levels to participants' ratings of trust. Secondary school pupils were asked to rate their trust in the robot after three types of exposure: a video demonstration, a live interaction, and a programming task. The study revealed that the pupils' trust is positively affected across different domains after each session, indicating that human users trust a robot more as their awareness about the robot increases.

    Testing the Error Recovery Capabilities of Robotic Speech

    Trust in Human-Robot Interaction is a widely studied subject, and yet few studies have examined a robot's ability to speak and how it impacts trust towards the robot. Errors can have a negative impact on the perceived trustworthiness of a robot. However, there seem to be mitigating effects, such as using a humanoid robot, which has been shown to be perceived as more trustworthy when having a high error rate than a more mechanical robot with the same error rate. We want to use a humanoid robot to test whether speech can increase anthropomorphism and mitigate the effects of errors on trust. For this purpose, we are planning an experiment where participants solve a sequence completion task, with the robot giving suggestions (either verbal or non-verbal) for the solution. In addition, we want to measure whether the degree of error (slight error vs. severe error) has an impact on the participants' behaviour and the robot's perceived trustworthiness, since making a severe error would affect trust more than a slight error. Participants will be assigned to three groups, where we will vary the degree of accuracy of the robot's answers (correct vs. almost right vs. obviously wrong). They will complete ten series of a sequence completion task and rate the trustworthiness and general perception (Godspeed Questionnaire) of the robot. We also present our thoughts on the implications of potential results.

    Robot Broken Promise? Repair strategies for mitigating loss of trust for repeated failures

    Trust repair strategies are an important part of human-robot interaction. In this study, we investigate how repeated failures impact users' trust and how this impact might be mitigated. Specifically, we look at different repair strategies in the form of apologies, augmented with additional features such as warnings and promises. Through an online study, we explore these repair strategies for repeated failures in the form of robot incongruence, where there is a mismatch between the verbal and non-verbal information given by the robot. Our results show that such incongruent robot behaviour has a significant overall negative impact on participants' trust. We found that the robot making a promise, and then breaking it, results in a significant decrease in participants' trust when compared to a general apology as a repair strategy. These findings contribute to the research on trust repair strategies and, additionally, shed light on how robot failures, in the form of incongruences, impact participants' trust.

    Exploring Human Teachers' Interpretations of Trainee Robots' Nonverbal Behaviour and Errors

    In the near future, socially intelligent robots that can learn new tasks from humans may become widely available and gain an opportunity to help people more and more. To play such a role successfully, intelligent robots should not only be able to interact effectively with humans while they are being taught, but humans should also be able to trust these robots after teaching them how to perform tasks. When human students learn, they usually provide nonverbal cues to display their understanding of and interest in the material. For example, they sometimes nod, make eye contact or show meaningful facial expressions. Likewise, a humanoid robot's nonverbal social cues may enhance the learning process, provided that the cues are legible for human teachers. To inform the design of such nonverbal interaction techniques for intelligent robots, our first work investigates humans' interpretations of nonverbal cues provided by a trainee robot. Through an online experiment (with 167 participants), we examine how different gaze patterns and arm movements with various speeds and different kinds of pauses, displayed by a student robot when practising a physical task, impact teachers' understanding of the robot's attributes. We show that a robot can appear different in terms of its confidence, proficiency, eagerness to learn, etc., by systematically adjusting those nonverbal factors. Human students sometimes make mistakes while practising a task, but teachers may be forgiving about them. Intelligent robots are machines and, therefore, may behave erroneously in certain situations. Our second study examines whether human teachers overlook a robot's small mistakes made when practising a recently taught task, in case the robot has already shown significant improvement. By means of an online rating experiment (with 173 participants), we first determine how severe a robot's errors in a household task (i.e., preparing food) are perceived to be. We then use that information to design and conduct another experiment (with 139 participants) in which participants are given the experience of teaching trainee robots. According to our results, teachers' perceptions improve as the robots get better at performing the task. We also show that while bigger errors have a greater negative impact on human teachers' trust compared with smaller ones, even a small error can significantly damage trust in a trainee robot. This effect is also correlated with the personality traits of participants. The present work contributes by extending HRI knowledge concerning human teachers' understanding of robots, in a specific teaching scenario where teachers are observing behaviours that have the primary goal of accomplishing a physical task.