
    The impact of peoples' personal dispositions and personalities on their trust of robots in an emergency scenario

    Humans should be able to trust that they can safely interact with their home companion robot. However, robots can exhibit occasional mechanical, programming or functional errors. We hypothesise that the severity of the consequences and the timing of a robot's different types of erroneous behaviour during an interaction may have different impacts on users' attitudes towards a domestic robot. First, we investigated human users' perceptions of the severity of various categories of potential errors that a domestic robot is likely to exhibit. Second, we used an interactive storyboard to evaluate participants' degree of trust in the robot after it performed tasks either correctly, or with 'small' or 'big' errors. Finally, we analysed the correlations between participants' personalities, their predisposition to trust other humans, their perceptions of robots, and their interaction with the robot. We conclude that there is a correlation between the magnitude of an error performed by a robot and the corresponding loss of the human's trust in the robot. Moreover, we observed that certain personality traits (conscientiousness and agreeableness) and one facet of participants' disposition to trust other humans (benevolence) significantly increased their tendency to trust a robot during an emergency scenario.

    Investigating Human Perceptions of Trust and Social Cues in Robots for Safe Human-Robot Interaction in Human-oriented Environments

    As robots increasingly take part in daily living activities, humans will have to interact with them in domestic and other human-oriented environments. This thesis envisages a future where autonomous robots could be used as home companions to assist and collaborate with their human partners in unstructured environments without the support of any roboticist or expert. To realise such a vision, it is important to identify which factors (e.g. trust, participants' personalities and backgrounds) influence people to accept robots as companions and to trust the robots to look after their well-being. I am particularly interested in the possibility of robots using social behaviours and natural communication as a repair mechanism to positively influence humans' sense of trust in and companionship with robots, since trust can change over time due to different factors (e.g. perceived erroneous robot behaviours). In this thesis, I provide guidelines for a robot to regain human trust by adopting certain human-like behaviours. We can expect that domestic robots will exhibit occasional mechanical, programming or functional errors, as occurs with any other electrical consumer device. For example, these might include software errors, dropping objects due to gripper malfunctions, picking up the wrong object, or showing faulty navigational skills due to unclear camera images or noisy laser scanner data. It is therefore important for a domestic robot to behave acceptably when exhibiting and recovering from an error situation. In this context, several open questions need to be addressed regarding both individuals' perceptions of the errors and the robots, and the effects of these on people's trust in robots.
As a first step, I investigated how the severity of the consequences and the timing of a robot's different types of erroneous behaviour during an interaction may have different impacts on users' attitudes towards a domestic robot. I concluded that there is a correlation between the magnitude of an error performed by the robot and the corresponding loss of the human's trust in the robot. In particular, people's trust was strongly affected by robot errors that had severe consequences. This led us to investigate whether people's awareness of robots' functionalities may affect their trust in a robot. I found that people's acceptance of and trust in the robot may be affected by their knowledge of the robot's capabilities and limitations, differently according to the participants' age and the robot's embodiment. In order to deploy robots in the wild, strategies for mitigating and regaining people's trust in robots in case of errors need to be implemented. In the following three studies, I assessed whether a robot with awareness of human social conventions would increase people's trust in it. My findings showed that people almost blindly trusted both a social and a non-social robot in scenarios with non-severe error consequences. In contrast, people who interacted with a social robot did not trust its suggestions in a scenario with a higher-risk outcome. Finally, I investigated the effects of robots' errors on people's trust in a robot over time. The findings showed that participants' judgement of a robot is formed during the first stage of their interaction. Therefore, people are more inclined to lose trust in a robot if it makes big errors at the beginning of the interaction. The findings from the Human-Robot Interaction experiments presented in this thesis will contribute to an advanced understanding of the trust dynamics between humans and robots for a long-lasting and successful collaboration.

    A matter of consequences: Understanding the effects of robot errors on people's trust in HRI

    On reviewing the literature regarding acceptance and trust in human-robot interaction (HRI), there are a number of open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the successful deployment of robots in the wild. These areas are focused on: 1) the robot's abilities and limitations, in particular when it makes errors with different severities of consequences, 2) individual differences, 3) the dynamics of human-robot trust, and 4) the interaction between humans and robots over time. In this paper, we present two very similar studies, one with a virtual robot with human-like abilities, and one with a Care-O-bot 4 robot. In the first study, we created an immersive narrative using an interactive storyboard to collect responses from 154 participants. In the second study, 6 participants had repeated interactions with a physical robot over three weeks. We summarise and discuss the findings of our investigations of the effects of robots' errors on people's trust in robots, with a view to designing mechanisms that allow robots to recover from a breach of trust. In particular, we observed that robots' errors had a greater impact on people's trust in the robot when the errors were made at the beginning of the interaction and had severe consequences. Our results also provide insights into how these effects vary according to individuals' personalities, expectations and previous experiences.

    Human-robot interaction: How do personality traits affect attitudes towards robots?

    Robot technology has shown great progress in recent years and is becoming an important part of daily life, with robots now used in many areas. We therefore need to consider how robots will affect human life and how humans will react to robots. This study focused on humans' attitudes towards robots. Its first purpose was to determine participants' attitudes towards robots, and its second was to investigate how personality traits predict those attitudes. Participants were 219 university students (142 female and 77 male) aged 18-26 years (mean age = 20.54, SD = 1.22). The Negative Attitude towards Robots Scale and the Quick Big Five Personality Test were used to collect data. Results indicated that gender, extraversion and openness to experience are important factors in participants' attitudes towards robots. Considering the speed of technological development, more research is needed to evaluate human-robot interaction correctly.

    Exploring Human Teachers' Interpretations of Trainee Robots' Nonverbal Behaviour and Errors

    In the near future, socially intelligent robots that can learn new tasks from humans may become widely available and increasingly able to help people. For robots to play this role successfully, they must not only interact effectively with humans while being taught, but humans must also be able to trust these robots after teaching them how to perform tasks. When human students learn, they usually provide nonverbal cues that display their understanding of and interest in the material: for example, they sometimes nod, make eye contact or show meaningful facial expressions. Likewise, a humanoid robot's nonverbal social cues may enhance the learning process, provided the cues are legible to human teachers. To inform the design of such nonverbal interaction techniques for intelligent robots, our first study investigates humans' interpretations of nonverbal cues provided by a trainee robot. Through an online experiment (with 167 participants), we examine how different gaze patterns and arm movements, with various speeds and different kinds of pauses, displayed by a student robot practising a physical task, affect teachers' perceptions of the robot's attributes. We show that a robot can appear different in terms of its confidence, proficiency, eagerness to learn, etc., when those nonverbal factors are systematically adjusted. Human students sometimes make mistakes while practising a task, but teachers may be forgiving of them. Intelligent robots are machines, and therefore they may behave erroneously in certain situations. Our second study examines whether human teachers overlook a robot's small mistakes made when practising a recently taught task, provided the robot has already shown significant improvement. By means of an online rating experiment (with 173 participants), we first determine how severe a robot's errors in a household task (i.e., preparing food) are perceived to be.
We then use that information to design and conduct another experiment (with 139 participants) in which participants are given the experience of teaching trainee robots. According to our results, teachers' perceptions improve as the robots get better at performing the task. We also show that while bigger errors have a greater negative impact on human teachers' trust than smaller ones, even a small error can significantly damage trust in a trainee robot. This effect also correlates with participants' personality traits. The present work contributes by extending HRI knowledge of human teachers' understanding of robots in a specific teaching scenario, where teachers observe behaviours whose primary goal is to accomplish a physical task.

    Using the Aesthetic Stance to Achieve Historical Thinking

    This research study focuses on how an aesthetic reading stance with dystopian literature can aid teens in the development of historical thinking skills. My research is based on ideas from Louise Rosenblatt's transactional theory and Sam Wineburg's concept and definition of historical thinking, along with the UCLA Standards for Historical Thinking. Historical thinking requires students to gain not only factual information but also experiences. As a social studies teacher, I used this practitioner inquiry study to explore how I might position students in the intellectual mindsets of historical thinking through fictional reading in the aesthetic stance. This study provided students with the opportunity to read dystopian literature in a government class. The goal was for students to experience other peoples and societies and explore what it might mean to be a citizen in any society. The written student responses demonstrated that students made connections to course content, personal experiences, and the larger social and political world, and that the fictional readings in dystopian literature became a part of their personal experiences. By creating opportunities for reading in the aesthetic stance, my students experienced the lives of citizens in different societies. This curriculum case study was my experiment with aesthetic reading experiences and whether they guided students to reach the goals of historical thinking and comparative government through lived-through experiences in dystopian societies. I conclude this study by drawing connections to the teaching of empathy and independent reading.