
    Survivor Buddy and SciGirls: Affect, Outreach, and Questions

    This paper describes the Survivor Buddy human-robot interaction project and how it was used by four middle-school girls to illustrate the scientific process for an episode of “SciGirls”, a Public Broadcasting Service (PBS) science reality show. Survivor Buddy is a four-degree-of-freedom robot head, with the face being a MIMO 740 multi-media touch-screen monitor. It is being used to explore consistency and trust in the use of robots as social mediums, where robots serve as intermediaries between dependents (e.g., trapped survivors) and the outside world (doctors, rescuers, family members). While the SciGirls experimentation was neither statistically significant nor rigorously controlled, the experience makes three contributions: it introduces the Survivor Buddy project and the social medium role, it illustrates that human-robot interaction is an appealing way to make robotics more accessible to the general public, and it raises interesting questions about the existence of a minimum set of degrees of freedom for sufficient expressiveness, the relative importance of voice versus non-verbal affect, and the range and intensity of robot motions.

    Integrating Affective Expressions into Robot-Assisted Search and Rescue to Improve Human-Robot Communication

    Unexplained or ambiguous behaviours of rescue robots can lead to inefficient collaborations between humans and robots in robot-assisted SAR teams. To date, rescue robots do not have the ability to interact with humans on a social level, which is believed to be an essential ability that can improve the quality of interactions. This thesis research proposes to bring affective robot expressions into the SAR context to give rescue robots social capabilities. The first experiment, presented in Chapter 3, investigates whether there is consensus in mapping emotions to messages/situations in Urban Search and Rescue (USAR) scenarios, where efficiency and effectiveness of interactions are crucial to success. We studied mappings between 10 specific messages, presented in two different communication styles, reflecting common situations that might happen during search and rescue missions and the emotions exhibited by robots in those situations. The data was obtained through a Mechanical Turk study with 78 participants. The findings support the feasibility of using emotions as an additional communication channel to improve multi-modal human-robot interaction for urban search and rescue robots, and suggest that these mappings are robust, i.e., not affected by the robot’s communication style. The second experiment, also conducted on Amazon Mechanical Turk, involved 223 participants. We used Affect Control Theory (ACT) as a method for deriving the mappings between situations and emotions (similar to the ones in the first experiment) and as an alternative method for obtaining mappings that can be adjusted for different emotion sets (Chapter 4). The results suggested that the choice of emotions for a robot to show in different situations was consistent between the two methods used in the first and second experiments, indicating the feasibility of using emotions as an additional modality in SAR robots.
After validating the feasibility of bringing emotions to the SAR context based on the findings from the first two experiments, we created affective expressions based on the Evaluation, Potency and Activity (EPA) dimensions of ACT with the help of LED lights on a rescue robot called Husky. We evaluated the effect of emotions on rescue workers’ situational awareness through an online Amazon Mechanical Turk study with 151 participants (Chapter 5). Findings indicated that participants who saw Husky with affective expressions (conveyed through lights) had better perception accuracy of the situation happening in the disaster scene than participants who saw videos of the Husky robot without any affective lights. In other words, Husky with affective lights improved participants’ situational awareness.
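The ACT-style pipeline described above can be sketched in a few lines. This is a hypothetical illustration only: the EPA coordinates, emotion labels, and LED patterns below are invented placeholders, not the ratings or expressions actually used in the thesis. The idea shown is simply that a situation's EPA profile can be matched to the nearest labelled emotion, which in turn selects a light display.

```python
# Hypothetical sketch: map a situation's EPA (Evaluation, Potency, Activity)
# profile to the nearest labelled emotion, then to an LED pattern.
# All EPA values and patterns are illustrative placeholders.
import math

EMOTION_EPA = {
    "calm":      ( 1.5,  0.5, -1.0),
    "alarmed":   (-1.0,  0.5,  2.0),
    "confident": ( 2.0,  2.0,  0.5),
    "helpless":  (-1.5, -2.0, -0.5),
}

LED_PATTERN = {  # illustrative colour/blink choices only
    "calm":      ("green",  "steady"),
    "alarmed":   ("red",    "fast_blink"),
    "confident": ("blue",   "steady"),
    "helpless":  ("yellow", "slow_blink"),
}

def nearest_emotion(epa):
    """Pick the labelled emotion closest (Euclidean distance) to the EPA profile."""
    return min(EMOTION_EPA, key=lambda e: math.dist(epa, EMOTION_EPA[e]))

def light_for_situation(epa):
    """Select the LED display for a situation via its nearest emotion."""
    return LED_PATTERN[nearest_emotion(epa)]

print(light_for_situation((-0.8, 0.6, 1.8)))  # closest to "alarmed"
```

In a real ACT-based system the EPA coordinates would come from published affective dictionaries rather than hand-picked values, but the nearest-neighbour lookup is the same in spirit.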

    High Social Acceptance of Head Gaze Loosely Synchronized with Speech for Social Robots

    This research demonstrates that robots can achieve socially acceptable interactions, using loosely synchronized head gaze-speech, without understanding the semantics of the dialog. Prior approaches used tightly synchronized head gaze-speech, which requires significant human effort and time to manually annotate synchronization events in advance, restricting interactive dialog and requiring the operator to act as a puppeteer. This approach has two novel aspects. First, it uses affordances in the sentence structure, time delays, and typing to achieve autonomous synchronization of head gaze-speech. Second, it is implemented within a behavioral robotics framework derived from 32 previous implementations. The efficacy of the loosely synchronized approach was validated through a 93-participant 1 × 3 (loosely synchronized head gaze-speech, tightly synchronized head gaze-speech, no head gaze-speech) between-subjects experiment using the “Survivor Buddy” rescue robot in a victim management scenario. The results indicated that the social acceptance of loosely synchronized head gaze-speech is similar to tightly synchronized head gaze-speech (manual annotation), and preferred to the no head gaze-speech case. These findings contribute to the study of social robotics in three ways. First, the research overall contributes to a fundamental understanding of the role of social head gaze in social acceptance, and the production of social head gaze. Second, it shows that autonomously generated head gaze-speech coordination is both possible and acceptable. Third, the behavioral robotics framework simplifies creation, analysis, and comparison of implementations.
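The "affordances in the sentence structure" idea can be illustrated with a small sketch. This is not the thesis's implementation: the gaze events, the speaking-rate constant, and the punctuation rule below are all assumptions chosen for illustration. The point is that clause boundaries already present in an utterance can schedule gaze shifts without manual annotation.

```python
# Hypothetical sketch of loose gaze-speech synchronization: gaze shifts are
# scheduled from punctuation boundaries in the utterance plus a simple
# speaking-rate estimate, instead of hand-annotated events.
# Timings, rate, and gaze-event names are illustrative assumptions.
import re

WORDS_PER_SECOND = 2.5  # assumed average speaking rate

def gaze_schedule(utterance):
    """Return (time_offset_seconds, gaze_event) pairs derived from punctuation."""
    schedule = [(0.0, "look_at_listener")]  # engage at utterance start
    elapsed_words = 0
    # split after commas, periods, semicolons, question/exclamation marks
    for clause in re.split(r"(?<=[,.;?!])\s+", utterance):
        elapsed_words += len(clause.split())
        t = elapsed_words / WORDS_PER_SECOND
        schedule.append((round(t, 2), "gaze_aversion"))  # brief aversion at boundary
    schedule.append((schedule[-1][0], "look_at_listener"))  # re-engage at end
    return schedule

sched = gaze_schedule("Stay calm, help is on the way. Can you move your legs?")
```

A controller consuming this schedule would fire each gaze event at its offset while the text-to-speech plays, which is "loose" in exactly the sense above: synchronization comes from surface structure, not from semantic understanding or an annotator.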

    Human factors of semi-autonomous robots for urban search and rescue

    During major disasters or other emergencies, Urban Search and Rescue (USAR) teams are responsible for extricating casualties safely from collapsed urban structures. The rescue work is dangerous due to possible further collapse, fire, dust or electricity hazards. Sometimes the necessary precautions and checks can last several hours before rescuers are safe to start the search for survivors. Remote-controlled rescue robots provide the opportunity to support human rescuers by searching the site for trapped casualties while the rescuers remain in a safe place. The research reported in this thesis aimed to understand how robot behaviour and interface design can be applied to utilise the benefits of robot autonomy and how to inform future human-robot collaborative systems. The data was analysed in the context of USAR missions using semi-autonomous remote-controlled robot systems. The research focussed on the influence of robot feedback, robot reliability, task complexity, and transparency, and examined the influence of these factors on trust, workload, and performance. The overall goal of the research was to make the life of rescuers safer and enhance their performance to help others in distress. Data obtained from the studies conducted for this thesis showed that semi-autonomous robot reliability is still the most dominant factor influencing trust, workload, and team performance. A robot with explanatory feedback was perceived as more competent, more efficient and less malfunctioning. The explanatory feedback was perceived as a clearer type of communication compared to concise robot feedback. Higher levels of robot transparency were perceived as more trustworthy; however, this effect was limited to single items on the trust questionnaire, and further investigation is necessary. Moreover, neither explanatory feedback from the robot nor robot transparency increased team performance or mediated workload levels.
Task complexity mainly influenced human-robot team performance and the participants’ control allocation strategy. Participants allowed the robot to find more targets and missed more robot errors in the high-complexity conditions compared to the low-complexity conditions. Participants found more targets manually in the low-complexity tasks. In addition, the research showed that recording the observed robot performance (the performance of the robot that was witnessed by the participant) can help to identify the cause of contradictory results: participants might not have noticed some of the robot’s mistakes and were therefore unable to distinguish between the robot reliability levels. Furthermore, the research provided a foundation of knowledge regarding the real-world application of USAR in the United Kingdom. This included collecting knowledge via an autoethnographic approach about working processes, command structures, currently used technical equipment, and attitudes of rescuers towards robots. Recommendations about robot behaviour and interface design were also collected throughout the research; these include consideration of the overall outcome (mission performance) and the perceived usefulness of the system in order to support the uptake of the technology in real-world applications. In addition, autonomous features might not be appropriate in all USAR applications. When semi-autonomous robot trials were compared to entirely manual operation, only the robot with an average of 97% reliability significantly increased team performance and reduced the time needed to complete the USAR scenario compared to the manually operated robot. Unfortunately, such high robot success levels do not exist to date. This research has contributed to our understanding of the factors influencing human-robot collaboration in USAR operations, and provided guidance for the next generation of autonomous robots.
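The distinction between nominal and observed robot performance can be made concrete with a small sketch. This is a hypothetical illustration, not the thesis's analysis: the function and all numbers below are invented. It shows why unnoticed errors blur programmed reliability levels: every error the participant misses is experienced as a success.

```python
# Hypothetical sketch of "observed robot performance": reliability as
# experienced by a participant who notices only some of the robot's errors.
# All trial counts and error counts are illustrative.
def observed_reliability(trials, errors, errors_noticed):
    """Reliability as perceived by the participant (unnoticed errors count as successes)."""
    perceived_errors = min(errors_noticed, errors)
    return (trials - perceived_errors) / trials

nominal = (40 - 8) / 40                       # robot's true success rate: 0.80
experienced = observed_reliability(40, 8, 3)  # participant noticed only 3 of 8 errors: 0.925
```

Under these made-up numbers a nominally 80%-reliable robot is experienced as 92.5% reliable, so two conditions with different programmed reliability levels can feel indistinguishable, which is the confound the recorded observed-performance measure is meant to expose.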
