
    Human-in-the-loop error detection in an object organization task with a social robot

    In human-robot collaboration, failures are bound to occur. A thorough understanding of potential errors is necessary so that robotic system designers can develop systems that remedy failure cases. In this work, we study failures that occur when participants interact with a working system, focusing especially on errors in a robotic system's knowledge base of which the system itself is not aware. A human interaction partner can become part of the error detection process if they are given insight into the robot's knowledge and decision-making process. We investigate different communication modalities and the design of shared task representations in a joint human-robot object organization task. We conducted a user study (N = 31) in which participants showed a Pepper robot how to organize objects, and the robot communicated the learned object configuration back to them by means of speech, visualization, or a combination of the two. The multimodal, combined condition was preferred by 23 participants; seven preferred the visualization alone. Based on the interviews, the errors that occurred, and the object configurations the participants generated, we conclude that participants tend to test the system's limitations by making the task more complex, which provokes errors. This trial-and-error behavior serves a productive purpose: it shows that failures arise from the combination of robot capabilities, the user's understanding and actions, and interaction with the environment, and that failure can help users establish better mental models of the technology.

    A multidimensional Bayesian architecture for real-time anomaly detection and recovery in mobile robot sensory systems

    For mobile robots to operate in an autonomous and safe manner, they must be able to adequately perceive their environment despite challenging or unpredictable conditions in their sensory apparatus. Usually, this is addressed through ad hoc, not easily generalizable Fault Detection and Diagnosis (FDD) approaches. In this work, we leverage Bayesian Networks (BNs) to propose a novel probabilistic inference architecture that provides generality, rigorous inference, and real-time performance for the detection, diagnosis, and recovery of diverse and multiple sensory failures in robotic systems. Our proposal achieves these goals by structuring a BN in a multidimensional setting that, to the best of our knowledge, is the first to deal coherently and rigorously with the following issues: modeling of complex interactions among the components of the system, including sensors, anomaly detection, and recovery; representation of sensory information and other kinds of knowledge at different levels of cognitive abstraction; and management of the temporal evolution of sensory behavior. Real-time performance is achieved by compiling these BNs into feedforward neural networks. Our proposal has been implemented and tested for mobile robot navigation in environments with human presence, a complex task that involves diverse sensor anomalies. The results obtained from both simulated and real experiments show that our architecture enhances the safety and robustness of robotic operation: among other measures, the minimum distance to pedestrians, the tracking time, and the navigation time all improve statistically in the presence of anomalies, with changes in medians ranging from ≃20% to ≃500%.
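
    As a rough illustration of the kind of reasoning such an architecture performs, the sketch below models a single sensor's health as a hidden node in a small discrete Bayesian Network and infers it from an observed measurement residual. This is not the authors' implementation: the network structure, variable names, and probabilities are invented for illustration, and the sketch assumes pgmpy's BayesianNetwork/TabularCPD/VariableElimination API.

        from pgmpy.models import BayesianNetwork
        from pgmpy.factors.discrete import TabularCPD
        from pgmpy.inference import VariableElimination

        # Hidden sensor state: 0 = healthy, 1 = faulty (hypothetical prior).
        # Residual: discretized mismatch between the sensor and a redundant
        # estimate (0 = small, 1 = large). Recovery: whether to switch to a
        # fallback sensor. All numbers below are made up for illustration.
        model = BayesianNetwork([("SensorHealth", "Residual"),
                                 ("SensorHealth", "Recovery")])

        cpd_health = TabularCPD("SensorHealth", 2, [[0.95], [0.05]])
        cpd_residual = TabularCPD(
            "Residual", 2,
            [[0.9, 0.2],   # P(small | healthy), P(small | faulty)
             [0.1, 0.8]],  # P(large | healthy), P(large | faulty)
            evidence=["SensorHealth"], evidence_card=[2])
        cpd_recovery = TabularCPD(
            "Recovery", 2,
            [[0.99, 0.1],   # P(keep sensor | healthy), P(keep | faulty)
             [0.01, 0.9]],  # P(switch | healthy), P(switch | faulty)
            evidence=["SensorHealth"], evidence_card=[2])

        model.add_cpds(cpd_health, cpd_residual, cpd_recovery)
        model.check_model()

        # Diagnosis: given a large residual, how likely is a fault, and
        # should the system trigger recovery?
        inference = VariableElimination(model)
        print(inference.query(variables=["SensorHealth", "Recovery"],
                              evidence={"Residual": 1}))

    The paper's architecture goes well beyond this toy example: it models many interacting components at several levels of abstraction, tracks the temporal evolution of sensory behavior, and compiles the networks into feedforward neural networks to meet real-time constraints, none of which this sketch attempts.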

    Exploring Human Teachers' Interpretations of Trainee Robots' Nonverbal Behaviour and Errors

    In the near future, socially intelligent robots that can learn new tasks from humans may become widely available and gain the opportunity to help people more and more. To play this role successfully, intelligent robots must not only interact effectively with humans while being taught, but humans must also be able to trust these robots after teaching them how to perform tasks. When human students learn, they usually provide nonverbal cues that display their understanding of and interest in the material. For example, they sometimes nod, make eye contact, or show meaningful facial expressions. Likewise, a humanoid robot's nonverbal social cues may enhance the learning process, provided the cues are legible to human teachers. To inform the design of such nonverbal interaction techniques for intelligent robots, our first study investigates humans' interpretations of nonverbal cues provided by a trainee robot. Through an online experiment (with 167 participants), we examine how different gaze patterns and arm movements with various speeds and different kinds of pauses, displayed by a student robot practising a physical task, impact teachers' understanding of the robot's attributes. We show that a robot can appear different in terms of its confidence, proficiency, eagerness to learn, etc., when those nonverbal factors are systematically adjusted. Human students sometimes make mistakes while practising a task, but teachers may be forgiving of them. Intelligent robots are machines and may therefore behave erroneously in certain situations. Our second study examines whether human teachers overlook a robot's small mistakes made when practising a recently taught task, provided the robot has already shown significant improvement. By means of an online rating experiment (with 173 participants), we first determine how severe a robot's errors in a household task (i.e., preparing food) are perceived to be. We then use that information to design and conduct another experiment (with 139 participants) in which participants are given the experience of teaching trainee robots. According to our results, teachers' perceptions improve as the robots get better at performing the task. We also show that while bigger errors have a greater negative impact on human teachers' trust than smaller ones, even a small error can significantly damage trust in a trainee robot. This effect is also correlated with participants' personality traits. The present work contributes by extending HRI knowledge of human teachers' understanding of robots in a specific teaching scenario in which teachers observe behaviours whose primary goal is accomplishing a physical task.