    Utilising Explanations to Mitigate Robot Conversational Failures

    This paper presents an overview of robot failure detection work from HRI and adjacent fields, using failures as an opportunity to examine robot explanation behaviours. As humanoid robots remain experimental tools in the early 2020s, interactions with robots are situated overwhelmingly in controlled environments, typically studying various interactional phenomena. Such interactions lack real-world and large-scale experimentation and tend to ignore the 'imperfectness' of the everyday user. Robot explanations can be used to approach and mitigate failures by expressing robot legibility and incapability, and by building common ground. In this paper, I discuss how failures present opportunities for explanations in interactive conversational robots and what potential lies at the intersection of HRI and explainability research.

    Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets

    Robotic interventions in hazardous scenarios need to pay special attention to safety, as in most cases it is necessary to have an expert operator in the loop. Moreover, the use of a multi-modal Human-Robot Interface allows the user to interact with the robot using manual control in critical steps, as well as semi-autonomous behaviours in more secure scenarios, by using, for example, object tracking and recognition techniques. This paper describes a novel vision system to track and estimate the depth of metallic targets for robotic interventions. The system has been designed for on-hand monocular cameras, focusing on solving poor visibility and partial occlusions. This solution has been validated during real interventions at the European Organization for Nuclear Research (CERN) accelerator facilities, achieving 95% success in autonomous mode and 100% in supervised mode. The system increases the safety and efficiency of the robotic operations, reducing the cognitive fatigue of the operator during non-critical mission phases. The integration of such an assistance system is especially important when facing complex (or repetitive) tasks, in order to reduce the workload and accumulated stress of the operator, enhancing the performance and safety of the mission.
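
    The abstract does not detail the estimation method, but the basic geometric idea behind monocular depth for a target of known physical size can be illustrated with the pinhole camera model. The Python sketch below is only that illustration; the focal length and target width values are assumptions, and the paper's tracking and occlusion handling are not reproduced here.

```python
# Pinhole-model depth from a target of known physical size: Z = f * W / w.
# Illustrative sketch only; the values below are assumptions, not from the paper.

def estimate_depth(focal_px: float, target_width_m: float,
                   bbox_width_px: float) -> float:
    """Depth in metres from similar triangles: Z = f * W / w."""
    if bbox_width_px <= 0:
        raise ValueError("bounding box width must be positive")
    return focal_px * target_width_m / bbox_width_px

# A 0.10 m wide metallic target imaged at 64 px by a camera with a
# 900 px focal length lies roughly 1.4 m away.
print(estimate_depth(focal_px=900.0, target_width_m=0.10, bbox_width_px=64.0))
```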

    Using a Deep Learning Model on Images to Obtain a 2D Laser People Detector for a Mobile Robot

    Recent improvements in deep learning techniques applied to images allow the detection of people with a high success rate. However, other types of sensors, such as laser rangefinders, remain useful due to their wide field of view and their ability to operate in different environments and lighting conditions. In this work, we use a deep learning model to detect people in images taken by a mobile robot. The masks of the people in the images are used to automatically label a set of samples formed by 2D laser range data, which allows us to detect the legs of people present in the scene. The samples are geometric characteristics of the clusters built from the laser data, and machine learning algorithms are used to learn a classifier capable of detecting people from 2D laser range data alone. Our people detector is compared to a state-of-the-art classifier and achieves a higher F1 score on the test set of an unbalanced dataset. To improve accuracy, the final classifier has been trained on a balanced training set; evaluated on a test set, it obtains very high accuracy in each class. The contribution of this work is twofold. On the one hand, our proposal performs automatic labeling of the samples, so the dataset can be collected under real operating conditions. On the other hand, the robot can detect people in a wider field of view than with a camera alone, which helps build more robust behaviors. This work has been supported by the Spanish Government under grant TIN2016-76515-R, with FEDER funds.
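
    As a rough sketch of the pipeline this abstract describes (camera masks auto-label laser clusters, and geometric cluster features train a leg classifier), the Python below uses an assumed feature set and a RandomForest learner; the paper's exact features and model may differ.

```python
# Hedged sketch: geometric features of 2D laser clusters feed a classifier
# whose labels come from overlap with camera-based person masks.
# The feature choices and the RandomForest learner are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cluster_features(points: np.ndarray) -> np.ndarray:
    """Simple geometric descriptors of one laser cluster (N x 2 points)."""
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    chord = np.linalg.norm(points[0] - points[-1])   # endpoint-to-endpoint width
    return np.array([len(points), chord, dists.mean(), dists.std()])

def train_leg_classifier(clusters, labels):
    """clusters: list of (N_i x 2) point arrays; labels: 1 if the cluster
    falls inside a camera person mask, else 0 (the automatic labelling)."""
    X = np.stack([cluster_features(c) for c in clusters])
    clf = RandomForestClassifier(n_estimators=100, class_weight="balanced")
    clf.fit(X, labels)
    return clf
```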

    Robotic ubiquitous cognitive ecology for smart homes

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real-world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON on each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and proactively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.

    Semiotics and Human-Robot Interaction

    Keywords: semi-autonomous robot, human-robot interaction, semiotics.
    This paper describes a robot control architecture supported by a human-robot interaction model obtained directly from semiotics concepts. The architecture is composed of a set of objects defined after a semiotic sign model. Simulation experiments using unicycle robots are presented that illustrate the interactions within a team of robots equipped with skills similar to those used in human-robot interactions.
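
    The abstract does not specify the object design, but a Peircean sign (sign-vehicle, object, interpretant) is one plausible reading of "a semiotic sign model"; the Python below is a hypothetical sketch of such an object, not the paper's actual architecture.

```python
# Hypothetical sketch of an object patterned after a Peircean semiotic sign
# (vehicle / referent / interpretant); the paper's concrete class design is
# not given, so this structure is an assumption.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sign:
    vehicle: str                         # the perceivable token, e.g. a gesture
    referent: str                        # the object or state it stands for
    interpretant: Callable[[str], str]   # how the robot resolves it to action

def interpret(sign: Sign) -> str:
    """Map a received sign to a robot action via its interpretant."""
    return sign.interpretant(sign.referent)

# Example: an operator's raised hand interpreted as a halt request.
stop = Sign("raised_hand", "halt_request", lambda ref: f"execute:{ref}")
print(interpret(stop))  # execute:halt_request
```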

    Detecting Biological Motion for Human-Robot Interaction: A Link between Perception and Action

    One of the fundamental skills supporting safe and comfortable interaction between humans is their capability to intuitively understand each other's actions and intentions. At the basis of this ability is a special-purpose visual processing that the human brain has developed to comprehend human motion. Among the first "building blocks" enabling the bootstrapping of such visual processing is the ability to detect movements performed by biological agents in the scene, a skill mastered by human babies in the first days of their life. In this paper, we present a computational model based on the assumption that this visual ability must rely on local, low-level visual motion features that are independent of shape cues such as body configuration and perspective. Moreover, we implement it on the humanoid robot iCub, embedding it into a software architecture that leverages the regularities of biological motion also to control the robot's attention and oculomotor behaviors. In essence, we put forth a model in which the regularities of biological motion link perception and action, enabling a robotic agent to follow a human-inspired sensory-motor behavior. We posit that this choice facilitates mutual understanding and goal prediction during collaboration, increasing the pleasantness and safety of the interaction.
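
    One well-known shape-independent regularity of biological motion is the two-thirds power law relating tangential speed to curvature; whether this is the exact feature used in the paper is not stated in the abstract, so the Python sketch below is an assumption-laden illustration of such a low-level cue.

```python
# Hedged sketch of a shape-independent biological-motion cue: for human
# movement, tangential speed v tends to scale with curvature radius r as
# v ~ r**(1/3) (the "two-thirds power law"). Fitting this exponent on a
# tracked trajectory and checking it is near 1/3 is one plausible test;
# the paper's exact features and thresholds are assumptions here.
import numpy as np

def power_law_exponent(xy: np.ndarray, dt: float) -> float:
    """xy: (T x 2) trajectory. Returns beta fitted in v = k * r**beta
    (beta close to 1/3 suggests biological motion)."""
    v = np.gradient(xy, dt, axis=0)                  # velocity
    a = np.gradient(v, dt, axis=0)                   # acceleration
    speed = np.linalg.norm(v, axis=1)
    cross = np.abs(v[:, 0] * a[:, 1] - v[:, 1] * a[:, 0])
    valid = cross > 1e-9                             # skip straight segments
    radius = speed[valid] ** 3 / cross[valid]        # radius of curvature
    beta, _ = np.polyfit(np.log(radius), np.log(speed[valid]), 1)
    return beta
```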

    Reinforcement Learning Approaches in Social Robotics

    This article surveys reinforcement learning approaches in social robotics. Reinforcement learning is a framework for decision-making problems in which an agent interacts through trial and error with its environment to discover an optimal behavior. Since interaction is a key component of both reinforcement learning and social robotics, it can be a well-suited approach for real-world interactions with physically embodied social robots. The scope of the paper is focused particularly on studies that include physical social robots and real-world human-robot interactions with users. We present a thorough analysis of reinforcement learning approaches in social robotics. In addition to a survey, we categorize existing reinforcement learning approaches based on the method used and the design of the reward mechanisms. Moreover, since communication capability is a prominent feature of social robots, we discuss and group the papers based on the communication medium used for reward formulation. Considering the importance of designing the reward function, we also provide a categorization of the papers based on the nature of the reward. This categorization includes three major themes: interactive reinforcement learning, intrinsically motivated methods, and task performance-driven methods. The paper also covers the benefits and challenges of reinforcement learning in social robotics, the evaluation methods of the surveyed papers (whether they use subjective or algorithmic measures), a discussion of real-world reinforcement learning challenges and proposed solutions, and the points that remain to be explored, including approaches that have thus far received less attention. Thus, this paper aims to be a starting point for researchers interested in using and applying reinforcement learning methods in this particular research field.
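
    To make the interactive reinforcement learning theme concrete, here is a minimal tabular Q-learning sketch in Python in which the reward is a human feedback signal; the states, actions, and hyperparameters are illustrative assumptions, not drawn from any surveyed system.

```python
# Minimal sketch of interactive RL: tabular Q-learning whose reward is a
# human feedback signal (e.g., like/dislike during an interaction).
# States, actions, and hyperparameters are illustrative assumptions.
import random
from collections import defaultdict

Q = defaultdict(float)                   # Q[(state, action)] -> value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2    # learning rate, discount, exploration
ACTIONS = ["greet", "joke", "ask_question", "stay_quiet"]

def choose_action(state: str) -> str:
    if random.random() < EPSILON:                       # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])    # exploit

def update(state: str, action: str, human_reward: float, next_state: str) -> None:
    """Standard Q-learning update; human_reward is the interactive signal."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (human_reward + GAMMA * best_next
                                   - Q[(state, action)])
```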