
    Cooperative robots in people guidance mission: DTM model validation and local optimization motion

    This work presents a novel approach for locally optimizing the work of cooperative robots and minimizing the displacement of humans in a people-guidance mission. The problem is addressed by introducing a “Discrete Time Motion” (DTM) model and a new cost function that minimizes the work required by the robots for leading and regrouping people. Furthermore, an analysis of the forces acting among robots and humans is presented through simulations of different robot and human configurations and behaviors. Finally, we describe the modeling and simulation-based validation process that has been used to explore the new possibilities of interaction when humans are guided by teams of robots working cooperatively in urban areas.
    Peer reviewed. Postprint (published version).
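The paper's cost function is not reproduced in the abstract, but the kind of objective described, penalizing both the robots' work and the humans' displacement over a discrete-time horizon, can be sketched as follows. The function name and weights are illustrative assumptions, not the DTM paper's actual formulation.

```python
import numpy as np

def guidance_cost(robot_steps, human_steps, w_robot=1.0, w_human=1.0):
    """Toy cost over a discrete-time horizon: total robot displacement
    (a proxy for robot work) plus total human displacement.
    Illustrative only; not the paper's actual DTM cost function."""
    robot_work = sum(np.linalg.norm(d) for d in robot_steps)
    human_work = sum(np.linalg.norm(d) for d in human_steps)
    return w_robot * robot_work + w_human * human_work
```

A local optimizer would evaluate such a cost over candidate robot motions at each time step and pick the minimizer.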

    On-line adaptive side-by-side human robot companion to approach a moving person to interact

    The final publication is available at link.springer.com. In this paper, we present an on-line adaptive side-by-side human-robot companion for approaching a moving person to interact with. Our framework makes the robot-human pair capable of jointly overcoming the dynamic and static obstacles of the environment while reaching a moving goal: the person who wants to interact with the pair. We define a new moving final goal that depends on the environment, the movement of the group, and the movement of the interacting person. Moreover, we modify the Extended Social Force Model to include this new moving goal. The method has been validated in several simulated situations. This work is an extension of “On-line adaptive side-by-side human robot companion in dynamic urban environments”, IROS 2017.
    Peer reviewed. Postprint (author's final draft).
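A social-force attractive term toward a moving goal, of the kind such an extended model adds, can be sketched roughly as follows. Parameter names and values are illustrative assumptions, not the paper's fitted ones.

```python
import numpy as np

def moving_goal_force(robot_pos, robot_vel, goal_pos, goal_vel,
                      desired_speed=1.0, relax_time=0.5):
    """Attractive force steering the robot toward a MOVING goal:
    the desired velocity points at the goal and adds the goal's own
    velocity, then the force relaxes the robot toward it.
    Sketch only; not the paper's exact Extended SFM term."""
    direction = goal_pos - robot_pos
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        desired_vel = goal_vel          # already at the goal: just match it
    else:
        desired_vel = desired_speed * direction / dist + goal_vel
    return (desired_vel - robot_vel) / relax_time
```

Adding `goal_vel` to the desired velocity is what distinguishes a moving goal from the classic static-goal attraction.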

    Social-aware robot navigation in urban environments

    In this paper we present a novel robot navigation approach based on the so-called Social Force Model (SFM). First, we construct a graph map with a set of destinations that completely describes the navigation environment. Second, we propose a robot navigation algorithm, called social-aware navigation, which is mainly driven by the social forces centered at the robot. Third, we use an MCMC Metropolis-Hastings algorithm to learn the parameter values of the method. Finally, the model is validated through an extensive set of simulations and real-life experiments.
    Peer reviewed. Postprint (author's final draft).
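The parameter-learning step can be illustrated with a generic random-walk Metropolis-Hastings sampler. This is a textbook sketch under an assumed `log_post` interface, not the paper's implementation.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=5000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings over a parameter vector.
    `log_post` is the (unnormalized) log-posterior of the parameters;
    here it would score SFM parameters against observed trajectories."""
    rng = np.random.default_rng(0) if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)
```

Posterior means or modes of the returned chain would then serve as the learned parameter values.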

    Efficient hand gesture recognition for human-robot interaction

    In this paper, we present an efficient and reliable deep-learning approach that allows users to communicate with robots via hand gesture recognition. Contrary to other works, which use external devices such as gloves [1] or joysticks [2] to tele-operate robots, the proposed approach uses only visual information to recognize the user's instructions, which are encoded in a set of pre-defined hand gestures. In particular, the method consists of two modules that work sequentially: one extracts the 2D landmarks of the hands (i.e., joint positions), and the other predicts the hand gesture from a temporal representation of them. The approach has been validated on a recent state-of-the-art dataset, where it outperformed other methods that rely on multiple pre-processing steps such as optical flow and semantic segmentation. Our method achieves an accuracy of 87.5% and runs at 10 frames per second. Finally, we conducted real-life experiments with our IVO robot to validate the framework during the interaction process.
    Peer reviewed. Postprint (published version).
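The temporal representation built from per-frame 2D hand landmarks can be sketched as a sliding-window buffer. The window size, the 21-landmark layout, and the wrist-centered normalization are assumptions for illustration, not the paper's stated choices.

```python
import numpy as np
from collections import deque

class GestureBuffer:
    """Keeps a sliding window of per-frame hand landmarks and stacks
    them into the temporal tensor a gesture classifier would consume."""
    def __init__(self, window=16, n_landmarks=21):
        self.window = window
        self.n_landmarks = n_landmarks
        self.frames = deque(maxlen=window)

    def push(self, landmarks):
        pts = np.asarray(landmarks, dtype=float).reshape(self.n_landmarks, 2)
        pts = pts - pts[0]                 # translate: wrist at the origin
        scale = np.abs(pts).max() or 1.0   # avoid division by zero
        self.frames.append(pts / scale)    # scale-normalize the hand

    def feature(self):
        if len(self.frames) < self.window:
            return None                    # not enough temporal context yet
        return np.stack(self.frames)       # shape: (window, n_landmarks, 2)
```

Feeding such a stacked window to the classifier is what gives the prediction its temporal context.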

    Robot companion: a social-force based approach with human awareness-navigation in crowded environments

    Accompanying humans is one of the core capacities every service robot deployed in urban settings should have. We present a novel robot companion approach based on the so-called Social Force Model (SFM). A new model of robot-person interaction is obtained using the SFM, suited for our robots Tibi and Dabo. Additionally, we propose an interactive scheme for the robot's human-aware navigation using the SFM and prediction information. Moreover, we present a new metric to evaluate robot companion performance based on vital spaces and comfort criteria. Also, multimodal human feedback is proposed to enhance the behavior of the system. The model is validated through an extensive set of simulations and real-life experiments.
    Peer reviewed. Postprint (author's final draft).
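Person-robot interaction in the SFM is typically modeled with an exponential repulsive force of the Helbing form. A minimal sketch, with illustrative rather than fitted parameters, looks like:

```python
import numpy as np

def social_repulsion(p_robot, p_person, A=2.0, B=0.8, radius_sum=0.6):
    """Exponential repulsive force pushing the robot away from a person.
    A: interaction strength, B: falloff range, radius_sum: sum of body
    radii. Values are illustrative, not the papers' learned parameters."""
    diff = p_robot - p_person
    d = np.linalg.norm(diff)
    n = diff / d  # unit vector pointing from the person toward the robot
    return A * np.exp((radius_sum - d) / B) * n
```

At contact distance (`d == radius_sum`) the magnitude equals `A`, and it decays exponentially with range `B` beyond that; keeping this force small while staying beside the person is the essence of comfortable accompaniment.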

    Social robot navigation tasks: combining machine learning techniques and social force model

    © 2021 by the authors. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Social robot navigation in public spaces, buildings or private houses is a difficult problem that is not well solved, due to environmental constraints (buildings, static objects, etc.), pedestrians and other mobile vehicles. Moreover, robots have to move in a human-aware manner, that is, navigate in such a way that people feel safe and comfortable. In this work, we present two navigation tasks, social robot navigation and robot accompaniment, which combine machine learning techniques with the Social Force Model (SFM), allowing human-aware social navigation. The robots in both approaches use data from different sensors to capture knowledge of the environment as well as information about pedestrian motion. Both navigation tasks make use of the SFM, a general framework in which human motion behaviors can be expressed through a set of functions depending on the pedestrians' relative and absolute positions and velocities. Additionally, in both social navigation tasks, the robot's motion behavior is learned using machine learning techniques: in the first case using supervised deep learning and, in the second case, using Reinforcement Learning (RL). The machine learning techniques are combined with the SFM to create navigation models that behave in a social manner when the robot is navigating in an environment with pedestrians or accompanying a person. The systems were validated with a large set of simulations and real-life experiments with a new humanoid robot named IVO and with an aerial robot.
    The experiments show that the combination of the SFM and machine learning can solve human-aware robot navigation in complex dynamic environments.
    This research was supported by the grant MDM-2016-0656 funded by MCIN/AEI/10.13039/501100011033, the grant ROCOTRANSP PID2019-106702RB-C21 funded by MCIN/AEI/10.13039/501100011033 and the grant CANOPIES H2020-ICT-2020-2-101016906 funded by the European Union.
    Peer reviewed. Postprint (published version).
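The RL component can be illustrated with a single tabular Q-learning update. The papers learn richer navigation policies over continuous states, so this toy version only sketches the learning rule itself.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    States/actions here are toy placeholders, not the papers' spaces."""
    best_next = max(Q[s_next].values(), default=0.0)
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# Q maps state -> action -> value
Q = defaultdict(lambda: defaultdict(float))
```

In a combined scheme, the reward would encode social criteria (e.g., SFM-derived comfort) so the learned policy stays human-aware.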

    Interactive multiple object learning with scanty human supervision

    © 2016. This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). We present a fast and online human-robot interaction approach that progressively learns multiple object classifiers using scanty human supervision. Given an input video stream recorded during the human-robot interaction, the user only needs to annotate a small fraction of frames to compute object-specific classifiers based on random ferns that share the same features. The resulting methodology is fast (complex object appearances can be learned in a few seconds), versatile (it can be applied to unconstrained scenarios), scalable (real experiments show we can model up to 30 different object classes), and minimizes the amount of human intervention by leveraging the uncertainty measures associated with each classifier. We thoroughly validate the approach on synthetic data and on real sequences acquired with a mobile platform in indoor and outdoor scenarios containing a multitude of different objects. We show that, with little human assistance, we are able to build object classifiers that are robust to viewpoint changes, partial occlusions, varying lighting and cluttered backgrounds. © 2016 Elsevier Inc. All rights reserved.
    Peer reviewed. Postprint (author's final draft).
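A minimal random-fern classifier over binary features conveys the core idea: each fern hashes a few binary tests into a leaf of class counts, and the per-fern posteriors are combined as a product (a sum of logs). This sketch omits the paper's shared-feature and online-learning aspects.

```python
import numpy as np

class RandomFerns:
    """Minimal random-fern classifier over binary feature vectors."""
    def __init__(self, n_ferns=10, depth=4, n_classes=2, n_features=64, seed=0):
        rng = np.random.default_rng(seed)
        # each fern observes `depth` randomly chosen binary features
        self.idx = rng.integers(0, n_features, size=(n_ferns, depth))
        # class-conditional leaf counts, Laplace-smoothed with ones
        self.counts = np.ones((n_ferns, 2 ** depth, n_classes))
        self.pow2 = 2 ** np.arange(depth)

    def _leaves(self, x):
        # interpret each fern's bits as an integer leaf index
        return (x[self.idx] * self.pow2).sum(axis=1)

    def fit(self, X, y):
        for x, c in zip(X, y):
            self.counts[np.arange(len(self.idx)), self._leaves(x), c] += 1

    def predict(self, x):
        probs = self.counts / self.counts.sum(axis=2, keepdims=True)
        logp = np.log(probs[np.arange(len(self.idx)),
                            self._leaves(x)]).sum(axis=0)
        return int(np.argmax(logp))
```

The semi-naive independence assumption (independent ferns, dependent bits within a fern) is what makes both training and prediction a handful of table lookups, hence the few-second learning times the abstract reports.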

    Searching and tracking people in urban environments with static and dynamic obstacles

    This manuscript version is made available under the CC-BY-NC-ND 4.0 license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Searching for and tracking people in crowded urban areas, where they can be occluded by static or dynamic obstacles, is an important behavior for social robots that assist humans in urban outdoor environments. In this work, we propose a method that can search for and track people in real time using a Highest Belief Particle Filter Searcher and Tracker. It makes use of a modified Particle Filter (PF) which, in contrast to other methods, can both search for and track a person under uncertainty, with false-negative and missing detections, in continuous space and in real time. Moreover, the method uses dynamic obstacles to improve the predicted possible location of the person. Comparisons have been made with our previous method, the Adaptive Highest Belief Continuous Real-time POMCP Follower, under different conditions and with dynamic obstacles. Real-life experiments were carried out over two weeks with a mobile service robot in two urban environments of Barcelona, with other people walking around.
    Peer reviewed. Postprint (author's final draft).
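The tracking core of a particle filter over a person's 2D position can be sketched as the usual predict-weight-resample loop. The search behavior and the dynamic-obstacle reasoning of the Highest Belief PF are not modeled here; bounds and noise levels are illustrative assumptions.

```python
import numpy as np

class PersonParticleFilter:
    """Minimal particle filter for a person's 2D position."""
    def __init__(self, n=500, bounds=(0.0, 10.0), rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.p = self.rng.uniform(*bounds, size=(n, 2))  # particle positions
        self.w = np.full(n, 1.0 / n)                     # particle weights

    def predict(self, motion_std=0.2):
        # diffuse particles with the person's (unknown) motion noise
        self.p += self.rng.normal(0, motion_std, self.p.shape)

    def update(self, z, meas_std=0.5):
        if z is None:          # no detection: keep the belief spread (search)
            return
        d2 = ((self.p - z) ** 2).sum(axis=1)
        self.w *= np.exp(-0.5 * d2 / meas_std ** 2)  # Gaussian likelihood
        self.w /= self.w.sum()
        # multinomial resampling
        idx = self.rng.choice(len(self.p), len(self.p), p=self.w)
        self.p = self.p[idx]
        self.w[:] = 1.0 / len(self.p)

    def estimate(self):
        return (self.p * self.w[:, None]).sum(axis=0)
```

Passing `z=None` when the detector fails is the hook where a searcher-tracker would instead reweight particles by visibility and obstacle information rather than leave the belief untouched.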

    Robot social-aware navigation framework to accompany people walking side-by-side

    The final publication is available at link.springer.com. We present a novel robot social-aware navigation framework to walk side-by-side with people in crowded urban areas in a safe and natural way. The new system addresses the following key issues: proposing a new robot social-aware navigation model to accompany a person; extending the Social Force Model,
    Peer reviewed. Postprint (author's final draft).

    Searching and tracking people with cooperative mobile robots

    The final publication is available at link.springer.com. Social robots should be able to search for and track people in order to help them. In this paper we present two different techniques for coordinated multi-robot teams searching for and tracking people. A probability map (belief) of the target person's location is maintained; to initialize and update it, two methods were implemented and tested: one based on a reinforcement learning algorithm and the other based on a particle filter. The person is tracked if visible; otherwise, an exploration is performed by balancing, for each candidate location, the belief, the distance, and whether close locations are being explored by other robots of the team. The approach was validated through an extensive set of simulations using up to five agents and a large number of dynamic obstacles; furthermore, over three hours of real-life experiments with two robots searching and tracking were recorded and analysed.
    Peer reviewed. Postprint (author's final draft).
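The described balance between belief, travel distance, and teammates' exploration targets can be sketched as a simple candidate-scoring rule. The weights and the separation threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np

def choose_exploration_goal(candidates, belief, robot_pos, teammate_goals,
                            w_belief=1.0, w_dist=0.3, w_team=0.5, min_sep=2.0):
    """Pick the candidate search location with the best trade-off between
    belief mass, travel distance, and overlap with teammates' goals."""
    best, best_score = None, -np.inf
    for c, b in zip(candidates, belief):
        dist = np.linalg.norm(np.asarray(c) - robot_pos)
        # penalize candidates another robot is already heading toward
        team_pen = sum(1.0 for g in teammate_goals
                       if np.linalg.norm(np.asarray(c) - np.asarray(g)) < min_sep)
        score = w_belief * b - w_dist * dist - w_team * team_pen
        if score > best_score:
            best, best_score = c, score
    return best
```

The teammate penalty is what spreads the team out: a high-belief location already claimed by another robot can lose to a slightly less likely but unclaimed one.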