79 research outputs found

    Cooperative social robots: accompanying, guiding and interacting with people

    The development of social robots capable of interacting with humans is one of the principal challenges in the field of robotics. Robots are increasingly appearing in dynamic environments such as pedestrian walkways, universities, and hospitals; for this reason, their interaction with people must be conducted in a natural, gradual, and cordial manner, given that their function may be to aid or assist people. Navigation and interaction among humans in these environments are therefore key skills that future generations of robots will need to have. Additionally, robots must be able to cooperate with each other when necessary. This dissertation examines these challenges and describes the development of a set of techniques that allow robots to interact naturally with people in their environments as they guide or accompany humans in urban zones. The robots' movements are inspired by people's actions and gestures, the determination of appropriate personal space, and the rules of common social convention. The first issue this thesis tackles is the development of an innovative robot-companion approach based on the newly formulated Extended Social-Forces Model. We evaluate how people navigate and formulate a set of virtual social forces to describe the robot's behavior in terms of motion. Moreover, we introduce an analytical robot-companion metric to evaluate the system effectively. This assessment is based on the notion of "proxemics" and ensures that the robot's navigation is socially acceptable to the person being accompanied, as well as to other pedestrians in the vicinity. Through a user study, we show that people interpret the robot's behavior according to human social norms. In addition, a new framework for guiding people in urban areas with a set of cooperative mobile robots is presented. The proposed approach offers several significant advantages compared with those outlined in prior studies.
Firstly, it allows a group of people to be guided within both open and closed areas; secondly, it uses several cooperative robots; and thirdly, it includes features that enable the robots to keep people from leaving the group by approaching them in a friendly and safe manner. At the core of our approach, we propose a "Discrete Time Motion" model, which represents human and robot motions and predicts people's movements, so as to plan a route and provide the robots with concrete motion instructions. This thesis then goes one step further by developing the "Prediction and Anticipation Model". This model enables us to determine the optimal distribution of robots for preventing people from straying from the formation in specific areas of the map, and thus to facilitate the robots' task. Furthermore, we locally optimize the work performed by robots and people alike, thereby yielding a more human-friendly motion. Finally, an autonomous mobile robot capable of interacting with people to acquire human-assisted learning is introduced. First, we present different robot behaviors for approaching a person and successfully engaging with him or her. On the basis of this insight, we furnish the robot with a simple visual module for detecting human faces in real time. We observe that people ascribe different personalities to the robot depending on its behaviors. Once contact is initiated, people are given the opportunity to assist the robot in improving its visual skills. After this assisted-learning stage, the robot is able to detect people using the enhanced visual methods. Both contributions are extensively and rigorously tested in real environments. As a whole, this thesis demonstrates the need for robots that are able to operate acceptably around people and to behave in accordance with social norms while accompanying and guiding them.
Furthermore, this work shows that cooperation among a group of robots optimizes the performance of both the robots and the people.
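The virtual social forces driving the robot-companion behavior described above can be sketched, in the spirit of the social-force formulation the thesis extends, as a goal-attraction term plus exponential repulsion from nearby pedestrians. The function name, gains and parameters below are illustrative assumptions, not the thesis's tuned model:

```python
import math

def social_force(pos, vel, goal, others, desired_speed=1.2, tau=0.5,
                 amp=2.0, decay=0.3):
    """Sketch of one social-force step: attraction that relaxes the current
    velocity toward the desired velocity pointing at the goal, plus
    exponential repulsion from nearby pedestrians. Gains are illustrative."""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    gnorm = math.hypot(gx, gy)
    # Attractive force: relaxation toward the desired velocity.
    fx = (desired_speed * gx / gnorm - vel[0]) / tau
    fy = (desired_speed * gy / gnorm - vel[1]) / tau
    # Repulsive forces from other agents, decaying exponentially with distance.
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        dist = math.hypot(dx, dy)
        if dist > 1e-9:
            mag = amp * math.exp(-dist / decay)
            fx += mag * dx / dist
            fy += mag * dy / dist
    return fx, fy
```

Summing such terms over all nearby pedestrians yields a resultant force that can be integrated into a velocity command, which is the general mechanism a robot-companion planner of this kind builds on.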

    Continuous real time POMCP to find-and-follow people by a humanoid service robot

    Paper presented at the 14th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2014 "Humans and Robots Face-to-Face", held in Madrid (Spain), 18-20 November 2014. This study describes and evaluates two new methods for finding and following people in urban settings using a humanoid service robot: the Continuous Real-time POMCP method and its improved extension, the Adaptive Highest Belief Continuous Real-time POMCP follower. Both are able to run in real time in large continuous environments. These methods make use of the online search algorithm Partially Observable Monte-Carlo Planning (POMCP), which, in contrast to previous approaches, can plan under uncertainty over large state spaces. We compare the new methods with a heuristic person follower and demonstrate through extensive simulated and real-life experiments that they obtain better results. More than two hours, over 3 km, of autonomous navigation have been completed with a mobile humanoid robot in urban environments during real-life experiments. This work has been partially funded by project DPI2013-42458-P. Peer reviewed.
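The core sampling idea behind POMCP (evaluating actions by Monte-Carlo rollouts from states drawn out of a particle belief) can be sketched as follows. Real POMCP additionally grows a search tree with UCB1 action selection and reuses it across steps; this flat version, with all names assumed for illustration, shows only the belief-sampling and rollout machinery:

```python
import random

def mc_plan(belief, actions, simulate, rollouts=100, depth=5, gamma=0.95):
    """Flat Monte-Carlo planning sketch in the spirit of POMCP: sample state
    particles from the belief and score each action by discounted random
    rollouts through a generative model simulate(state, action) -> (state, reward)."""
    values = {a: 0.0 for a in actions}
    for a in actions:
        for _ in range(rollouts):
            s = random.choice(belief)          # sample a state particle
            total, discount, act = 0.0, 1.0, a
            for _ in range(depth):
                s, r = simulate(s, act)        # one generative-model step
                total += discount * r
                discount *= gamma
                act = random.choice(actions)   # random rollout policy
            values[a] += total / rollouts
    return max(values, key=values.get)
```

Because the planner only needs a black-box simulator rather than explicit transition and observation matrices, the same scheme scales to the large continuous state spaces the paper targets.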

    Mapping and Semantic Perception for Service Robotics

    In order to perform a task, robots need to be able to locate themselves in the environment. If a robot does not know where it is, it is impossible for it to move, reach its goal and complete the task. Simultaneous Localization and Mapping, known as SLAM, is a problem extensively studied in the literature for enabling robots to locate themselves in unknown environments. The goal of this thesis is to develop and describe techniques that allow a service robot to understand the environment by incorporating semantic information. This information also provides an improvement in the localization and navigation of robotic platforms. In addition, we demonstrate how a robot with limited capabilities can reliably and efficiently build the semantic maps needed to perform its quotidian tasks. The mapping system presented has the following features. On the map-building side, we propose the externalization of expensive computations to a cloud server. Additionally, we propose methods to register relevant semantic information with respect to the estimated geometric maps. Regarding the reuse of the maps built, we propose a method that combines map building with robot navigation to better explore an environment and obtain a semantic map with the objects relevant to a given mission. Firstly, we develop a semantic visual SLAM algorithm that merges the meaningless estimated map points with known objects. We use a monocular EKF (Extended Kalman Filter) SLAM system that has mainly been focused on producing geometric maps composed simply of points or edges, but without any associated meaning or semantic content.
The non-annotated map is built using only the information extracted from an image sequence. The semantic or annotated parts of the map (the objects) are estimated using the information in the image sequence and precomputed object models. As a second step, we improve the EKF SLAM presented previously by designing and implementing a visual SLAM system based on a distributed framework. The expensive map optimization and storage is allocated as a service in the Cloud, while a light camera-tracking client runs on a local computer. The robot's onboard computers are freed from most of the computation, the only extra requirement being an internet connection. The next step is to exploit the semantic information we are able to generate to see how to improve the navigation of a robot. The contribution of this thesis is focused on 3D sensing, which we use to design and implement a semantic mapping system. We then design and implement a visual SLAM system able to perform robustly in populated environments, since service robots work in spaces shared with people. The system is able to mask the image regions occupied by people out of the rigid SLAM pipeline, which boosts the robustness, relocation, accuracy and reusability of the geometric map. In addition, it estimates the full trajectory of each detected person with respect to the global map of the scene, irrespective of the location of the moving camera when the person was imaged. Finally, we focus our research on rescue and security applications. The deployment of a multirobot team in confined environments poses multiple challenges involving task planning, motion planning, localization and mapping, safe navigation, coordination and communications among all the robots.
The architecture integrates, jointly with all the above-mentioned functionalities, several novel features to achieve real exploration: localization based on semantic-topological features, deployment planning in terms of the semantic features learned and recognized, and map building.
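The people-masking step in the populated-environment SLAM system can be illustrated with a minimal sketch: visual features that fall inside detected person bounding boxes are discarded before they enter the rigid SLAM pipeline. The function name, keypoint format and box format below are assumptions for illustration:

```python
def filter_keypoints(keypoints, person_boxes):
    """Keep only features on the static scene: drop any (x, y) keypoint that
    lies inside a detected person's (x0, y0, x1, y1) bounding box, so dynamic
    image regions do not corrupt the rigid map."""
    def inside(pt, box):
        x, y = pt
        x0, y0, x1, y1 = box
        return x0 <= x <= x1 and y0 <= y <= y1
    return [pt for pt in keypoints
            if not any(inside(pt, b) for b in person_boxes)]
```

Filtering at the feature level, rather than after map estimation, is what lets the rest of the pipeline remain a standard rigid-scene SLAM system.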

    Navigation behavior design and representations for a people aware mobile robot system

    There are millions of robots in operation around the world today, and almost all of them operate on factory floors in isolation from people. However, it is now becoming clear that robots can provide much more value assisting people in daily tasks in human environments. Perhaps the most fundamental capability for a mobile robot is navigating from one location to another. Advances in mapping and motion planning research over the past decades have made indoor navigation a commodity for mobile robots. Yet questions remain on how robots should move around humans. This thesis advocates the use of semantic maps and spatial rules of engagement to enable non-expert users to effortlessly interact with and control a mobile robot. A core concept explored in this thesis is the Tour Scenario, where the task is to familiarize a mobile robot with a new environment after it is first shipped and unpacked in a home or office setting. During the tour, the robot follows the user and creates a semantic representation of the environment. The user labels objects, landmarks and locations by performing pointing gestures and using the robot's user interface. The spatial semantic information is meaningful to humans, as it allows giving the robot commands such as "bring me a cup from the kitchen table". While the robot is navigating towards a goal, it should not treat nearby humans as obstacles and should move in a socially acceptable manner. Three main navigation behaviors are studied in this work. The first is point-to-point navigation. The navigation planner presented in this thesis borrows ideas from human-human spatial interactions and takes into account personal spaces as well as the reactions of people in close proximity to the robot's trajectory. The second navigation behavior is person following. After the description of a basic following behavior, a user study on person following for telepresence robots is presented.
Additionally, situation awareness for person following is demonstrated, where the robot facilitates tasks by predicting the intent of the user and utilizing the semantic map. The third behavior is person guidance. A tour-guide robot is presented, with a particular application for visually impaired users.
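A basic following behavior of the kind described above can be sketched as saturated proportional control on the range and bearing to the tracked person. The gains, target distance and velocity limits below are illustrative assumptions, not values from the thesis:

```python
def follow_cmd(dist, bearing, target_dist=1.5, k_lin=0.6, k_ang=1.2,
               max_lin=0.7, max_ang=1.0):
    """Sketch of a person follower: proportional control on the measured
    distance (m) and bearing (rad) to the person, clipped to the robot's
    velocity limits. Returns (linear m/s, angular rad/s) commands."""
    # Drive forward when farther than the target distance, back up when closer.
    v = max(-max_lin, min(max_lin, k_lin * (dist - target_dist)))
    # Turn to keep the person centered in the robot's frame.
    w = max(-max_ang, min(max_ang, k_ang * bearing))
    return v, w
```

Keeping a nonzero target distance is what preserves the person's personal space; the richer behaviors in the thesis build situation awareness and intent prediction on top of a loop like this.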

    Local optimization of cooperative robot movements for guiding and regrouping people in a guiding mission

    Paper presented at IROS, held in Taipei (Taiwan), 18-22 October 2010. This article presents a novel approach for locally optimizing the work of cooperative robots and obtaining the minimum displacement of humans in a people-guiding mission. Unlike other methods, we consider situations where individuals can move freely and escape from the formation, and must then be regrouped by multiple mobile robots working cooperatively. The problem is addressed by introducing a "Discrete Time Motion" (DTM) model and a new cost function that minimizes the work required by robots for leading and regrouping people. The guiding mission is carried out in urban areas containing multiple obstacles and building constraints. Furthermore, an analysis of the forces acting among robots and humans is presented through simulations of different robot and human configurations and behaviors. This research was partially supported by CICYT projects DPI2007-61452 and Ingenio Consolider CSD2007-018, by CSIC project 200850I055 and by project IST-045062 of the European Community. The first author acknowledges Spanish FPU grant ref. AP2006-00825. Peer reviewed.
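The kind of cost such a function minimizes can be sketched as the accumulated displacement ("work") of the robots plus a weighted term for the guided people over discrete time steps. The path format and the human-displacement weight below are illustrative assumptions, not the paper's exact formulation:

```python
import math

def guiding_work(robot_paths, human_paths, w_human=2.0):
    """Sketch of a guiding-mission cost: total path length traveled by the
    robots plus a weighted total for the people, where each path is a list
    of (x, y) positions sampled at discrete time steps."""
    def path_length(path):
        # Sum of straight-line displacements between consecutive time steps.
        return sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return (sum(path_length(p) for p in robot_paths)
            + w_human * sum(path_length(p) for p in human_paths))
```

Weighting human displacement more heavily than robot displacement encodes the goal of moving people as little as possible while regrouping them, which a local optimizer can then minimize over candidate robot trajectories.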

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.
