
    COACHES Cooperative Autonomous Robots in Complex and Human Populated Environments

    Public spaces in large cities are increasingly becoming complex and unwelcoming environments, progressively more hostile and unpleasant to use because of overcrowding and the complexity of the information on signboards. It is in the interest of cities to make their public spaces easier to use, friendlier to visitors, and safer for a growing elderly population and for citizens with disabilities. Meanwhile, the last decade has seen tremendous progress in the development of robots operating in dynamic, complex and uncertain environments. The new challenge for the near future is to deploy a network of robots in public spaces to accomplish services that can help humans. Inspired by these challenges, the COACHES project addresses fundamental issues in the design of a robust system of self-directed autonomous robots with high-level skills in environment modelling and scene understanding, distributed autonomous decision-making, short-term interaction with humans, and robust, safe navigation in overcrowded spaces. To this end, COACHES will provide an integrated solution to new challenges in: (1) knowledge-based representation of the environment, (2) estimation of human activities and needs using Markov and Bayesian techniques, (3) distributed decision-making under uncertainty to collectively plan assistance, guidance and delivery tasks using Decentralized Partially Observable Markov Decision Processes with efficient algorithms to improve their scalability, and (4) multi-modal, short-term human-robot interaction to exchange information and requests. The COACHES project will provide a modular architecture to be integrated in real robots. COACHES will be deployed in the “Rive de l’orne” shopping mall in the city of Caen. COACHES is a cooperative system consisting of fixed cameras and mobile robots. The fixed cameras perform object detection, tracking and abnormal event detection (objects or behaviour).
The robots combine this information with what they perceive through their own sensors to provide information via their multi-modal interfaces, guide people to their destinations, show tramway stations, transport goods for elderly people, and so on. The COACHES robots will use different modalities (speech and displayed information) to interact with the mall visitors, shopkeepers and mall managers. The project has enlisted an important end-user (Caen la mer) providing the scenarios where the COACHES robots and systems will be deployed, and gathers together universities with complementary competences in cognitive systems (SU), robust image/video processing (VUB, UNICAEN), semantic scene analysis and understanding (VUB), collective decision-making using decentralized partially observable Markov decision processes and multi-agent planning (UNICAEN, Sapienza), and multi-modal, short-term human-robot interaction (Sapienza, UNICAEN).
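The needs-estimation idea in item (2) above can be illustrated with a minimal sketch: a discrete Bayes filter that updates a belief over what a visitor needs from coarse observations. The states, observation cues, and probabilities below are invented for illustration and are not taken from the COACHES project.

```python
# Hypothetical sketch: a discrete Bayes filter for estimating a visitor's
# need ("guidance", "information", or "none") from coarse observed cues.
# All states, observations, and probabilities are illustrative assumptions.

NEEDS = ["guidance", "information", "none"]

# P(observation | need): how likely each cue is under each hypothesis.
OBS_MODEL = {
    "looking_at_map":    {"guidance": 0.7, "information": 0.2, "none": 0.1},
    "approaching_robot": {"guidance": 0.3, "information": 0.6, "none": 0.1},
    "walking_past":      {"guidance": 0.1, "information": 0.1, "none": 0.8},
}

def bayes_update(belief, observation):
    """Return the normalized posterior belief after one observation."""
    likelihood = OBS_MODEL[observation]
    posterior = {n: belief[n] * likelihood[n] for n in NEEDS}
    total = sum(posterior.values())
    return {n: p / total for n, p in posterior.items()}

belief = {n: 1.0 / len(NEEDS) for n in NEEDS}  # uniform prior
for obs in ["looking_at_map", "looking_at_map"]:
    belief = bayes_update(belief, obs)

most_likely = max(belief, key=belief.get)
```

Repeated "looking_at_map" cues concentrate the belief on "guidance", which is when a robot might decide to approach and offer help.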

    GUI3DXBot: Una herramienta software interactiva para un robot móvil guía

    Nowadays, mobile robots are beginning to appear in public places. To perform such tasks properly, they must interact with humans. This paper presents the development of GUI3DXBot, a software tool for a tour-guide mobile robot, focusing on the software modules needed to guide users through an office building. GUI3DXBot is a client-server application: the server side runs on the robot, and the client side runs on a 10-inch Android tablet. The server side performs the perception, localization-mapping, and path-planning tasks. The client side implements the human-robot interface, which allows users to request or cancel a tour-guide service, shows the robot's location on the map, interacts with users, and allows tele-operating the robot in case of emergency. The contributions of this paper are twofold: it proposes a software-module design for guiding users in an office building, and the whole robot system was fully integrated and tested. GUI3DXBot was validated through software-integration and field tests. The field tests ran over a two-week period, and a user survey was conducted. The survey results show that users found GUI3DXBot friendly and intuitive, goal selection very easy, and the interactive messages very easy to understand; 90% of users found the robot icon on the map useful, users found the path drawn on the map useful, 90% found the local-global map view useful, and the guidance experience was rated very satisfactory (70%) or satisfactory (30%)
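The request/cancel flow between the tablet client and the robot-side server can be sketched as a small state machine. This is not the actual GUI3DXBot API; class, method, and goal names below are assumptions for illustration only.

```python
# Illustrative sketch (not GUI3DXBot's real interface): a minimal server-side
# tour manager of the kind a tablet client could call to request or cancel
# a guidance service. Goal names and states are invented for this example.

class TourManager:
    IDLE, GUIDING = "idle", "guiding"

    def __init__(self, known_goals):
        self.known_goals = set(known_goals)
        self.state = self.IDLE
        self.goal = None

    def request_tour(self, goal):
        """Accept a tour request only if the robot is free and the goal exists."""
        if self.state != self.IDLE or goal not in self.known_goals:
            return False
        self.state, self.goal = self.GUIDING, goal
        return True

    def cancel_tour(self):
        """Cancel the current tour (e.g. from the tablet's cancel button)."""
        self.state, self.goal = self.IDLE, None

mgr = TourManager(["office_101", "meeting_room", "lab"])
accepted = mgr.request_tour("office_101")    # robot was idle, request accepted
rejected = mgr.request_tour("meeting_room")  # robot already guiding, rejected
mgr.cancel_tour()
```

Keeping this state on the server side matches the split described in the abstract: the tablet only issues requests and renders status, while the robot arbitrates them.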

    Developing a person guidance module for hospital robots

    This dissertation describes the design and implementation of the Person Guidance Module (PGM) that enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route-guidance service to patients or visitors inside a hospital. A common problem in large hospital buildings today is that people unfamiliar with the premises cannot find their way around. Although a variety of guide robots currently exist on the market and offer a wide range of guidance and related activities, they do not fit the modular concept of the IWARD project. The PGM features a robust and foolproof non-hierarchical sensor-fusion approach combining active RFID, stereovision and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient’s ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, with this system the robot’s speed can be adjusted automatically to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in any unstructured environment solely from the robot’s onboard perceptual resources, in order to limit hardware installation costs and the need for indoor infrastructure support. A comparably comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix’s XScale processor.
To support standardized communication between different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with a PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application, as well as the necessary user acceptance
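The two mechanisms above, fusing follower-distance estimates from several sensors and adapting the robot's speed to the follower's pace, can be sketched as follows. The sensor names mirror the abstract, but the confidence weights, gain, and speed limits are illustrative assumptions, not values from the dissertation.

```python
# Hypothetical sketch: (1) non-hierarchical fusion of follower-distance
# estimates from several sensors via confidence weighting, and (2) adjusting
# robot speed to the follower's pace. Weights and gains are invented here.

def fuse_distance(estimates):
    """Weighted average of (distance_m, confidence) pairs, one per sensor."""
    total_w = sum(w for _, w in estimates)
    return sum(d * w for d, w in estimates) / total_w

def adjust_speed(current_speed, follower_distance,
                 target_distance=1.5, gain=0.5,
                 min_speed=0.0, max_speed=1.2):
    """Slow down when the follower lags (gap grows beyond the target),
    speed up when they close in, clamped to the robot's speed limits."""
    error = follower_distance - target_distance
    new_speed = current_speed - gain * error
    return max(min_speed, min(max_speed, new_speed))

# Active RFID is coarse (low weight); stereovision is the most precise here.
gap = fuse_distance([(2.0, 0.2),   # active RFID estimate
                     (1.8, 0.5),   # stereovision estimate
                     (1.9, 0.3)])  # Cricket mote estimate
speed = adjust_speed(current_speed=1.0, follower_distance=gap)
```

Since the fused gap (about 1.87 m) exceeds the 1.5 m target, the controller reduces the speed, letting the follower catch up.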

    What do staff in eldercare want a robot for? An assessment of potential tasks and user requirements for a long-term deployment

    Robotic aids could help bridge the gap between the rising number of older adults and the simultaneously declining number of care staff. Assessments of end-user requirements, especially those focusing on staff in eldercare facilities, are still sparse. Contributing to this field of research, this study presents end-user requirements and a task analysis derived from a methodological combination of interviews and focus-group discussions. The findings suggest various tasks that eldercare robots could take on, such as “fetch and carry” tasks, specific entertainment and information tasks, support in physical and occupational therapy, and security. Furthermore, the paper presents an iterative approach that closes the loop between requirements assessments and subsequent implementations that follow the identified requirements

    Heterogeneous context-aware robots providing a personalized building tour regular paper

    Existing robot guides offer a tour of a building, such as a museum or science centre, to one or more visitors. Usually the tours are predefined and lack support for dynamic interactions between the different robots. This paper focuses on the distributed collaboration of multiple heterogeneous robots (receptionist, companion) guiding visitors through a building. Semantic techniques support the formal definition of tour topics, the available content on a specific topic, and the robot and person profiles, including interests and acquired knowledge. The robot guides select topics depending on their participants' interests and prior knowledge. Whenever one guide moves into the proximity of another, the guides automatically exchange participants, maximizing the number of interesting topics covered. Robot collaboration is realized through a software module that allows a robot to transparently include behaviours performed by other robots in its own set of behaviours. The multi-robot visitor-guide application is integrated into an extended distributed heterogeneous robot team, using a receptionist robot that was not originally designed to cooperate with the guides. Evaluation of the implemented algorithms shows 90% coverage of the topics relevant to the participants
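The topic-selection idea, choosing tour topics the current participants are interested in but have not yet acquired, can be sketched greedily. The topic names and visitor profiles below are invented for illustration and do not come from the paper.

```python
# Illustrative sketch of interest-driven topic selection: score each topic
# by how many participants are interested in it and do not already know it,
# then present the best-scoring topics. Profiles here are toy assumptions.

def select_topics(available_topics, participants, max_topics=3):
    """Rank topics by unmet interest among the current participants."""
    def score(topic):
        return sum(1 for p in participants
                   if topic in p["interests"] and topic not in p["known"])
    ranked = sorted(available_topics, key=score, reverse=True)
    return [t for t in ranked if score(t) > 0][:max_topics]

participants = [
    {"interests": {"robotics", "optics"}, "known": {"optics"}},    # repeat visitor
    {"interests": {"robotics", "chemistry"}, "known": set()},      # new visitor
]
tour = select_topics(["optics", "robotics", "chemistry", "geology"], participants)
```

Under this scoring, exchanging participants between two guides is worthwhile whenever the reshuffled groups yield higher total unmet-interest scores, which is the optimization the abstract describes.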

    Personal Guides: Heterogeneous Robots Sharing Personal Tours in Multi-Floor Environments

    GidaBot is an application designed to set up and run a heterogeneous team of robots acting as tour guides in multi-floor buildings. Although a tour can span several floors, each robot can only service a single floor, so a guiding task may require collaboration among several robots. The system uses a robust inter-robot communication strategy to share goals and paths during guiding tasks. Such tours work as personal services carried out by one or more robots. In this paper, a face re-identification/verification module based on state-of-the-art techniques is developed, evaluated offline, and integrated into GidaBot’s real daily activity, to prevent new visitors from interfering with those already being attended. The problem is hard because users are casual visitors: no long-term information is stored, and consequently faces are unknown at training time. Initially, re-identification and verification are evaluated offline, considering different face detectors and computing distances in a face-embedding representation. To meet the goal online, several face detectors are fused in parallel to avoid the face-alignment bias that individual detectors produce under certain circumstances, and the decision is made based on a minimum-distance criterion. This fused approach outperforms any individual method and greatly improves the real system’s reliability, as tests carried out with real robots at the Faculty of Informatics in San Sebastian show. This work has been partially funded by the Basque Government, Spain, grant number IT900-16, and the Spanish Ministry of Economy and Competitiveness (MINECO), grant number RTI2018-093337-B-I00
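The fused minimum-distance rule described above can be sketched as follows: each detector yields its own embedding for a face, and two faces are declared the same person when the smallest embedding distance across detectors falls below a threshold. The 2-D embeddings and the threshold here are toy assumptions, not the paper's values.

```python
# Sketch of minimum-distance face verification fused over several detectors.
# One detector may produce a misaligned crop (large distance); fusing with
# min() lets a well-aligned detector still recover the match.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(embeddings_a, embeddings_b, threshold=0.6):
    """embeddings_x maps detector name -> embedding from that detector.
    Fuse by taking the minimum distance over detectors shared by both faces."""
    shared = embeddings_a.keys() & embeddings_b.keys()
    d_min = min(euclidean(embeddings_a[d], embeddings_b[d]) for d in shared)
    return d_min < threshold, d_min

# Detector A is thrown off by misalignment; detector B still matches well.
visitor  = {"detA": [0.1, 0.9],  "detB": [0.2, 0.8]}
attended = {"detA": [0.9, 0.1],  "detB": [0.25, 0.78]}
match, dist = same_person(visitor, attended)
```

Taking the minimum rather than the mean is what makes the fusion robust to a single detector's alignment failure, at the cost of a slightly higher false-accept risk, which the threshold must absorb.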

    A Networking Framework for Multi-Robot Coordination

    Autonomous robots operating in real environments need to be able to interact with a dynamic world populated with objects, people, and, in general, other agents. The current generation of autonomous robots, such as Honda's ASIMO or Sony's QRIO, has shown impressive performance in the mechanics and control of movement; moreover, recent literature reports encouraging results on the capability of such robots to represent themselves with respect to a dynamic external world, to plan future actions, and to evaluate the resulting situations in order to make new plans. However, when multiple robots are supposed to operate together, coordination and communication issues arise; while noteworthy results have been achieved in the control of a single robot, novel issues arise when the actions of one robot influence another's behavior. The increase in computational power available today makes it feasible, and even convenient, to organize robots into a single distributed computing environment in order to exploit the synergy among different entities. This is especially true for robot teams, where cooperation is the most natural scheme of operation, particularly when robots must operate in highly constrained scenarios, such as inhospitable or remote sites, or indoor environments where strict constraints on intrusiveness must be respected. In this case, computation is inherently network-centric, and to meet the communication needs of robot collectives, an efficient network infrastructure must be put in place; once a proper communication channel is established, multiple robots can benefit from interacting with each other to achieve a common goal.
The framework presented in this paper adopts a composite networking architecture in which a hybrid wireless network, composed of commonly available WiFi devices and the more recently developed wireless sensor networks, operates as a whole both to provide a communication backbone for the robots and to extract useful information from the environment. The ad-hoc WiFi backbone allows robots to exchange coordination information among themselves, while also carrying data measurements collected from the surrounding environment that are useful for localization or plain data-gathering purposes. The proposed framework, called RoboNet, extends a previously developed robotic tour-guide application (Chella et al., 2007) to a multi-robot setting; our system allows a team of robots to enhance their perceptive capabilities through coordination over a hybrid communication network; moreover, the same infrastructure allows the robots to exchange information so as to coordinate their actions toward a global common goal. The working scenario considered in this paper is a museum where guided tours are to be managed automatically. The museum is arranged both chronologically and topographically, but the sequence of findings to be visited can be rearranged depending on user queries, forming a sort of dynamic virtual labyrinth with various itineraries. The robots can therefore guide visitors both on prearranged tours and on interactive tours built along the way, depending on the interaction with the visitor: the robots rebuild the virtual connection between findings and, consequently, the path to be followed. This paper is organized as follows. Section 2 provides background on multi-robot coordination, and Section 3 describes the underlying ideas and motivation behind the proposed architecture, whose details are presented in Sections 4, 5, and 6.
A realistic application scenario is described in Section 7, and finally our conclusions are drawn in Section 8
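The "dynamic virtual labyrinth" idea, rebuilding the route between findings whenever a visitor's query changes the itinerary, amounts to path search over an exhibit graph. The museum layout below is an invented toy example, not the one from the paper.

```python
# Illustrative sketch: exhibits as a graph, with the route between two
# findings rebuilt on demand via breadth-first search. Room names are toy.
from collections import deque

MUSEUM = {  # adjacency list: which exhibit rooms connect to which
    "entrance":     ["hall_A", "hall_B"],
    "hall_A":       ["entrance", "amphora_room"],
    "hall_B":       ["entrance", "mosaic_room"],
    "amphora_room": ["hall_A", "mosaic_room"],
    "mosaic_room":  ["hall_B", "amphora_room"],
}

def shortest_route(start, goal):
    """Return the fewest-rooms path from start to goal, or None."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in MUSEUM[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

route = shortest_route("entrance", "mosaic_room")
```

When a visitor query reorders the findings to be visited, the robot simply chains such searches between consecutive findings to obtain the new tour.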

    Human-Like Guide Robot that Proactively Explains Exhibits

    We developed an autonomous, human-like guide robot for a science museum. It identifies individuals, estimates which exhibits visitors are looking at, and proactively approaches them to provide explanations accompanied by gaze, autonomously, using our new approach called speak-and-retreat interaction. The robot also performs relation-building behaviors such as greeting visitors by name and expressing a friendlier attitude toward repeat visitors. We conducted a field study in a science museum, during which our system operated essentially autonomously and visitors responded quite positively. First-time visitors interacted with the robot for about 9 minutes on average, and 94.74% expressed a desire to interact with it again in the future. Repeat visitors noticed its relation-building capability and perceived a closer relationship with it
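The relation-building behavior, greeting first-time visitors generically and repeat visitors more familiarly, can be sketched with a small visit memory. The class, phrases, and identifiers below are invented for illustration and are not the paper's implementation.

```python
# Toy sketch of relation-building greetings: track per-visitor visit counts
# and pick a friendlier phrase for repeat visitors. All names are invented.

class VisitorMemory:
    def __init__(self):
        self.visits = {}  # visitor_id -> number of visits seen so far

    def greet(self, visitor_id, name):
        """Return a greeting that depends on whether the visitor is new."""
        self.visits[visitor_id] = self.visits.get(visitor_id, 0) + 1
        if self.visits[visitor_id] == 1:
            return f"Hello {name}, welcome to the museum!"
        return f"Nice to see you again, {name}! Shall we continue where we left off?"

memory = VisitorMemory()
first = memory.greet("v42", "Ada")   # first encounter: generic welcome
second = memory.greet("v42", "Ada")  # repeat visitor: friendlier phrase
```

In the deployed system this memory would be keyed by the person-identification module rather than by a hand-assigned ID.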