55 research outputs found

    CompaRob: the shopping cart assistance robot

Recent technological developments offer an excellent opportunity to design systems able to help people in their own homes. In particular, assisting elderly people in their home environment can significantly improve their quality of life. However, elderly people also need help outside their usual environment to carry out daily tasks such as shopping. In this paper we present a person-following shopping-cart assistance robot capable of helping elderly people carry products in a supermarket. The paper first surveys related systems that perform this task using different approaches, such as attachable modules and computer vision. It then describes the proposed system and its main features in detail. The cart uses ultrasonic sensors and radio signals to provide a simple and effective person localization and following method. Moreover, the cart can be connected to a portable device such as a smartphone or tablet, making it easy for the end user to operate. The prototype has been tested in a grocery store, and simulations have been run to analyse its scalability to larger spaces where multiple robots could coexist. This work was partly supported by the Spanish Ministry under Grant DPI2014-57746-C3 (MERBOTS Project) and by Universitat Jaume I Grants P1-1B2015-68 and PID2010-12
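The abstract does not spell out the localization math, but a minimal sketch of how two ultrasonic range measurements to a user-carried tag could yield a cart-frame position and a follow command might look as follows (the baseline, gains and function names are illustrative assumptions, not the paper's implementation):

```python
import math

# Hypothetical sketch: two ultrasonic receivers mounted a fixed
# baseline apart on the cart each measure the distance to a tag
# carried by the user; the tag position follows by triangulation.
BASELINE = 0.5  # metres between the two receivers (assumed)

def locate_tag(d_left, d_right, baseline=BASELINE):
    """Return (x, y) of the tag in the cart frame: x is lateral
    offset from the centreline, y is forward distance.  Derived from
    the circle equations (x + b/2)^2 + y^2 = d_left^2 and
    (x - b/2)^2 + y^2 = d_right^2."""
    x = (d_left ** 2 - d_right ** 2) / (2.0 * baseline)
    y_sq = d_left ** 2 - (x + baseline / 2.0) ** 2
    y = math.sqrt(max(y_sq, 0.0))
    return x, y

def follow_command(x, y, target_dist=1.0, k_lin=0.8, k_ang=1.5):
    """Proportional follow controller: keep the person centred at
    target_dist metres ahead of the cart."""
    linear = k_lin * (y - target_dist)
    angular = k_ang * math.atan2(x, y)
    return linear, angular
```

With equal left/right ranges the tag is on the centreline and, at the target distance, both commanded velocities are zero.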

    Robust perception of humans for mobile robots RGB-depth algorithms for people tracking, re-identification and action recognition

Human perception is one of the most important skills for a mobile robot sharing its workspace with humans. This is true not only for navigation, because people must be avoided differently from other obstacles, but also because mobile robots must be able to truly interact with humans. In the near future, we can imagine robots being increasingly present in every house, performing services useful to human well-being. For this purpose, robust people-tracking algorithms must be exploited, and person re-identification techniques play an important role in allowing robots to recognize a person after a full occlusion or after long periods of time. Moreover, robots must be able to recognize what humans are doing in order to react accordingly, helping them if needed or even learning from them. This thesis tackles these problems by proposing approaches that combine algorithms based on both RGB and depth information, which can be obtained with recently introduced consumer RGB-D sensors. Our key contribution to people detection and tracking research is a depth-clustering method that applies a robust image-based people detector only to a small subset of possible detection windows, thus decreasing the number of false detections while reaching high computational efficiency. We also advance person re-identification research by proposing two techniques that exploit depth-based skeletal tracking algorithms: one is targeted at short-term re-identification and creates a compact yet discriminative signature of people by computing features at skeleton keypoints, which are highly repeatable and semantically meaningful; the other extracts long-term features, such as 3D shape, to compare people by matching the corresponding 3D point clouds acquired with an RGB-D sensor.
To account for the fact that people are articulated rather than rigid objects, it exploits 3D skeleton information to warp people's point clouds to a standard pose, making them directly comparable by means of least-squares fitting. Finally, we describe an extension of flow-based action recognition methods to the RGB-D domain which computes the motion over time of a person's 3D points by exploiting joint color and depth information, and recognizes human actions by classifying gridded descriptors of 3D flow. A further contribution of this thesis is the creation of a number of new RGB-D datasets that allow different algorithms to be compared on data acquired by consumer RGB-D sensors. All these datasets have been publicly released in order to foster research in these fields.
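As a rough illustration of the depth-clustering idea (not the thesis's actual algorithm), ground-plane depth points can be greedily grouped and filtered to person-sized clusters before any expensive image-based detector runs; all thresholds below are invented:

```python
# Illustrative sketch: group (x, z) ground-plane points into
# clusters and keep only person-sized ones, so the image-based
# detector is evaluated on a handful of candidate windows instead
# of the whole frame.

def cluster_points(points, max_gap=0.3):
    """Greedy single-linkage clustering of (x, z) points sorted by
    x; a new cluster starts when the gap to the previous point
    exceeds max_gap metres."""
    clusters = []
    for p in sorted(points):
        if clusters and p[0] - clusters[-1][-1][0] <= max_gap:
            clusters[-1].append(p)
        else:
            clusters.append([p])
    return clusters

def person_candidates(clusters, min_w=0.2, max_w=1.0):
    """Keep clusters whose lateral extent is plausible for a person."""
    out = []
    for c in clusters:
        width = c[-1][0] - c[0][0]
        if min_w <= width <= max_w:
            out.append(c)
    return out
```

Only the surviving clusters would be projected back to image regions and passed to the people detector.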

    Development of new intelligent autonomous robotic assistant for hospitals

Continuous technological development in modern societies has increased people's quality of life and average life-span. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity to develop new, autonomous, assistive robots to help alleviate this extra workload. The research question explored the extent to which a prototypical robotic platform can be created and how it may be implemented in a hospital environment with the aim of assisting hospital staff with daily tasks, such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor-control and kinematics solutions are examined in detail. Secondly, a new method is proposed for assessing the intrinsic properties of different flooring types by using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment is addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot can distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, optimising the navigation ability to update real-time path planning in a dynamic environment. Lastly, the problem of detecting gaze at long distances is addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing.
The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges currently present in introducing autonomous assistive robots in hospital environments. Open Access
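The flooring-classification contribution is only summarised above; a deliberately minimal stand-in, with invented features and class centroids, could look like this:

```python
import math

# Minimal stand-in for classifying flooring types from mechanical
# vibrations: summarise a window of vertical-vibration samples with
# two features and label it by nearest class centroid.  The feature
# choice and the centroids are invented for illustration.

def vibration_features(window):
    """RMS amplitude and mean absolute sample-to-sample change."""
    rms = math.sqrt(sum(v * v for v in window) / len(window))
    roughness = (sum(abs(b - a) for a, b in zip(window, window[1:]))
                 / (len(window) - 1))
    return rms, roughness

def classify(window, centroids):
    """centroids: {label: (rms, roughness)} learned offline."""
    f = vibration_features(window)
    return min(centroids,
               key=lambda k: (f[0] - centroids[k][0]) ** 2
                           + (f[1] - centroids[k][1]) ** 2)
```

A real pipeline would learn the centroids (or a richer classifier) from labelled drive-over recordings per floor type.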

    Camera Marker Networks for Pose Estimation and Scene Understanding in Construction Automation and Robotics.

    The construction industry faces challenges that include high workplace injuries and fatalities, stagnant productivity, and skill shortage. Automation and Robotics in Construction (ARC) has been proposed in the literature as a potential solution that makes machinery easier to collaborate with, facilitates better decision-making, or enables autonomous behavior. However, there are two primary technical challenges in ARC: 1) unstructured and featureless environments; and 2) differences between the as-designed and the as-built. It is therefore impossible to directly replicate conventional automation methods adopted in industries such as manufacturing on construction sites. In particular, two fundamental problems, pose estimation and scene understanding, must be addressed to realize the full potential of ARC. This dissertation proposes a pose estimation and scene understanding framework that addresses the identified research gaps by exploiting cameras, markers, and planar structures to mitigate the identified technical challenges. A fast plane extraction algorithm is developed for efficient modeling and understanding of built environments. A marker registration algorithm is designed for robust, accurate, cost-efficient, and rapidly reconfigurable pose estimation in unstructured and featureless environments. Camera marker networks are then established for unified and systematic design, estimation, and uncertainty analysis in larger scale applications. The proposed algorithms' efficiency has been validated through comprehensive experiments. Specifically, the speed, accuracy and robustness of the fast plane extraction and the marker registration have been demonstrated to be superior to existing state-of-the-art algorithms. These algorithms have also been implemented in two groups of ARC applications to demonstrate the proposed framework's effectiveness, wherein the applications themselves have significant social and economic value. 
The first group is related to in-situ robotic machinery, including an autonomous manipulator for assembling digital architecture designs on construction sites to help improve productivity and quality, and an intelligent guidance and monitoring system for articulated machinery such as excavators to help improve safety. The second group emphasizes human-machine interaction to make ARC more effective, including a mobile Building Information Modeling and way-finding platform with discrete location recognition to increase indoor facility management efficiency, and a 3D scanning and modeling solution for rapid and cost-efficient dimension checking and concise as-built modeling. PhD thesis, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113481/1/cforrest_1.pd
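The dissertation's marker registration covers full 6-DOF pose with uncertainty analysis; as a hedged toy version, the closed-form 2D least-squares registration below recovers the rotation and translation that align known marker-corner coordinates with their observed positions:

```python
import math

# Toy 2D marker registration (the actual method is 6-DOF with
# uncertainty analysis): Kabsch-style closed-form least-squares
# alignment of model corners to observed corners.

def register_2d(model_pts, observed_pts):
    """Return (theta, tx, ty) mapping model points onto observations."""
    n = len(model_pts)
    mcx = sum(p[0] for p in model_pts) / n
    mcy = sum(p[1] for p in model_pts) / n
    ocx = sum(p[0] for p in observed_pts) / n
    ocy = sum(p[1] for p in observed_pts) / n
    # Cross-covariance terms of the centred point sets.
    sxx = sxy = syx = syy = 0.0
    for (mx, my), (ox, oy) in zip(model_pts, observed_pts):
        a, b = mx - mcx, my - mcy
        c, d = ox - ocx, oy - ocy
        sxx += a * c; sxy += a * d
        syx += b * c; syy += b * d
    theta = math.atan2(sxy - syx, sxx + syy)
    tx = ocx - (mcx * math.cos(theta) - mcy * math.sin(theta))
    ty = ocy - (mcx * math.sin(theta) + mcy * math.cos(theta))
    return theta, tx, ty
```

Aligning a unit-square marker against its corners rotated by 90 degrees and shifted by (2, 3) recovers exactly that transform.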

    Searching and tracking people with cooperative mobile robots

The final publication is available at link.springer.com. Social robots should be able to search for and track people in order to help them. In this paper we present two techniques for coordinated multi-robot teams searching for and tracking people. A probability map (belief) of the target person's location is maintained; to initialize and update it, two methods were implemented and tested: one based on a reinforcement-learning algorithm and the other on a particle filter. The person is tracked when visible; otherwise an exploration is performed by balancing, for each candidate location, the belief, the distance, and whether nearby locations are being explored by other robots of the team. The approach was validated through an extensive set of simulations using up to five agents and a large number of dynamic obstacles; furthermore, over three hours of real-life experiments with two robots searching and tracking were recorded and analysed. Peer-reviewed postprint (author's final draft).
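For the particle-filter variant, a minimal sketch of maintaining the belief and applying a negative observation ("the robot looked here and saw nobody") could be as follows; the motion noise, sensing radius and miss probability are illustrative, not the paper's values:

```python
import random

# Hedged sketch of a particle-filter belief over where the target
# person might be.  A negative observation reweights particles
# inside the sensed-but-empty disc, then resamples.

def diffuse(particles, sigma=0.5):
    """Predict step: the person may move, so blur the particles."""
    return [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
            for x, y in particles]

def negative_update(particles, sensed_at, radius=2.0, miss_prob=0.1):
    """Particles inside the observed-empty disc keep only miss_prob
    weight (sensor could have missed the person); then resample
    with replacement."""
    weights = [miss_prob
               if (x - sensed_at[0]) ** 2 + (y - sensed_at[1]) ** 2
               <= radius ** 2 else 1.0
               for x, y in particles]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))
```

After a few negative updates, probability mass drains out of explored regions, which is what drives the exploration balance described above.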

    Local user mapping via multi-modal fusion for social robots

User detection, recognition and tracking is at the heart of Human-Robot Interaction, and yet, to date, no universally robust method exists for making a robot aware of the people in its surroundings. The presented work aims at bringing into existing social robotics platforms different techniques, some classical and others novel, for detecting, recognizing and tracking human users. These algorithms are based on a variety of sensors, mainly cameras and depth-imaging devices, but also lasers and microphones. The results of these parallel algorithms are then merged so as to obtain a modular, expandable and fast architecture, resulting in a local user map built by multi-modal fusion. Thanks to this user-awareness architecture, user detection, recognition and tracking capabilities can be given to any robot easily and quickly by re-using the modules that match its sensors and its processing performance. The architecture provides all the relevant information about the users around the robot, which can then be used by end-user applications that adapt their behavior to those users. The variety of social robots in which the architecture has been successfully implemented includes a car-like mobile robot, an articulated flower and a humanoid assistance robot. Some modules of the architecture are very lightweight but have low reliability; others need more CPU but offer higher confidence. All module configurations are possible and fit the range of possible robotics hardware configurations. All the modules are independent and highly configurable, so no code needs to be developed to build a new configuration: the user only writes a ROS launch file, a simple text file listing the desired modules. The architecture has been developed with modularity and speed in mind. It is based on the Robot Operating System (ROS) architecture, a de facto software standard in robotics.
The different people detectors comply with a common interface called PeoplePoseList Publisher, while the people recognition algorithms comply with an interface called PeoplePoseList Matcher. The fusion of all these modules is based on Unscented Kalman Filter techniques. Extensive benchmarks of the sub-components and of the whole architecture, using both academic datasets and data acquired in our lab, together with sample end-user applications, demonstrate the validity and interest of all levels of the architecture.
Official doctoral programme in Electrical, Electronic and Automation Engineering. Committee: President: Fernando Torres Medina; Secretary: María Dolores Blanco Rojas; Member: Jorge Manuel Miranda Día
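The fusion layer is based on Unscented Kalman Filter techniques; as a simplified stand-in, the toy linear Kalman update below shows how detections from a noisy lightweight module and a reliable heavier module could be merged, with each module's reliability expressed as its measurement variance (the constant-position model and all numbers are assumptions, not the thesis's UKF):

```python
# Toy merge of two detectors' position estimates for the same user:
# a scalar-per-axis Kalman update with isotropic noise, applied once
# per incoming detection.  A less reliable module simply carries a
# larger measurement variance.

def kalman_update(state, var, meas, meas_var):
    """Fuse measurement meas (x, y) with variance meas_var into the
    current (state, var) estimate; return the updated pair."""
    gain = var / (var + meas_var)
    new_state = tuple(s + gain * (m - s) for s, m in zip(state, meas))
    new_var = (1.0 - gain) * var
    return new_state, new_var

# Start from a vague prior, then fuse one cheap/noisy detection
# followed by one reliable detection of the same user.
state, var = (0.0, 0.0), 100.0
state, var = kalman_update(state, var, (1.0, 2.1), 4.0)   # noisy module
state, var = kalman_update(state, var, (1.2, 2.0), 0.25)  # reliable module
```

The fused estimate ends up close to the reliable module's reading, with a variance smaller than either measurement's: exactly the behaviour one wants when mixing lightweight and heavyweight detectors.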

    Context-aware design and motion planning for autonomous service robots


    Adaptive obstacle detection for mobile robots in urban environments using downward-looking 2D LiDAR

Environment perception is important for collision-free motion planning of outdoor mobile robots. This paper presents an adaptive obstacle detection method for outdoor mobile robots using a single downward-looking LiDAR sensor. The method begins by extracting line segments from the raw sensor data, and then estimates the height and the vector of the scanned road surface at each moment. Subsequently, the segments are classified as either road ground or obstacles based on the average height of each line segment and its deviation from the road vector estimated from previous measurements. A series of experiments was conducted in several scenarios, including both normal and complex scenes. The experimental results show that the proposed approach accurately detects obstacles on roads and can effectively deal with obstacles of different heights in urban road environments.
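A hedged sketch of the ground-versus-obstacle labelling step, using an invented segment format and height threshold (the paper additionally checks deviation from the estimated road vector), might be:

```python
# Sketch of the classification step: each extracted line segment is
# labelled ground or obstacle from its mean height relative to the
# currently estimated road surface.  Segment format and threshold
# are illustrative choices, not the paper's.

def classify_segments(segments, road_height, height_thresh=0.15):
    """segments: list of lists of (x, z) points, z = height in
    metres.  Returns a parallel list of 'ground'/'obstacle' labels."""
    labels = []
    for seg in segments:
        mean_z = sum(p[1] for p in seg) / len(seg)
        if abs(mean_z - road_height) <= height_thresh:
            labels.append("ground")
        else:
            labels.append("obstacle")
    return labels
```

Re-estimating road_height each scan is what makes the method adaptive to slopes and curbs.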

    Safe navigation and human-robot interaction in assistant robotic applications

The abstract is in the attachment.