47 research outputs found

    A multisensor based approach using supervised learning and particle filtering for people detection and tracking

    People detection and tracking is an interesting skill for interactive social robots. Laser range finder (LRF) and vision based approaches are the most common, although both present strengths and weaknesses. In this paper, a multisensor system to detect and track people in the proximity of a mobile robot is proposed. First, a supervised learning approach is used to recognize patterns of legs in the proximity of the robot using a LRF. After this, a tracking algorithm is developed using a particle filter and the observation model of legs. Second, a Kinect sensor is used to carry out people detection and tracking. This second method uses a face detector in the color image, the color of the clothes and the depth information. The strengths and weaknesses of this second proposal are also discussed. In order to combine the strengths of both sensors, a third algorithm is proposed, in which laser and Kinect data are fused to detect and track people. Finally, the multisensor approach is experimentally evaluated in a real indoor environment. The multisensor system outperforms the single sensor based approaches. This work has been partially supported by the Spanish Government project TIN2012-38969
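    As a rough illustration of the tracking stage described above, the sketch below implements a minimal bootstrap particle filter driven by 2D leg detections. It assumes a separate leg classifier already returns (x, y) positions from the LRF scan; the class name, noise parameters and detections are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of a bootstrap particle filter for 2D person tracking,
# assuming leg detections (x, y) from a laser range finder are already
# provided by a separate classifier. Names and parameters are illustrative.
import numpy as np

class ParticleTracker:
    def __init__(self, init_xy, n_particles=500, motion_std=0.05, obs_std=0.15):
        self.n = n_particles
        self.motion_std = motion_std   # process noise per step (m)
        self.obs_std = obs_std         # spread of the leg observation model (m)
        # particles store (x, y); all start near the first detection
        self.particles = init_xy + np.random.randn(self.n, 2) * obs_std
        self.weights = np.full(self.n, 1.0 / self.n)

    def predict(self):
        # constant-position motion model with Gaussian diffusion
        self.particles += np.random.randn(self.n, 2) * self.motion_std

    def update(self, leg_detections):
        # weight each particle by its distance to the closest leg detection
        d = np.linalg.norm(
            self.particles[:, None, :] - np.asarray(leg_detections)[None, :, :], axis=2
        ).min(axis=1)
        self.weights = np.exp(-0.5 * (d / self.obs_std) ** 2) + 1e-12
        self.weights /= self.weights.sum()

    def resample(self):
        idx = np.random.choice(self.n, self.n, p=self.weights)
        self.particles = self.particles[idx]
        self.weights.fill(1.0 / self.n)

    def estimate(self):
        return np.average(self.particles, weights=self.weights, axis=0)

# usage with synthetic detections of a slowly walking person
tracker = ParticleTracker(init_xy=np.array([1.0, 0.5]))
for t in range(10):
    detections = [np.array([1.0 + 0.02 * t, 0.5]), np.array([1.1 + 0.02 * t, 0.5])]
    tracker.predict()
    tracker.update(detections)
    tracker.resample()
print(tracker.estimate())
```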

    Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas

    In this article, we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to provide a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the types of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally, some results of the project related to the sensor network are highlighted

    Human-Robot Interaction Strategies for Walker-Assisted Locomotion

    Neurological and age-related diseases affect human mobility at different levels, causing partial or total loss of that faculty. There is a significant need to improve safe and efficient ambulation of patients with gait impairments. In this context, walkers present important benefits for human mobility, improving balance and reducing the load on the lower limbs. Most importantly, walkers induce the use of patients' residual mobility capacities in different environments. In the field of robotic technologies for gait assistance, a new category of walkers has emerged, integrating robotic technology, electronics and mechanics. Such devices are known as robotic walkers, intelligent walkers or smart walkers. One of the specific and important aspects common to the fields of assistive technologies and rehabilitation robotics is the intrinsic interaction between the human and the robot. In this thesis, the concept of Human-Robot Interaction (HRI) for human locomotion assistance is explored. This interaction is composed of two interdependent components. On the one hand, the key role of a robot in Physical HRI (pHRI) is the generation of supplementary forces to empower the human locomotion. This involves a net flux of power between both actors. On the other hand, one of the crucial roles of Cognitive HRI (cHRI) is to make the human aware of the possibilities of the robot while allowing them to maintain control of the robot at all times. This doctoral thesis presents a new multimodal human-robot interface for testing and validating control strategies applied to a robotic walker for assisting human mobility and gait rehabilitation. This interface extracts navigation intentions from a novel sensor fusion method that combines: (i) a Laser Range Finder (LRF) sensor to estimate the user's leg kinematics, (ii) wearable Inertial Measurement Unit (IMU) sensors to capture the human and robot orientations and (iii) force sensors to measure the physical interaction between the human's upper limbs and the robotic walker. Two closed control loops were developed to naturally adapt the walker position and to perform body weight support strategies. First, a force interaction controller generates velocity outputs to the walker based on the upper-limb physical interaction. Second, an inverse kinematic controller keeps the walker at a desired position relative to the human, improving such interaction. The proposed control strategies are suitable for natural human-robot interaction, as shown during the experimental validation. Moreover, methods for sensor fusion to estimate the control inputs were presented and validated. In the experimental studies, the parameter estimation was precise and unbiased. It also showed repeatability when speed changes and continuous turns were performed
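    The force interaction controller mentioned above can be illustrated with a simple first-order admittance law that maps the forces and torques measured at the handles into walker velocity commands. The sketch below is a minimal, hedged version under assumed virtual mass and damping values; it is not the thesis implementation or its parameters.

```python
# Hedged sketch of a first-order admittance law mapping forces measured at the
# walker handles to linear and angular velocity commands. Mass/damping values
# and the interface are illustrative assumptions.
import numpy as np

class AdmittanceController:
    def __init__(self, mass=10.0, damping=25.0, inertia=4.0, ang_damping=12.0):
        self.m, self.d = mass, damping          # virtual mass/damping, linear channel
        self.J, self.b = inertia, ang_damping   # virtual inertia/damping, angular channel
        self.v = 0.0                            # commanded forward velocity (m/s)
        self.w = 0.0                            # commanded yaw rate (rad/s)

    def step(self, f_forward, torque_z, dt):
        # m*dv/dt + d*v = f  ->  explicit Euler integration
        self.v += dt * (f_forward - self.d * self.v) / self.m
        self.w += dt * (torque_z - self.b * self.w) / self.J
        return self.v, self.w

ctrl = AdmittanceController()
for _ in range(100):                 # 1 s of pushing with 20 N and a small turning torque
    v_cmd, w_cmd = ctrl.step(f_forward=20.0, torque_z=2.0, dt=0.01)
print(round(v_cmd, 3), round(w_cmd, 3))
```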

    Non-parametric data optimization for 2D laser based people tracking

    © 2017 IEEE. Generally, a model describing human motion patterns should be able to enhance tracking performance, particularly when dealing with long-term occlusions. These patterns can be efficiently learned by applying Gaussian Processes (GPs). However, GPs can become computationally expensive as training data accumulate over time. Thus, with the proposed data selection and management approach using Mutual Information (MI) and Mahalanobis Distance (MD), we are able to keep the necessary portion of informative data and discard the rest. This approach is then evaluated using horizontal 2D scans of a public area of our research centre, acquired with a stationary laser range finder. Experimental results show that even a 90% reduction of the data did not significantly increase the Root Mean Square Error (RMSE). An implementation of a Gaussian Process-Particle Filter tracker for people tracking with long-term occlusions produces remarkable tracking performance when compared to an Extended Kalman Filter (EKF) tracker
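    The data-selection idea can be illustrated with a minimal sketch in which a new sample is stored only if its Mahalanobis distance to the already-kept set exceeds a threshold, i.e. only samples from novel regions are retained for GP training. The threshold, seed size and synthetic data are assumptions; the paper's mutual-information criterion is not reproduced here.

```python
# Minimal sketch of distance-based training-data selection for a Gaussian Process:
# a new sample is kept only if its Mahalanobis distance to the stored set exceeds
# a threshold. Threshold and data are illustrative assumptions.
import numpy as np

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def select_informative(samples, threshold=1.5):
    kept = [samples[0], samples[1], samples[2]]   # seed set to define an initial covariance
    for x in samples[3:]:
        data = np.vstack(kept)
        mean = data.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(data.T) + 1e-6 * np.eye(data.shape[1]))
        if mahalanobis(x, mean, cov_inv) > threshold:
            kept.append(x)                         # sample lies in a novel region, keep it
    return np.vstack(kept)

rng = np.random.default_rng(0)
trajectory = rng.normal(size=(200, 2)).cumsum(axis=0) * 0.1  # synthetic 2D motion data
reduced = select_informative(trajectory)
print(len(trajectory), "->", len(reduced), "samples kept")
```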

    A Cost-Effective Person-Following System for Assistive Unmanned Vehicles with Deep Learning at the Edge

    The vital statistics of the last century highlight a sharp increase in the average age of the world population, with a consequent growth in the number of older people. Service robotics applications have the potential to provide systems and tools that support autonomous and self-sufficient older adults in their homes in everyday life, thereby avoiding the need for monitoring by third parties. In this context, we propose a cost-effective modular solution to detect and follow a person in an indoor, domestic environment. We exploited the latest advancements in deep learning optimization techniques and compared different neural network accelerators to provide a robust and flexible person-following system at the edge. Our proposed cost-effective and power-efficient solution is fully integrable with pre-existing navigation stacks and creates the foundations for the development of fully autonomous and self-contained service robotics applications

    Supervisory Autonomous Control of Homogeneous Teams of Unmanned Ground Vehicles, with Application to the Multi-Autonomous Ground-Robotic International Challenge

    There are many different proposed methods for supervisory control of semi-autonomous robots. There have also been numerous software simulations to determine how many robots can be successfully supervised by a single operator, a problem known as fan-out, but only a few studies have been conducted using actual robots. As evidenced by the MAGIC 2010 competition, there is increasing interest in amplifying human capacity by allowing one or a few operators to supervise a team of robotic agents. This interest provides motivation for a more in-depth evaluation of how many autonomous or semi-autonomous robots an operator can successfully supervise. The MAGIC competition allowed two human operators to supervise a team of robots in a complex search-and-mapping operation, and it provided the best opportunity to date to study, through practice, the actual fan-out with multiple semi-autonomous robots. The current research provides a step forward in determining fan-out by offering an initial framework for testing multi-robot teams under supervisory control. One conclusion of this research is that the proposed framework is not complex or complete enough to provide conclusive data for determining fan-out. Initial testing using operators with limited training suggests that there is no obvious pattern to the operator interaction time with robots based on the number of robots and the complexity of the tasks. The initial hypothesis, that for a given task and robot there exists an optimal robot-to-operator efficiency ratio, could not be confirmed. Rather, the data suggest that the ability of the operator is a dominant factor in studies involving operators with limited training supervising small teams of robots. It is possible that, with more extensive training, operator times would become more closely related to the number of agents and the complexity of the tasks. The work described in this thesis provides an experimental framework and a preliminary data set for other researchers to critique and build upon. As the demand increases for agent-to-operator ratios greater than one, the need to expand upon research in this area will continue to grow

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Behavioural strategy for indoor mobile robot navigation in dynamic environments

    PhD Thesis. The development of behavioural strategies for indoor mobile navigation has become a challenging and practical issue in cluttered indoor environments, such as a hospital or factory, where there are many static and moving objects, including humans and other robots, all of which are trying to complete their own specific tasks; some objects may be moving in a similar direction to the robot, whereas others may be moving in the opposite direction. The key requirement for any mobile robot is to avoid colliding with any object which may prevent it from reaching its goal or, as a consequence, bring harm to any individual within its workspace. This challenge is further complicated by unobserved objects suddenly appearing in the robot's path, particularly when the robot crosses a corridor or an open doorway. Therefore, the mobile robot must be able to anticipate such scenarios and manoeuvre quickly to avoid collisions. In this project, a hybrid control architecture has been designed to navigate within dynamic environments. The control system includes three levels, namely deliberative, intermediate and reactive, which work together to achieve short, fast and safe navigation. The deliberative level creates a short and safe path from the current position of the mobile robot to its goal using the wavefront algorithm, estimates the current location of the mobile robot, and extracts the region from which unobserved objects may appear. The intermediate level links the deliberative level and the reactive level, which includes several behaviours for implementing the global path in such a way as to avoid any collision. In avoiding dynamic obstacles, the controller has to identify and extract obstacles from the sensor data, estimate their speeds, and then regulate its own speed and direction to minimize the collision risk and maximize the speed to the goal. The velocity obstacle (VO) approach is considered an easy and simple method for avoiding dynamic obstacles, whilst the collision cone principle is used to detect the collision situation between two circular-shaped objects. However, the VO approach has two challenges when applied in indoor environments. The first challenge is the extraction of collision cones of non-circular objects from sensor data, in which applying circle-fitting methods generally produces large and inaccurate collision cones, especially for line-shaped obstacles such as walls. The second challenge is that the mobile robot sometimes cannot move to its goal because all its velocities towards the goal are located within collision cones. In this project, a method has been demonstrated to extract the collision cones of circular and non-circular objects using a laser sensor, where the obstacle size and the collision time are considered to weight the robot's velocities. In addition, the principle of the virtual obstacle was proposed to minimize the collision risk with unobserved moving obstacles. The simulation and experiments using the proposed control system on a Pioneer mobile robot showed that the mobile robot can successfully avoid static and dynamic obstacles. Furthermore, the mobile robot was able to reach its target within an indoor environment without causing any collision or missing the target
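    The collision cone test for two circular objects mentioned above can be sketched in a few lines: a relative velocity leads to a future collision if its direction falls inside the cone subtended by the obstacle enlarged by the combined radii. The function and numbers below are illustrative, not the thesis implementation.

```python
# Hedged sketch of the collision-cone test used in the velocity obstacle (VO)
# approach for two circular objects. All geometry and numbers are illustrative.
import numpy as np

def in_collision_cone(p_robot, p_obs, v_robot, v_obs, r_robot, r_obs):
    rel_p = p_obs - p_robot           # vector from robot to obstacle
    rel_v = v_robot - v_obs           # velocity of robot relative to obstacle
    dist = np.linalg.norm(rel_p)
    r = r_robot + r_obs               # combined radius (enlarged obstacle)
    if dist <= r:
        return True                   # already overlapping
    half_angle = np.arcsin(r / dist)  # half-angle of the collision cone
    speed = np.linalg.norm(rel_v)
    if speed < 1e-9:
        return False                  # no relative motion, no collision
    # angle between the relative velocity and the direction to the obstacle
    cos_angle = np.clip(rel_p @ rel_v / (dist * speed), -1.0, 1.0)
    return np.arccos(cos_angle) < half_angle

# a robot heading straight at a slowly crossing obstacle
print(in_collision_cone(p_robot=np.array([0.0, 0.0]), p_obs=np.array([3.0, 0.0]),
                        v_robot=np.array([1.0, 0.0]), v_obs=np.array([0.0, 0.2]),
                        r_robot=0.3, r_obs=0.4))
```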

    Active Training and Assistance Device for an Individually Adaptable Strength and Coordination Training

    The ageing of the world population, particularly in the Western world, confronts humanity with a major challenge. Substantial effects are to be expected in the healthcare sector, which faces a major task in view of a growing number of people with age-related physical and cognitive decline and the resulting increased need for individual care. Especially in the last century, many scientific efforts were made to understand the causes and development of age-related diseases, their progression and possible treatments. Current models show that the decisive factor in the development of such diseases is the lack of sensory and motor input, which in turn is the result of reduced mobility and ever fewer new experiences. A large number of studies show that increased physical activity has a positive effect on the general condition of older adults with mild cognitive impairment and on the people in their immediate environment. This work aims to offer older people the opportunity to complete individual physical training independently and safely. Over the last two decades, research on robotic mobility assistants, also known as smart walkers, has focused on sensory and cognitive support for older and impaired people. Numerous efforts have produced a variety of approaches to human-walker interaction, all with the aim of supporting movement and navigation within the environment. Nevertheless, training options for motor activation by means of smart walkers have not yet been explored. In contrast to some smart walkers that focus on rehabilitation options for an already advanced disease, this work aims to slow down cognitive impairment at an early stage as far as possible, so that the physical and mental fitness of the user is maintained for as long as possible. To test the idea of such a training, a prototype device called the RoboTrainer prototype was designed, a mobile robot platform equipped with an additional force-torque sensor and a bicycle handlebar as the input interface. The training involves predefined training paths with markings on the floor, along which the user is to navigate the device. The prototype uses an admittance equation to compute its velocity from the user's input. Furthermore, the device initiates targeted control actions, i.e. changes in the robot's behaviour, to make the training challenging. The pilot study conducted with ten older adults with incipient dementia showed a significant increase in their ability to interact with the device. It also demonstrated the benefit of control actions for continuously adapting the complexity of the training. Although this study showed the feasibility of the training, the footprint and mechanical stability of the RoboTrainer prototype were suboptimal. The second part of this work therefore focuses on designing a new device that remedies the drawbacks of the prototype. In addition to increased mechanical stability, the RoboTrainer v2 allows its footprint to be adjusted. This specific feature of smart walkers serves above all to adapt the support area for the user, enabling, on the one hand, agile training with healthy persons and, on the other hand, rehabilitation scenarios with people who need physical support. The control approach for the RoboTrainer v2 extends the prototype's admittance controller with three adaptive strategies. The first is the adaptation of the sensitivity to the user's input depending on the stability of the user-walker system, which prevents oscillations that can occur when the user's hands stiffen. The second adaptation involves a novel non-linear, velocity-based modification of the admittance parameters to increase the manoeuvrability of the walker. The third adaptation takes place before the actual training in a parametrisation process, in which the user's own interaction forces are measured in order to fine-tune and compute individual controller constants. The control actions are changes in the device's behaviour that serve as building blocks for assistive and challenging training sessions with the RoboTrainer. They use the virtual force-field concept to influence the motion of the device in the training environment. The motion of the RoboTrainer is influenced throughout the environment by global actions or, in specific sub-regions, by spatial actions. The control actions preserve the user's intention by implementing an independent admittance dynamic to compute their influence on the RoboTrainer's velocity. This enables the crucial separation of controller states in order to achieve passive and safe interaction with the device during training. The above contributions were evaluated separately and examined in two studies with 22 and 13 young, healthy adults respectively. These studies provide a comprehensive insight into the relationships between the different functionalities and their influence on the users. They confirm the overall approach as well as the assumptions made regarding the design of the individual parts of this work. The individual results of this work culminate in a novel research device for physical human-robot interaction during training with adults. Future research with the RoboTrainer paves the way for smart walkers to assist society in view of the impending demographic change
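    The combination of an admittance-based user channel with virtual force-field control actions can be illustrated with a minimal sketch: a repulsive field defined over a sub-region is passed through its own admittance dynamic and superimposed on the user-commanded velocity. The region, gains and dynamics below are illustrative assumptions, not the RoboTrainer's actual parameters.

```python
# Hedged sketch of a spatial "control action": a virtual repulsive force field over a
# sub-region is fed through an independent admittance dynamic and superimposed on the
# user-commanded velocity. All values are illustrative assumptions.
import numpy as np

class VirtualForceField:
    def __init__(self, center, radius, gain=30.0, mass=8.0, damping=20.0):
        self.center, self.radius, self.gain = np.asarray(center), radius, gain
        self.m, self.d = mass, damping     # independent admittance for the field channel
        self.v_field = np.zeros(2)         # velocity contribution of the field

    def force(self, position):
        # repulsive force pointing away from the region centre, zero outside the region
        offset = np.asarray(position) - self.center
        dist = np.linalg.norm(offset)
        if dist >= self.radius or dist < 1e-9:
            return np.zeros(2)
        return self.gain * (1.0 - dist / self.radius) * (offset / dist)

    def step(self, position, dt):
        f = self.force(position)
        self.v_field += dt * (f - self.d * self.v_field) / self.m
        return self.v_field

field = VirtualForceField(center=[2.0, 0.0], radius=1.0)
v_user = np.array([0.4, 0.0])                      # velocity from the user's admittance channel
position = np.array([0.0, 0.0])
for _ in range(600):                               # 6 s simulation at 10 ms steps
    v_cmd = v_user + field.step(position, dt=0.01) # superpose the two admittance channels
    position = position + v_cmd * 0.01
print(np.round(position, 2))
```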

    Secure indoor navigation and operation of mobile robots

    In future work environments, robots will navigate and work side by side with humans. This raises significant challenges related to the safety of these robots. In this dissertation, three tasks have been realized: 1) implementing a localization and navigation system based on a StarGazer sensor and a Kalman filter; 2) realizing a human-robot interaction system using a Kinect sensor and BPNN and SVM models to recognize gestures; and 3) realizing a new collision avoidance system. The system generates collision-free paths based on the interaction between the human and the robot
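    The localization component can be illustrated with a minimal 2D Kalman filter in which odometry drives the prediction step and an absolute position fix (e.g., from a ceiling-marker sensor such as StarGazer) drives the correction step. The motion model, noise values and measurements below are illustrative assumptions, not the dissertation's implementation.

```python
# Minimal sketch of a 2D position Kalman filter for landmark-based indoor localization.
# Odometry is the prediction input; an absolute position fix is the measurement.
import numpy as np

class KalmanLocalizer:
    def __init__(self, x0, p0=1.0, q=0.01, r=0.05):
        self.x = np.asarray(x0, dtype=float)  # state: [x, y]
        self.P = np.eye(2) * p0               # state covariance
        self.Q = np.eye(2) * q                # process noise (odometry drift)
        self.R = np.eye(2) * r                # measurement noise (marker sensor)

    def predict(self, odom_delta):
        self.x = self.x + np.asarray(odom_delta)   # dead-reckoning update
        self.P = self.P + self.Q

    def update(self, z):
        # measurement model: the sensor observes the position directly (H = I)
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)               # Kalman gain
        self.x = self.x + K @ (np.asarray(z) - self.x)
        self.P = (np.eye(2) - K) @ self.P

loc = KalmanLocalizer(x0=[0.0, 0.0])
loc.predict(odom_delta=[0.1, 0.0])    # robot commanded 10 cm forward
loc.update(z=[0.12, 0.01])            # noisy absolute fix from the marker sensor
print(np.round(loc.x, 3))
```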