213 research outputs found

    Autonomous Robots in Dynamic Indoor Environments: Localization and Person-Following

    Get PDF
    Autonomous social robots must address many tasks, such as localization, mapping, navigation, person following, and place recognition. In this thesis we focus on two key components required for the navigation of autonomous robots: person-following behaviour and localization in dynamic human environments. We propose three novel approaches to address these components: two for person following and one for indoor localization. A convolutional neural network-based approach and an AdaBoost-based approach are developed for person following, and we demonstrate the results by showing the tracking accuracy over time for this behaviour. For the localization task, we propose a novel approach that can act as a wrapper for traditional visual odometry-based approaches to improve localization accuracy in dynamic human environments. We evaluate this approach by showing how performance varies with an increasing number of dynamic agents present in the scene. This thesis provides qualitative and quantitative evaluations for each of the proposed approaches and shows that they outperform current approaches.
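
    A minimal sketch of the wrapper idea, assuming it works by suppressing image regions occupied by detected people before handing each frame to an underlying visual odometry (VO) backend. The class and method names (MaskedVO, DummyVO, process) are illustrative assumptions, not the thesis's implementation.

        import numpy as np

        class MaskedVO:
            """Hypothetical wrapper: hide dynamic agents from a VO backend."""
            def __init__(self, vo_backend):
                self.vo = vo_backend  # any estimator exposing process(frame)

            def process(self, frame, person_boxes):
                masked = frame.copy()
                for (x1, y1, x2, y2) in person_boxes:
                    masked[y1:y2, x1:x2] = 0  # zero out dynamic-agent pixels
                return self.vo.process(masked)

        class DummyVO:
            """Stand-in backend; a real system would wrap a feature-based VO."""
            def process(self, frame):
                return np.eye(4)  # identity pose, for demonstration only

        frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
        print(MaskedVO(DummyVO()).process(frame, [(100, 50, 200, 300)]))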

    Real-Time Online Human Tracking with a Stereo Camera for Person-Following Robots

    Get PDF
    Person-following robots have been studied for decades. Recent person-following robots have relied on various sensors (e.g., radar, infrared, laser, and ultrasonic). However, these sensors do not exploit the most reliable source of information, visible colors from visible-light cameras, for high-level perception; therefore, many such systems are not stable when the robot operates in complex environments (e.g., crowded scenes, occlusion, and target disappearance). In this thesis, we present three different approaches to track a human target for person-following robots in challenging situations (e.g., partial and full occlusions, appearance changes, pose changes, illumination changes, or distractors wearing similar clothes) with a stereo depth camera. The newest tracker (SiamMDH, a Siamese convolutional neural network-based tracker with a temporary appearance model) implemented in this work achieves 98.92% accuracy at a location error threshold of 50 pixels and a 92.94% success rate at an IoU threshold of 0.5 on our extensive person-following dataset.
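
    As a sketch of how the two reported metrics are computed, the snippet below evaluates accuracy at a 50-pixel centre-location error threshold and success at an IoU threshold of 0.5; the (x, y, w, h) box format and the toy boxes are assumptions for illustration.

        import numpy as np

        def center_error(a, b):
            """Euclidean distance between centres of two (x, y, w, h) boxes."""
            return np.hypot((a[0] + a[2] / 2) - (b[0] + b[2] / 2),
                            (a[1] + a[3] / 2) - (b[1] + b[3] / 2))

        def iou(a, b):
            """Intersection-over-union of two (x, y, w, h) boxes."""
            x1, y1 = max(a[0], b[0]), max(a[1], b[1])
            x2 = min(a[0] + a[2], b[0] + b[2])
            y2 = min(a[1] + a[3], b[1] + b[3])
            inter = max(0, x2 - x1) * max(0, y2 - y1)
            union = a[2] * a[3] + b[2] * b[3] - inter
            return inter / union if union else 0.0

        pred = [(10, 10, 50, 100), (200, 40, 60, 110)]   # tracker output
        gt   = [(12, 14, 48, 102), (260, 45, 55, 105)]   # ground truth
        acc  = np.mean([center_error(p, g) <= 50 for p, g in zip(pred, gt)])
        succ = np.mean([iou(p, g) >= 0.5 for p, g in zip(pred, gt)])
        print(f"accuracy@50px={acc:.2f}  success@IoU0.5={succ:.2f}")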

    Human robot interaction in a crowded environment

    No full text
    Human-robot interaction (HRI) is the primary means of establishing natural and affective communication between humans and robots. HRI enables robots to act in a way similar to humans in order to assist in activities that are considered laborious, unsafe, or repetitive. Vision-based human-robot interaction is a major component of HRI, in which visual information is used to interpret how human interaction takes place. Common tasks of HRI include finding pre-trained static or dynamic gestures in an image, which involves localising different key parts of the human body, such as the face and hands. This information is subsequently used to extract different gestures. After the initial detection process, the robot is required to comprehend the underlying meaning of these gestures [3]. Thus far, most gesture recognition systems can only detect gestures and identify a person in relatively static environments. This is not realistic for practical applications, as difficulties may arise from people's movements and changing illumination conditions. Another issue to consider is that of identifying the commanding person in a crowded scene, which is important for interpreting navigation commands. To this end, it is necessary to associate the gesture with the correct person, and automatic reasoning is required to extract the most probable location of the person who initiated the gesture. In this thesis, we propose a practical framework for addressing the above issues. It attempts to achieve a coarse-level understanding of a given environment before engaging in active communication. This includes recognizing human-robot interaction, where a person has the intention to communicate with the robot. In this regard, it is necessary to differentiate whether the people present are engaged with each other or with their surrounding environment. The basic task is to detect and reason about the environmental context and different interactions so as to respond accordingly. For example, if individuals are engaged in conversation, the robot should realize it is best not to disturb them; if an individual is receptive to the robot's interaction, it may approach that person. Finally, if the user is moving in the environment, the robot can analyse further to understand whether any help can be offered in assisting this user. The method proposed in this thesis combines multiple visual cues in a Bayesian framework to identify people in a scene and determine their potential intentions. To improve system performance, contextual feedback is used, which allows the Bayesian network to evolve and adjust itself according to the surrounding environment. The results achieved demonstrate the effectiveness of the technique in dealing with human-robot interaction in a relatively crowded environment [7].
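
    As a toy illustration of the cue-fusion idea, the snippet below combines per-person cue likelihoods in a naive Bayes fashion to pick the most probable commanding person. The cues and probabilities are invented for illustration; the thesis's actual Bayesian network with contextual feedback is considerably richer.

        import numpy as np

        # Assumed P(cue observed | person is commanding) for three cues:
        # face toward robot, raised-hand gesture, and proximity to robot.
        cue_likelihood = np.array([
            [0.9, 0.8, 0.6],   # person 0
            [0.3, 0.1, 0.4],   # person 1
            [0.5, 0.2, 0.7],   # person 2
        ])
        prior = np.full(3, 1 / 3)                   # uniform prior over people
        posterior = prior * cue_likelihood.prod(axis=1)
        posterior /= posterior.sum()                # normalise
        print("most probable commander: person", posterior.argmax())
        print("posterior:", posterior.round(3))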

    A Cost-Effective Person-Following System for Assistive Unmanned Vehicles with Deep Learning at the Edge

    Get PDF
    Demographic statistics of the last century highlight a sharp increase in the average age of the world population, with a consequent growth in the number of older people. Service robotics applications have the potential to provide systems and tools that support autonomous, self-sufficient older adults in their homes in everyday life, avoiding the need for monitoring by third parties. In this context, we propose a cost-effective modular solution to detect and follow a person in an indoor, domestic environment. We exploited the latest advancements in deep learning optimization techniques and compared different neural network accelerators to provide a robust and flexible person-following system at the edge. Our cost-effective and power-efficient solution is fully integrable with pre-existing navigation stacks and lays the foundations for the development of fully autonomous and self-contained service robotics applications.
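
    One common deep-learning optimization for the kind of edge accelerators compared above is post-training quantization. The sketch below applies TensorFlow Lite's generic converter to a tiny placeholder model; it illustrates the technique only and is not the paper's actual pipeline or network.

        import tensorflow as tf

        # Placeholder for a person-detection network (architecture assumed).
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(96, 96, 3)),
            tf.keras.layers.Conv2D(8, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(2, activation="softmax"),  # person / none
        ])

        converter = tf.lite.TFLiteConverter.from_keras_model(model)
        converter.optimizations = [tf.lite.Optimize.DEFAULT]  # quantize weights
        tflite_bytes = converter.convert()
        with open("person_detector.tflite", "wb") as f:
            f.write(tflite_bytes)
        print(f"quantized model size: {len(tflite_bytes) / 1024:.1f} KiB")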

    Human Following in Mobile Platforms with Person Re-Identification

    Full text link
    Human following is a crucial feature of human-robot interaction, yet it poses numerous challenges to mobile agents in real-world scenarios. Some major hurdles are that the target person may be in a crowd, obstructed by others, or facing away from the agent. To tackle these challenges, we present a novel person re-identification module composed of three parts: a 360-degree visual registration, a neural-based person re-identification using human faces and torsos, and a motion tracker that records and predicts the target person's future position. Our human-following system also addresses other challenges, including identifying fast-moving targets with low latency, searching for targets that move out of the camera's sight, collision avoidance, and adaptively choosing different following mechanisms based on the distance between the target person and the mobile agent. Extensive experiments show that our proposed person re-identification module significantly enhances the human-following feature compared to other baseline variants.
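
    A minimal sketch of the third component, the motion tracker that records the target's positions and predicts where the person will reappear (useful once the target leaves the camera's sight). A constant-velocity model is assumed here and may differ from the paper's tracker.

        class MotionTracker:
            """Records (time, x, y) observations and extrapolates linearly."""
            def __init__(self):
                self.history = []

            def record(self, t, x, y):
                self.history.append((t, float(x), float(y)))

            def predict(self, t_future):
                (t0, x0, y0), (t1, x1, y1) = self.history[-2:]
                vx = (x1 - x0) / (t1 - t0)   # assumed constant velocity
                vy = (y1 - y0) / (t1 - t0)
                dt = t_future - t1
                return x1 + vx * dt, y1 + vy * dt

        tracker = MotionTracker()
        tracker.record(0.0, 1.0, 2.0)
        tracker.record(0.5, 1.5, 2.4)
        print(tracker.predict(1.5))   # extrapolated position one second ahead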

    Toward Design of a Drip-Stand Patient Follower Robot

    Get PDF
    A person-following robot is an application of service robotics that primarily focuses on human-robot interaction, for example, in security and health care. This paper explores some of the design and development challenges of a patient-follower robot. Our motivation stemmed from the common mobility challenges associated with patients holding on to and pulling a medical drip stand. Unlike other designs for person-following robots, the proposed design must preserve as much patient privacy as possible while meeting the operational challenges of the hospital environment. We placed a single camera close to the ground, which results in a narrower field of view and thereby preserves patient privacy. Through a unique design of artificial markers placed on various hospital clothing, we show how the visual tracking algorithm can determine the spatial location of the patient with respect to the robot. The robot control algorithm is implemented in three parts: (a) patient detection, (b) distance estimation, and (c) trajectory control. For patient detection, the proposed algorithm utilizes two complementary tools for target detection, namely template matching and colour histogram comparison. We applied a pinhole camera model to estimate the distance from the robot to the patient. We also propose a novel movement trajectory planner that maintains the dynamic tipping stability of the robot by adjusting the peak acceleration. The paper further demonstrates the practicality of the proposed design through several experimental case studies.
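
    The pinhole-model distance estimate mentioned above reduces to Z = f * H / h: a marker of known physical height H metres that appears h pixels tall under focal length f (in pixels) lies at depth Z. The focal length and marker size below are assumed values, not those from the paper.

        def pinhole_distance(focal_px: float, marker_height_m: float,
                             observed_height_px: float) -> float:
            """Depth to a marker of known size from its apparent pixel height."""
            return focal_px * marker_height_m / observed_height_px

        f_px = 600.0      # assumed focal length in pixels
        marker_m = 0.20   # assumed 20 cm marker on the hospital clothing
        print(f"{pinhole_distance(f_px, marker_m, 48.0):.2f} m")  # 2.50 m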

    Geological Object Recognition in Extraterrestrial Environments

    Get PDF
    On July 4, 1997, the landing of NASA’s Pathfinder probe and its rover Sojourner marked the beginning of a new era in space exploration; robots with the ability to move have made up the vanguard of human extraterrestrial exploration ever since. With Sojourner’s landing, for the first time, a ground-traversing robot was at a distance too far from Earth to make direct human control practical. This has given rise to the development of autonomous systems to improve the efficiency of these robots, in both their ability to move and their ability to make decisions regarding their environment. Computer vision comprises a large part of these autonomous systems, and in the course of performing these tasks a large number of images are taken for the purpose of navigation. The limited capacity of the current Deep Space Network means that the majority of these images are never seen by human eyes. This work explores the possibility of using these images to target certain features by using a combination of three AdaBoost algorithms and established image-feature approaches to help prioritize interesting subjects from an ever-growing set of imaging data.
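
    In the spirit of the boosting-based prioritization described above, the sketch below trains scikit-learn's AdaBoost on synthetic features standing in for real image descriptors and ranks unseen samples by score; it is a generic illustration, not the thesis's three-algorithm combination.

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import train_test_split

        # Synthetic stand-ins for image features (texture, edges, colour).
        X, y = make_classification(n_samples=500, n_features=16, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = AdaBoostClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)

        # Rank held-out images by "interestingness" for downlink priority.
        scores = clf.predict_proba(X_te)[:, 1]
        print("held-out accuracy:", clf.score(X_te, y_te))
        print("top-3 candidates :", scores.argsort()[::-1][:3])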

    A Hardware Intelligent Processing Accelerator for Domestic Service Robots

    Get PDF
    We present a method for implementing a hardware intelligent-processing accelerator on domestic service robots. Domestic service robots support human life and are therefore required to recognize their environments using intelligent processing, which demands large computational resources; the standard personal computers (PCs) running robot middleware on these robots do not have enough resources for such processing. We propose the ‘connective object for middleware to an accelerator’ (COMTA), a system that integrates hardware intelligent-processing accelerators and robot middleware. In COMTA, field-programmable gate arrays (FPGAs) accelerate intelligent processing through dedicated digital circuit architectures. In addition, the system can configure and access applications on the hardware accelerators via the robot middleware space; consequently, robotics engineers do not require knowledge of FPGAs. We evaluated the proposed system with a human-following application based on image processing, which is commonly deployed on such robots. Experimental results demonstrated that the system can be automatically constructed from a single configuration file on the robot middleware and can execute the application 5.2 times more efficiently than an ordinary PC.
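
    Purely as an illustration of the single-configuration-file idea, the sketch below shows middleware-side code selecting a processing backend (CPU or FPGA) from one config so that application code never touches FPGA details. All names and keys are hypothetical and do not reflect the COMTA interface.

        import json

        CONFIG = json.loads('{"backend": "fpga", "bitstream": "follow.bit"}')

        def cpu_process(frame):
            return sum(frame) / len(frame)   # placeholder image processing

        def fpga_process(frame):
            # A real system would hand the frame to the FPGA via a driver;
            # here the accelerated path is only simulated.
            return sum(frame) / len(frame)

        backend = fpga_process if CONFIG["backend"] == "fpga" else cpu_process
        print(backend([10, 20, 30]))   # application calls one uniform interface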
    • …