
    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including analysis of the long-term stability
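The abstract above describes a multi-store (short-term/long-term) updating rule for visual features. The following is a minimal illustrative sketch of how such a rule might look; the thresholds, field names, and promotion/forgetting logic are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a multi-store (short-term / long-term) feature
# memory update, loosely inspired by the multi-store model of human memory.
# Thresholds and data layout are illustrative assumptions.

PROMOTE_AFTER = 3   # observations before a feature enters long-term memory
FORGET_AFTER = 5    # consecutive misses before a long-term feature is dropped

def update_memory(short_term, long_term, observed_ids):
    """short_term: {feature_id: observation count},
    long_term: {feature_id: consecutive miss count}."""
    observed = set(observed_ids)
    # Reinforce observed features; rehearse short-term ones toward promotion.
    for fid in observed:
        if fid in long_term:
            long_term[fid] = 0                      # seen again: reset misses
        else:
            short_term[fid] = short_term.get(fid, 0) + 1
            if short_term[fid] >= PROMOTE_AFTER:    # repeated sightings: promote
                long_term[fid] = 0
                del short_term[fid]
    # Decay long-term features that were expected but not observed.
    for fid in list(long_term):
        if fid not in observed:
            long_term[fid] += 1
            if long_term[fid] >= FORGET_AFTER:      # stale: forget the feature
                del long_term[fid]
    return short_term, long_term
```

Applied per reference view, a rule like this lets stable scene features persist while transient ones (moved furniture, passing people) are eventually forgotten.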

    Control of free-flying space robot manipulator systems

    New control techniques for self-contained, autonomous free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects was developed and carried out using lab models of satellite robots and a flexible manipulator. The second-generation space robot models use air-cushion-vehicle (ACV) technology to simulate in 2-D the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free-Floating Robot, Cooperative Manipulation from a Free-Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.

    A vision system planner for increasing the autonomy of the Extravehicular Activity Helper/Retriever

    The Extravehicular Activity Retriever (EVAR) is a robotic device currently being developed by the Automation and Robotics Division at the NASA Johnson Space Center to support activities in the neighborhood of the Space Shuttle or Space Station Freedom. As the name implies, the Retriever's primary function will be to provide the capability to retrieve tools and equipment or other objects which have become detached from the spacecraft, but it will also be able to rescue a crew member who may have become inadvertently de-tethered. Later goals will include cooperative operations between a crew member and the Retriever, such as fetching a tool that is required for servicing or maintenance operations. This paper documents a preliminary design for a Vision System Planner (VSP) for the EVAR that is capable of achieving visual objectives provided to it by a high-level task planner. Typical commands which the task planner might issue to the VSP relate to object recognition, object location determination, and obstacle detection. Upon receiving a command from the task planner, the VSP plans a sequence of actions to achieve the specified objective using a model-based reasoning approach. This sequence may involve choosing an appropriate sensor, selecting an algorithm to process the data, reorienting the sensor, adjusting the effective resolution of the image using lens zooming capability, and/or requesting the task planner to reposition the EVAR to obtain a different view of the object. An initial version of the Vision System Planner which realizes the above capabilities using simulated images has been implemented and tested. The remaining sections describe the architecture and capabilities of the VSP and its relationship to the high-level task planner. In addition, typical plans that are generated to achieve visual goals for various scenarios are discussed. Specific topics to be addressed include object search strategies, repositioning of the EVAR to improve the quality of information obtained from the sensors, and complementary usage of the sensors and redundant capabilities.

    Autonomous navigation for guide following in crowded indoor environments

    The requirements for assisted living are rapidly changing as the number of elderly patients over the age of 60 continues to increase. This rise places a high level of stress on nurse practitioners, who must care for more patients than they are capable of managing. As this trend is expected to continue, new technology will be required to help care for patients. Mobile robots present an opportunity to help alleviate the stress on nurse practitioners by monitoring and performing remedial tasks for elderly patients. In order to produce mobile robots with the ability to perform these tasks, however, many challenges must be overcome. The hospital environment requires a high level of safety to prevent patient injury. Any facility that uses mobile robots, therefore, must be able to ensure that no harm will come to patients whilst in a care environment. This requires the robot to build a high level of understanding about the environment and the people in close proximity to the robot. Hitherto, most mobile robots have used vision-based sensors or 2D laser range finders. 3D time-of-flight sensors have recently been introduced and provide dense 3D point clouds of the environment at real-time frame rates. This provides mobile robots with previously unavailable dense information in real time. I investigate the use of time-of-flight cameras for mobile robot navigation in crowded environments in this thesis. A unified framework to allow the robot to follow a guide through an indoor environment safely and efficiently is presented. Each component of the framework is analyzed in detail, with real-world scenarios illustrating its practical use. Time-of-flight cameras are relatively new sensors and, therefore, have inherent problems that must be overcome to receive consistent and accurate data. I propose a novel and practical probabilistic framework in this thesis to overcome many of these inherent problems. The framework fuses multiple depth maps with color information, forming a reliable and consistent view of the world. In order for the robot to interact with the environment, contextual information is required. To this end, I propose a region-growing segmentation algorithm to group points based on surface characteristics: surface normal and surface curvature. The segmentation process creates a distinct set of surfaces; however, only a limited amount of contextual information is available to allow for interaction. Therefore, a novel classifier is proposed using spherical harmonics to differentiate people from all other objects. The added ability to identify people allows the robot to find potential candidates to follow. However, for safe navigation, the robot must continuously track all visible objects to obtain positional and velocity information. A multi-object tracking system is investigated to track visible objects reliably using multiple cues: shape and color. The tracking system allows the robot to react to the dynamic nature of people by building an estimate of the motion flow. This flow provides the robot with the necessary information to determine where, and at what speeds, it is safe to drive. In addition, a novel search strategy is proposed to allow the robot to recover a guide who has left the field of view. To achieve this, a search map is constructed with areas of the environment ranked according to how likely they are to reveal the guide's true location. Then, the robot can approach the most likely search area to recover the guide. Finally, all components presented are joined to follow a guide through an indoor environment. The results achieved demonstrate the efficacy of the proposed components.
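The region-growing segmentation described above (grouping points by surface normal agreement, seeding from low-curvature points) can be sketched roughly as follows. The neighbour lookup, thresholds, and data layout are simplifying assumptions for illustration, not the thesis implementation.

```python
# Illustrative region-growing segmentation over a point cloud: points whose
# surface normals agree within an angular threshold join the same region, and
# growth continues only through smooth (low-curvature) points. Thresholds and
# the neighbour structure are hypothetical choices, not the thesis code.
import math
from collections import deque

def region_grow(points, normals, curvatures, neighbors,
                angle_thresh_deg=10.0, curvature_thresh=0.05):
    cos_thresh = math.cos(math.radians(angle_thresh_deg))
    # Seed from the flattest (lowest-curvature) points first.
    order = sorted(range(len(points)), key=lambda i: curvatures[i])
    labels = [-1] * len(points)        # -1 means "not yet assigned"
    region = 0
    for seed in order:
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = region
        while queue:
            i = queue.popleft()
            for j in neighbors[i]:
                if labels[j] != -1:
                    continue
                dot = sum(a * b for a, b in zip(normals[i], normals[j]))
                if abs(dot) >= cos_thresh:          # normals agree: same surface
                    labels[j] = region
                    if curvatures[j] < curvature_thresh:
                        queue.append(j)             # smooth points keep growing
        region += 1
    return labels
```

On a scan of a room, a procedure like this separates floor, walls, and object surfaces into distinct regions, which downstream classifiers (such as the spherical-harmonics person classifier mentioned above) can then label.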

    Collaborative Surgical Robots: Optical Tracking During Endovascular Operations

    Endovascular interventions usually require meticulous handling of surgical instruments and constant monitoring of the operating room workspace. To address these challenges, robotic-assisted technologies and tracking techniques are increasingly being developed. Specifically, the limited workspace and the potential for collision between the robot and surrounding dynamic obstacles are important aspects that need to be considered. This article presents a navigation system developed to assist clinicians with the magnetic actuation of endovascular catheters using multiple surgical robots. We demonstrate the actuation of a magnetic catheter in an experimental arterial testbed with dynamic obstacles. The motions and trajectory planning of two six-degrees-of-freedom (6-DoF) robotic arms are established through passive marker-guided motion planning. We achieve an overall 3D tracking accuracy of 2.3 ± 0.6 mm for experiments involving dynamic obstacles. We conclude that integrating multiple optical trackers with the online planning of two serial-link manipulators is useful for supporting the treatment of endovascular diseases and aiding clinicians during interventions.

    Research and development of robots cooperation and coordination algorithm for space exploration

    The techniques and methods developed in this thesis focus on solving the problems of relative localization among robots and of their movement while maintaining a predetermined formation. To this end, a coordinator module and a target-tracking module are developed as part of the TBRA (Test Bench for Robotics and Autonomy), previously developed by Thales Alenia Space Italia and the University of Genoa. All techniques and methods are implemented in C++. The developed modules are tested using the two robots described in the thesis, the TBRA Robotic Platform and the Pioneer 3-AT.

    Toward an object-based semantic memory for long-term operation of mobile service robots

    Throughout a lifetime of operation, a mobile service robot needs to acquire, store and update its knowledge of a working environment. This includes the ability to identify and track objects in different places, as well as using this information for interaction with humans. This paper introduces a long-term updating mechanism, inspired by the modal model of human memory, to enable a mobile robot to maintain its knowledge of a changing environment. The memory model is integrated with a hybrid map that represents the global topology and local geometry of the environment, as well as the respective 3D location of objects. We aim to enable the robot to use this knowledge to help humans by suggesting the most likely locations of specific objects in its map. An experiment using omni-directional vision demonstrates the ability to track the movements of several objects in a dynamic environment over an extended period of time
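The abstract above describes suggesting the most likely location of an object from long-term observations. A minimal sketch of one way such a suggestion could work is a recency-weighted count of past sightings; the weighting scheme and data format are illustrative assumptions, not the paper's mechanism.

```python
# Hypothetical sketch: rank an object's candidate locations by
# recency-weighted observation frequency, so recent sightings count more
# than old ones. The exponential decay factor is an illustrative choice.
from collections import defaultdict

def suggest_locations(observations, decay=0.9):
    """observations: time-ordered list of (object, location) sightings.
    Returns the highest-scoring location for each object."""
    scores = defaultdict(lambda: defaultdict(float))
    weight = 1.0
    for obj, loc in reversed(observations):   # most recent sighting first
        scores[obj][loc] += weight
        weight *= decay                       # older sightings weigh less
    return {obj: max(locs, key=locs.get) for obj, locs in scores.items()}
```

With a low decay factor the suggestion tracks the latest sighting; with decay near 1 it approaches a plain frequency count, trading responsiveness for stability.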

    Moving Object Detection and Tracking for Video Surveillance: A Review

    This paper presents a review and systematic study of moving object detection and video surveillance, an important and challenging task in many computer vision applications such as human detection, vehicle detection, threat assessment, and security. Video surveillance of dynamic environments, especially involving humans, vehicles, and specific objects of security interest, is one of the current challenging research topics in computer vision. It is a key technology for fighting terrorism and crime, ensuring public safety, and efficiently managing accidents and crime scenes. The paper also discusses real-time implementation of the computing tasks in video surveillance systems. The various methods reviewed are evaluated in order to assess how well they can detect moving objects in outdoor and indoor scenes in real-time situations.