
    ArtPlanner: Robust Legged Robot Navigation in the Field

    Due to the highly complex environments encountered during the DARPA Subterranean Challenge, all six funded teams relied on legged robots as part of their robotic team. The legged robots' unique ability to step over obstacles requires special consideration in navigation planning. In this work, we present and examine ArtPlanner, the navigation planner used by team CERBERUS during the Finals. It is based on a sampling-based method that determines valid poses with a reachability abstraction and uses learned foothold scores to restrict the areas considered safe for stepping. The resulting planning graph is assigned learned motion costs by a neural network trained in simulation to minimize traversal time and limit the risk of failure. Our method achieves real-time performance with bounded computation time. We present extensive experimental results gathered during the Finals event of the DARPA Subterranean Challenge, where this method contributed to team CERBERUS winning the competition. It powered the navigation of four ANYmal quadrupeds through 90 minutes of autonomous operation without a single planning or locomotion failure.
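The pipeline the abstract describes, sampling poses, discarding unsafe footholds, and searching the resulting graph under learned motion costs, can be illustrated with a minimal sketch. The `foothold_score` and `motion_cost` functions below are hand-made stand-ins for the learned models, not team CERBERUS's actual networks, and the graph construction is deliberately simplistic:

```python
import heapq
import math

def foothold_score(x, y):
    # Stand-in for the learned foothold classifier:
    # a hand-made "risky strip" around x = 5 is scored low.
    return 0.1 if 4.5 < x < 5.5 and y < 8 else 0.9

def motion_cost(p, q):
    # Stand-in for the learned motion-cost model: Euclidean
    # distance inflated by the risk of the target foothold.
    return math.dist(p, q) / foothold_score(*q)

def plan(samples, start, goal, radius=1.5):
    # Keep only poses the foothold model considers safe, then run
    # Dijkstra over the implicit planning graph (edges connect
    # nodes closer than `radius`).
    nodes = [start, goal] + [s for s in samples if foothold_score(*s) > 0.5]
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue
        for v in nodes:
            if v == u or math.dist(u, v) > radius:
                continue
            nd = d + motion_cost(u, v)
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# With a grid of candidate poses, the returned path detours around
# the low-score strip rather than stepping through it.
samples = [(float(x), float(y)) for x in range(11) for y in range(11)]
path = plan(samples, (0.0, 0.0), (10.0, 0.0))
```

In the real planner the cost model also accounts for traversal time and failure risk; here the single risk-inflated distance term only illustrates how learned scores shape the search.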

    Validation of robotic navigation strategies in unstructured environments: from autonomous to reactive

    The main topic of this master's thesis is the validation of a navigation algorithm designed to perform autonomously in unstructured environments. Computer simulations and experimental tests with a mobile robot made it possible to reach the established objective. The presented approach is effective, consistent, and able to attain safe navigation in both static and dynamic configurations. This work contains a survey of the principal navigation strategies and components. Afterwards, the history of robotics is briefly recapped, emphasizing mobile robotics and locomotion. Subsequently, it presents the development of an algorithm for autonomous navigation of mobile robots through an unknown environment. The algorithm seeks to compute trajectories that lead to a target position without falling into recurrent loops. The code has been entirely written and tested in MATLAB, using randomly generated obstacles of different sizes. The developed algorithm is used as a benchmark to analyze different predictive strategies for the navigation of mobile robots in environments that are not known a priori and are densely populated with obstacles. Then, an innovative navigation algorithm, called NAPVIG, is described and analyzed. The algorithm has been built using ROS and tested in the Gazebo real-time simulator. In order to achieve high performance, optimal parameters have been found by tuning and simulating the algorithm in different environmental configurations. Finally, an experimental campaign in the SPARCS laboratory of the University of Padua enabled the validation of the chosen parameters.
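The thesis's MATLAB algorithm is not reproduced in the abstract, but the core idea, steering greedily toward the target while a memory of visited cells rules out recurrent loops, can be sketched on a grid. The names, the backtracking rule, and the toy world below are all illustrative assumptions:

```python
def navigate(blocked, start, goal, width, height):
    # Greedy goal-seeking with a visited set so the robot never
    # revisits a cell, which rules out recurrent loops; when stuck,
    # it backtracks along its own trail (a depth-first search).
    visited = {start}
    path = [start]
    while path[-1] != goal:
        x, y = path[-1]
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        free = [m for m in moves
                if 0 <= m[0] < width and 0 <= m[1] < height
                and m not in blocked and m not in visited]
        if not free:
            path.pop()           # dead end: backtrack one step
            if not path:
                return None      # no route to the goal exists
            continue
        # prefer the move that most reduces the Manhattan distance
        nxt = min(free, key=lambda m: abs(m[0] - goal[0]) + abs(m[1] - goal[1]))
        visited.add(nxt)
        path.append(nxt)
    return path

# A wall at x = 3 with a single gap at y = 4 forces a detour.
blocked = {(3, y) for y in range(6) if y != 4}
path = navigate(blocked, (0, 0), (6, 0), width=7, height=6)
```

Because the visited set only grows and dead ends are popped off the trail, the loop always terminates, which is the property the thesis's benchmark targets.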

    3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments

    Lifelong navigation of mobile robots is the ability to reliably operate over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability have constrained the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology that provides rich 3D information of the environment, and with increased computational power, we can make use of more semantic environmental information in navigation-related tasks. A navigation system has many subsystems, such as perception, localization, and path planning, that must operate in real time while competing for computational resources. The main thesis proposed in this work is that we can utilize 3D information from the environment to increase navigational robustness without making trade-offs in any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world, 3D-perception-based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments. The discussion of these systems includes methods of 3D point-cloud-based object detection to find the objects of semantic interest for the given navigation tasks, as well as the use of 3D information in the navigation systems for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust long-term autonomous operation.

    Vision-based guidance and control of a hovering vehicle in unknown environments

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2008. Includes bibliographical references (leaves 115-122). This thesis presents a methodology, architecture, hardware implementation, and results of a system capable of controlling and guiding a hovering vehicle in unknown environments, emphasizing cluttered indoor spaces. Six-axis inertial data and a low-resolution onboard camera yield sufficient information for image processing, Kalman filtering, and novel mapping algorithms to generate a high-performance estimate of vehicle motion, as well as an accurate three-dimensional map of the environment. This combination of mapping and localization enables a quadrotor vehicle to autonomously and safely navigate cluttered, unknown environments. Communication limitations are considered, and a hybrid control architecture is presented to demonstrate the feasibility of combining separate proactive offboard and reactive onboard planners simultaneously, including a detailed presentation of a novel reactive obstacle avoidance algorithm and preliminary results integrating the MIT DARPA Urban Challenge planner for high-level control. The RAVEN testbed is successfully employed as a prototyping facility for rapid development of these algorithms using emulated inertial data and offboard processing as a precursor to embedded development. An analysis of computational demand and a comparison of the emulated inertial system to an embedded sensor package demonstrate the feasibility of porting the onboard algorithms to an embedded autopilot. Finally, flight results using only the single camera and emulated inertial data for closed-loop trajectory following, environment mapping, and obstacle avoidance are presented and discussed. By Spencer Greg Ahrens. S.M.
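The fusion of inertial data with camera-derived position fixes rests on the Kalman filtering mentioned above. As a sketch of the idea only (the thesis's actual state vector, models, and tuning are not reproduced here), a constant-velocity filter corrected by scalar position measurements looks like:

```python
import numpy as np

def kalman_step(x, P, z, dt, q=0.01, r=0.25):
    # One predict/update cycle of a constant-velocity Kalman filter.
    # x = [position, velocity]; z = a position measurement, e.g. a
    # vision-derived fix; q and r are illustrative noise levels.
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a target moving at 1 m/s using noiseless position fixes.
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 101):
    x, P = kalman_step(x, P, k * 0.1, 0.1)
```

Because the state includes velocity, the filter has zero steady-state error for a constant-velocity target; the estimate converges to both the true position and the true speed.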

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation using the sensory information perceived; (iv) localization, the strategy to estimate the robot's position within the spatial map; (v) path planning, the strategy to find a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.

    Camera Pose Estimation from Street-view Snapshots and Point Clouds

    This PhD thesis targets two research problems: (1) how to efficiently and robustly estimate the camera pose of a query image against a map that contains street-view snapshots and point clouds; (2) given the estimated camera pose of a query image, how to create meaningful and intuitive applications with the map data. For the first research problem, we systematically investigated indirect, direct, and hybrid camera pose estimation strategies. We implemented state-of-the-art methods and performed comprehensive experiments on two public benchmark datasets, considering outdoor environmental changes from ideal to extremely challenging cases. Our key findings are: (1) the indirect method is usually more accurate than the direct method when there are enough consistent feature correspondences; (2) the direct method is sensitive to initialization, but under extreme outdoor environmental changes, the mutual-information-based direct method is more robust than the feature-based methods; (3) the hybrid method combines the strengths of the direct and indirect methods and outperforms both on challenging datasets. For the second research problem, we considered inspiring and useful applications that exploit the camera pose together with the map data. Firstly, we built a 3D-map-augmented photo gallery application, where images' geo-metadata are extracted with an indirect camera pose estimation method and the photo sharing experience is improved by the augmentation of the 3D map. Secondly, we designed an interactive video playback application, where an indirect method estimates the camera poses of video frames and the video playback is augmented with a 3D map. Thirdly, we proposed a 3D-visual-primitive-based indoor object and outdoor scene recognition method, where the 3D primitives are accumulated from multiview images.
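A primitive shared by the indirect and hybrid strategies is scoring a candidate pose by its reprojection error over 2D-3D correspondences. A minimal pinhole-camera version (the intrinsics, points, and perturbation below are synthetic; this is not the thesis's implementation):

```python
import numpy as np

def reproject(K, R, t, X):
    # Project 3D world points X (N, 3) into the image using the
    # intrinsic matrix K and the camera pose (R, t).
    Xc = (R @ X.T).T + t            # world frame -> camera frame
    uv = (K @ Xc.T).T               # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]   # perspective division

def mean_reprojection_error(K, R, t, X, x_obs):
    # Average pixel distance between predicted and observed points.
    return float(np.linalg.norm(reproject(K, R, t, X) - x_obs, axis=1).mean())

# Synthetic check: the true pose reprojects exactly; a pose offset
# by 10 cm produces a clearly measurable pixel error.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
X = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 8.0], size=(20, 3))
x_obs = reproject(K, np.eye(3), np.zeros(3), X)
err_true = mean_reprojection_error(K, np.eye(3), np.zeros(3), X, x_obs)
err_off = mean_reprojection_error(K, np.eye(3), np.array([0.1, 0.0, 0.0]), X, x_obs)
```

Minimizing this error over (R, t), with robust handling of outlier correspondences, is what a full indirect pipeline adds on top of this primitive.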

    Autonomous parking using 3D perception

    Master's in Mechanical Engineering. This work fits into the context of autonomous driving, and its main goal is the detection of a parallel parking spot and the execution of the manoeuvre by a 1:5-scale non-holonomic vehicle, using the ROS programming environment. In a first stage, possible parking spots are detected by analysing a point cloud provided by a 3D camera (Kinect), specifically by analysing volumes on the side of the car. Once an empty spot is found, the study of possible approach paths begins. These are composite trajectories generated offline. The best path is selected, and the commands needed for the vehicle to perform it are then sent. The outlined objectives were successfully achieved, since the parking manoeuvres were performed correctly under the expected conditions. For future work, it would be interesting to migrate the search algorithm to other types of vehicles and manoeuvres.
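Composite offline trajectories for parallel parking are commonly built from circular arcs. Below is a purely geometric sketch of a two-arc reverse maneuver; the radius, arc angle, and discretization are illustrative choices, not the thesis's values:

```python
import math

def two_arc_parking_path(R=5.0, phi=math.pi / 6, n=20):
    # Compose a parallel-parking path from two mirrored circular
    # arcs of radius R, each sweeping an angle phi. The car starts
    # at the origin heading along +x and reverses into the spot;
    # the total lateral displacement is 2 * R * (1 - cos(phi)).
    first = [(-R * math.sin(a * phi / n), -R * (1 - math.cos(a * phi / n)))
             for a in range(n + 1)]
    # end point of the full maneuver
    fx = -2 * R * math.sin(phi)
    fy = -2 * R * (1 - math.cos(phi))
    second = [(fx + R * math.sin((n - a) * phi / n),
               fy + R * (1 - math.cos((n - a) * phi / n)))
              for a in range(1, n + 1)]
    return first + second

pts = two_arc_parking_path()
```

In a real maneuver the arc angle and radius would be chosen from the geometry of the detected free volume; this sketch only illustrates the path shape that such a planner evaluates.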

    Mapping, Path Following, and Perception with Long Range Passive UHF RFID for Mobile Robots

    Service robots have shown impressive potential in providing assistance and guidance in various environments, such as supermarkets, shopping malls, homes, airports, and libraries. Due to its low cost and contactless way of communicating, radio-frequency identification (RFID) technology provides a solution to the difficulties (e.g. occlusions) that traditional line-of-sight sensors (e.g. cameras and laser range finders) face. In this thesis, we address the use of passive ultra-high-frequency (UHF) RFID as a sensing technology for mobile robots in three fundamental tasks, namely mapping, path following, and tracking. An important task in the field of RFID is mapping, which aims at inferring the positions of RFID tags based on the measurements (i.e. the detections as well as the received signal strength) received by the RFID reader. The robot, which serves as an intelligent mobile carrier, is able to localize itself in a known environment based on existing positioning techniques, such as laser-based Monte Carlo localization. The mapping process requires a probabilistic sensor model, which characterizes the likelihood of receiving a measurement given the relative pose of the antenna and the tag. In this thesis, we address the problem of recovering from mapping failures of static RFID tags and of localizing non-static RFID tags, which do not move frequently, using a particle filter. The usefulness of negative information (e.g. non-detections) is also examined in the context of mapping RFID tags. Moreover, we present a novel three-dimensional (3D) sensor model to improve the mapping accuracy of RFID tags. In particular, using this new sensor model, we are able to localize the 3D position of an RFID tag by mounting two antennas at different heights on the robot. We additionally utilize negative information to improve the mapping accuracy, especially the height estimation in our stereo antenna configuration.
    Thanks to the uniqueness of RFID tags, the pose of a mobile robot can be determined without ambiguity. The model-based localization approach, which works as a dual to the mapping process, estimates the pose of the robot based on the sensor model as well as given positions of RFID tags. The fingerprinting-based approach was shown to be superior to the model-based approach, since it better captures the unpredictable radio-frequency characteristics of the existing infrastructure. Here, we present a novel approach that combines RFID fingerprints and odometry information as the input of the motion control of a mobile robot for the purpose of path following in unknown environments. More precisely, we apply a teaching-and-playback scheme to perform this task. During the teaching stage, the robot is manually steered along a desired path. RFID measurements and the associated motion information are recorded online as reference data. In the second stage (the playback stage), the robot follows this path autonomously by adjusting its pose according to the difference between the current RFID measurements and the previously recorded reference measurements. Notably, our approach needs no prior information about the distribution and positions of the tags, nor does it require a map of the environment. The proposed approach offers a cost-effective alternative for mobile robot navigation when the robot is equipped with an RFID reader for inventory in RFID-tagged environments. The capability of a mobile robot to track dynamic objects is vital for efficiently interacting with its environment. Although a large number of researchers focus on the mapping of RFID tags, most of them assume a static configuration of RFID tags, and too little attention has been paid to dynamic ones. Therefore, we address the problem of tracking dynamic objects for mobile robots using RFID tags.
    In contrast to mapping of RFID tags, which aims at achieving a minimum mapping error, tracking not only needs robust tracking performance but also requires a fast reaction to the movement of the objects. To achieve this, we combine a two-stage dynamic motion model with a dual particle filter to capture the dynamic motion of the object and to quickly recover from tracking failures. The state estimate from the particle filter is used in combination with VFH+ (Vector Field Histogram), which serves as a local path planner for obstacle avoidance, to guide the robot towards the target. This is then integrated into a framework which allows the robot to search for both static and dynamic tags, follow them, and maintain the distance to them.
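The particle-filter mapping of tag positions, including the use of negative information (non-detections), can be illustrated with a toy 2D version. The linear fall-off sensor model, the antenna poses, and the thresholded "ideal" detections below are all stand-ins, not the thesis's probabilistic sensor model:

```python
import math
import random

def detection_prob(antenna, tag, r_max=3.0):
    # Stand-in sensor model: detection probability falls off
    # linearly with antenna-tag distance and is zero beyond r_max.
    return max(0.0, 1.0 - math.dist(antenna, tag) / r_max)

def update(particles, antenna, detected):
    # Weight each candidate tag position by the likelihood of the
    # (non-)detection; non-detections (negative information) also
    # reshape the belief. Then resample with replacement.
    weights = [detection_prob(antenna, p) if detected
               else 1.0 - detection_prob(antenna, p)
               for p in particles]
    total = sum(weights) or 1.0
    return random.choices(particles,
                          weights=[w / total for w in weights],
                          k=len(particles))

random.seed(0)
tag = (2.0, 1.0)
particles = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(2000)]
for pose in [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (2, 3), (0, 2), (4, 2)]:
    # idealized measurement: detected when detection is more likely than not
    particles = update(particles, pose, detection_prob(pose, tag) > 0.5)
est = (sum(x for x, _ in particles) / len(particles),
       sum(y for _, y in particles) / len(particles))
```

The non-detections prune belief near antenna poses that saw nothing, which is why the posterior mean lands near the true tag; the thesis's 3D variant plays the same game with two antennas mounted at different heights.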

    Advances towards behaviour-based indoor robotic exploration

    215 p. The main contributions of this research work lie in object recognition by computer vision, on one side, and in robot localisation and mapping, on the other. The first contribution area of the research addresses object recognition in mobile robots. In this area, door handle recognition is of great importance, as it helps the robot to identify doors in places where the camera cannot view the whole door. In this research, a new two-step algorithm based on feature extraction is presented, aimed at pruning superfluous keypoints to be compared while increasing efficiency through improved accuracy and reduced computational time. Unlike segmentation-based paradigms, the feature-extraction-based two-step method can easily be generalized to other types of handles, or even to other types of objects such as road signs. Experiments have shown very good accuracy when tested in real environments with different kinds of door handles. With respect to the second contribution, a new technique is presented to construct a topological map during the exploration phase a robot performs in an unseen office-like environment. First, a preliminary approach is proposed that merges Markovian localisation into a distributed system, which requires low storage and computational resources and is adequate for dynamic environments. In the same area, a second contribution to terrain-inspection-level behaviour-based navigation concerned the development of an automatic mapping method for acquiring the procedural topological map. The new approach is based on a typicality test called INCA to perform the so-called loop-closing action. The method was integrated in a behaviour-based control architecture and tested in both simulated and real robot/environment systems. The developed system proved to be useful for localisation purposes as well.
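Markovian localisation of the kind mentioned above maintains a discrete belief that is corrected by observations and shifted by motion. A textbook-style 1D sketch (the cyclic corridor, feature labels, and noise values below are illustrative, not the thesis's distributed formulation):

```python
def markov_localize(world, steps):
    # Discrete Markov localization over a 1D cyclic corridor.
    # world[i] is the feature at cell i (e.g. 'door' or 'wall');
    # each step is (observation, move) with a simple noise model.
    n = len(world)
    belief = [1.0 / n] * n          # uniform prior
    hit, miss = 0.8, 0.2            # observation model
    for obs, move in steps:
        # correction: weight cells by the observation likelihood
        belief = [b * (hit if world[i] == obs else miss)
                  for i, b in enumerate(belief)]
        s = sum(belief)
        belief = [b / s for b in belief]
        # prediction: shift the belief by the commanded motion
        belief = [belief[(i - move) % n] for i in range(n)]
    return belief

# A robot starting at cell 0 sees a door, then walls while moving
# right; the belief should concentrate on cell 3.
world = ['door', 'wall', 'wall', 'door', 'wall']
belief = markov_localize(world, [('door', 1), ('wall', 1), ('wall', 1)])
```

A loop closure, such as the INCA typicality test signalling a previously visited place, acts like an extra high-confidence correction step on the same belief.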