
    Robot Mapping and Navigation in Real-World Environments

    Robots can perform various tasks, such as mapping hazardous sites, taking part in search-and-rescue scenarios, or delivering goods and people. Robots operating in the real world face many challenges on the way to completing their mission. Essential capabilities for such robots are mapping, localization, and navigation. Solving all of these tasks robustly is difficult because the components are usually interconnected: a robot that starts without any knowledge of the environment must simultaneously build a map, localize itself in it, analyze its surroundings, and plan a path to explore the unknown environment efficiently. Besides being interconnected, these tasks also depend heavily on the sensors used by the robot and on the type of environment in which it operates. For example, an RGB camera can be used in an outdoor scene to compute visual odometry or to detect dynamic objects, but it becomes less useful in an environment without enough light for cameras to operate. The software that controls the behavior of the robot must seamlessly process all the data coming from different sensors, which often leads to systems tailored to a particular robot and a particular set of sensors. In this thesis, we challenge this concept by developing and implementing methods for a typical robot navigation pipeline that work seamlessly with different types of sensors, both indoors and outdoors. With the emergence of new range-sensing RGBD and LiDAR sensors, there is an opportunity to build a single system that operates robustly in indoor and outdoor environments alike and thus extends the application areas of mobile robots. The techniques presented in this thesis use a range image representation so that they can be applied to both RGBD and LiDAR sensors without adaptation to individual sensor models, and they provide methods for navigation and scene interpretation in both static and dynamic environments. For a static world, we present a number of approaches that address the core components of a typical robot navigation pipeline. At the core of building a consistent map of the environment with a mobile robot lies point cloud matching. To this end, we present a method for photometric point cloud matching that treats RGBD and LiDAR sensors in a uniform fashion and can accurately register point clouds at the frame rate of the sensor. This method serves as a building block for the rest of the mapping pipeline. In addition to the matching algorithm, we present a method for traversability analysis of the currently observed terrain, used to guide an autonomous robot toward the safe parts of the surrounding environment. One source of danger when navigating hard-to-access sites is that the robot may fail to build a correct map of the environment. This dramatically impacts its ability to navigate to its goal robustly, so it is important for the robot to detect such situations and find its way home without relying on any kind of map. To address this challenge, we present a method for analyzing the quality of the map the robot has built so far and for safely returning the robot to its starting point if the map is found to be inconsistent. Scenes in dynamic environments are vastly different from those in static ones.
    In a dynamic setting, objects can be moving, so static traversability estimates are no longer sufficient. With the approaches developed in this thesis, we aim to identify distinct objects and track them to aid navigation and scene understanding. We target these challenges with a method for clustering a scene captured with a LiDAR scanner and with a similarity measure between clustered objects that can aid tracking performance. All methods presented in this thesis support real-time robot operation, rely on RGBD or LiDAR sensors, and have been tested on real robots in real-world environments and on real-world datasets. All approaches have been published in peer-reviewed conference papers and journal articles, and most of the presented contributions have been released publicly as open source software.
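
    The sensor-agnostic range image representation that this thesis builds on can be illustrated with a spherical projection of a point cloud. The sketch below is a minimal, generic version of that idea, assuming a spinning LiDAR with a fixed vertical field of view (the function name and FOV values are illustrative, not the thesis's implementation); an RGBD cloud can be projected the same way using the camera intrinsics instead.

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024,
                          fov_up_deg=15.0, fov_down_deg=-15.0):
    """Project an (N, 3) point cloud into an (h, w) range image
    via spherical projection. FOV values are assumptions for a
    generic spinning LiDAR, not a specific sensor model."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)           # range per point
    yaw = np.arctan2(y, x)                       # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))   # elevation angle

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    u = ((yaw + np.pi) / (2.0 * np.pi)) * w                    # column
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * h   # row
    u = np.clip(u.astype(np.int32), 0, w - 1)
    v = np.clip(v.astype(np.int32), 0, h - 1)

    img = np.full((h, w), np.inf, dtype=np.float32)
    # write far points first so the nearest return wins per pixel
    order = np.argsort(-r)
    img[v[order], u[order]] = r[order]
    img[np.isinf(img)] = 0.0                     # empty pixels -> 0
    return img
```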

    Unevenness Point Descriptor for Terrain Analysis in Mobile Robot Applications

    In recent years, the use of imaging sensors that produce a three-dimensional representation of the environment has become an efficient way to increase the perception capabilities of autonomous mobile robots. Accurate and dense 3D point clouds can be generated by traditional stereo systems and laser scanners or by the new generation of RGB-D cameras, representing a versatile, reliable, and cost-effective solution that is rapidly gaining interest within the robotics community. For autonomous mobile robots, it is critical to assess the traversability of the surrounding environment, especially when driving across natural terrain. In this paper, a novel approach to detect traversable and non-traversable regions of the environment from a depth image is presented that could enhance mobility and safety through integration with localization, control, and planning methods. The proposed algorithm is based on the analysis of surface normal vectors obtained through Principal Component Analysis, and it leads to the definition of a novel descriptor, termed the Unevenness Point Descriptor. Experimental results, obtained with vehicles operating in indoor and outdoor environments, are presented to validate this approach.
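
    The PCA-based normal estimation at the heart of this approach can be sketched compactly: the normal of a local patch is the eigenvector of the point covariance matrix with the smallest eigenvalue. The unevenness score below is only a toy illustration of how normal dispersion signals rough terrain, not the paper's exact descriptor.

```python
import numpy as np

def pca_normal(neighborhood):
    """Estimate the surface normal of a (k, 3) local neighborhood
    as the eigenvector of its covariance matrix with the smallest
    eigenvalue (the direction of least point spread)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    normal = eigvecs[:, 0]
    return normal / np.linalg.norm(normal)

def unevenness(normals):
    """Toy unevenness score for a patch: dispersion of its point
    normals. Flat terrain has near-parallel normals (score near 0);
    rough or steep terrain scores higher."""
    # orient all normals into the upper hemisphere to remove the
    # sign ambiguity of PCA normals before averaging them
    oriented = normals * np.sign(normals[:, 2:3] + 1e-12)
    return 1.0 - np.linalg.norm(oriented.mean(axis=0))  # in [0, 1]
```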

    Haptic robot-environment interaction for self-supervised learning in ground mobility

    Dissertation submitted for the degree of Master in Electrical and Computer Engineering. This dissertation presents a system combining haptic interaction and self-supervised learning mechanisms to ascertain navigation affordances from depth cues. A simple pan-tilt telescopic arm and a structured light sensor, both fitted to the robot's body frame, provide the required haptic and depth sensory feedback. The system aims at incrementally developing the ability to assess the cost of navigating in natural environments. For this purpose, the robot learns a mapping between the appearance of objects, given the sensory data provided by the sensor, and their bendability, as perceived by the pan-tilt telescopic arm. The object descriptor, which represents the object in memory and is used for comparisons with other objects, is rich enough for robust comparison yet simple enough to allow fast computation. The output of the memory-learning mechanism, combined with the evaluation of haptic interaction points, prioritizes interaction points so as to increase confidence in the interaction and correctly identify obstacles, reducing the risk of the robot getting stuck or damaged. If the system concludes that the object is traversable, the environment change detection system allows the robot to overcome it. A set of field trials shows the robot's ability to progressively learn which elements of the environment are traversable.
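
    The appearance-to-bendability mapping described above can be pictured as a memory that pairs appearance descriptors with haptically measured costs. The sketch below is a speculative, minimal k-nearest-neighbour version under that assumption; the class, its fields, and the descriptor format are all illustrative, not the dissertation's implementation.

```python
import numpy as np

class AffordanceMemory:
    """Illustrative k-NN memory pairing appearance descriptors with
    haptically measured bendability (hypothetical structure)."""
    def __init__(self, k=3):
        self.k = k
        self.descriptors = []  # appearance feature vectors
        self.costs = []        # bendability measured by the arm

    def add(self, descriptor, measured_cost):
        # store one haptic probe as a self-supervised training label
        self.descriptors.append(np.asarray(descriptor, float))
        self.costs.append(float(measured_cost))

    def predict(self, descriptor):
        # predict traversal cost of a new object from appearance alone
        if not self.descriptors:
            return None  # nothing learned yet: must probe haptically
        d = np.linalg.norm(np.asarray(self.descriptors)
                           - np.asarray(descriptor, float), axis=1)
        nearest = np.argsort(d)[: self.k]
        return float(np.mean(np.asarray(self.costs)[nearest]))
```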

    Robots for Exploration, Digital Preservation and Visualization of Archeological Sites

    Monitoring and conservation of archaeological sites are important activities necessary to prevent damage to cultural heritage or to perform its restoration. Standard techniques, like mapping and digitizing, are typically used to document the status of such sites. While these tasks are normally accomplished manually by humans, this is not possible in hard-to-access areas. For example, due to the possibility of structural collapse, underground tunnels like catacombs are considered highly unstable environments. Moreover, they are filled with radioactive radon gas, which limits human presence to only a few minutes. Recent progress in artificial intelligence and robotics has opened new possibilities for mobile robots to be used in locations where humans are not allowed to enter. The ROVINA project aims at developing autonomous mobile robots that make the monitoring of archaeological sites faster, cheaper, and safer. ROVINA will be evaluated in the catacombs of Priscilla (in Rome) and S. Gennaro (in Naples).

    Watch Your Step! Terrain Traversability for Robot Control

    Watch your step! Or perhaps, watch your wheels. Whatever the robot is, if it puts its feet, tracks, or wheels in the wrong place, it might get hurt; and as robots move rapidly from structured and completely known environments toward uncertain and unknown terrain, surface assessment becomes an essential requirement. As a result, future mobile robots cannot neglect evaluating the terrain's structure according to their driving capabilities. To fill this gap, this study focuses on terrain analysis methods that can be used for robot control, with particular reference to autonomous vehicles and mobile robots. Giving an overview of the theory related to this topic, the investigation covers not only hardware, such as visual sensors and laser scanners, but also space descriptions, such as digital elevation models and point descriptors, introducing new aspects and characterizations of terrain assessment. The discussion presents a wide range of examples and methodologies for different tools and sensors, including the description of a recent method of terrain assessment based on normal vector analysis. Indeed, normal vectors have demonstrated great potential for assessing terrain irregularity in both on-road and off-road environments.
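
    Two of the ingredients this survey reviews, digital elevation models and normal vector analysis, combine naturally: the local surface normal of an elevation grid yields a slope angle that can be thresholded against a vehicle's climbing ability. The sketch below is a generic illustration of that combination; the gradient-based normal and the slope threshold are placeholder choices, not any specific vehicle's limits.

```python
import numpy as np

def dem_traversability(dem, cell_size=0.1, max_slope_deg=20.0):
    """Classify cells of a digital elevation model (2-D height grid,
    heights in meters) as traversable based on local surface slope."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)   # height derivatives
    # the (unnormalized) normal of the surface z = f(x, y) is
    # (-dz/dx, -dz/dy, 1); we only need its unit z-component
    nz = 1.0 / np.sqrt(dz_dx**2 + dz_dy**2 + 1.0)
    slope = np.degrees(np.arccos(nz))            # angle from vertical
    return slope <= max_slope_deg                # boolean mask per cell
```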

    Traversability Estimation from RGB Images and Height Map

    Traversability estimation is an important task for autonomous mobile robots, which must be able to assess the traversability of their surroundings in order to navigate safely. In this thesis, a method for merging depth measurements, in the form of height maps, with RGB images is proposed. Our approach builds on state-of-the-art methods for analyzing both modalities, namely convolutional neural networks. We use self-supervised learning of convolutional neural networks on real datasets. The datasets cover various environments such as mines, hallways, staircases, and other common outdoor terrains (grass, road, pavement). Our network provides correct estimates for easier terrain, such as hallways or flat ground, and acceptable results for challenging environments such as staircases or soft obstacles (e.g., high grass).
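
    The RGB-plus-height-map fusion described above is commonly realized as a two-stream convolutional network whose feature maps are concatenated before a per-pixel prediction head. The PyTorch sketch below shows that general pattern under stated assumptions; the layer sizes and class name are illustrative and do not reproduce the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class RgbHeightFusionNet(nn.Module):
    """Minimal two-stream CNN fusing an RGB image with an aligned
    height map for per-pixel traversability (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.rgb_stream = stream(3)       # encodes appearance
        self.height_stream = stream(1)    # encodes geometry
        self.head = nn.Sequential(        # fuse and classify per pixel
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1))          # traversability logit

    def forward(self, rgb, height):
        # rgb: (B, 3, H, W), height: (B, 1, H, W), same resolution
        fused = torch.cat([self.rgb_stream(rgb),
                           self.height_stream(height)], dim=1)
        return self.head(fused)           # (B, 1, H, W) logits
```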

    Assistente de navegação com apontador laser para conduzir cadeiras de rodas robotizadas

    Advisor: Eric Rohmer. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. Assistive robotics solutions help people recover lost mobility and autonomy in their daily lives. This document presents a low-cost navigation assistant designed to let people paralyzed from the neck down drive a robotized wheelchair using a combination of head posture and facial expressions (smile and eyebrows up) to send commands to the chair. The assistant provides two navigation modes: manual and semi-autonomous. In manual navigation, a regular webcam with the OpenFace algorithm detects the user's head orientation and facial expressions (smile, eyebrows up) to compose commands that act directly on the wheelchair's movements (stop, go forward, turn right, turn left). In the semi-autonomous mode, the user controls a pan-tilt laser with his or her head to point at the desired destination on the ground and validates it with the eyebrows-up command, which makes the robotized wheelchair perform a rotation followed by a linear displacement to the chosen target. Although the assistant needs improvement, the results show that this solution may be a promising technology for people paralyzed from the neck down to control a robotized wheelchair.
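
    The semi-autonomous mode's "rotation followed by a linear displacement" reduces to simple planar geometry: turn in place toward the pointed ground target, then drive the straight-line distance. The sketch below shows that computation; the function and argument names are illustrative, not taken from the dissertation.

```python
import math

def rotate_then_translate(robot_pose, target_xy):
    """Compute turn-then-drive commands for a rotate-then-translate
    scheme: robot_pose = (x, y, heading_rad), target_xy = laser-pointed
    ground target in the same frame (hypothetical interface)."""
    x, y, heading = robot_pose
    tx, ty = target_xy
    bearing = math.atan2(ty - y, tx - x)             # direction to target
    turn = math.atan2(math.sin(bearing - heading),
                      math.cos(bearing - heading))   # wrapped to [-pi, pi]
    distance = math.hypot(tx - x, ty - y)
    return turn, distance  # rotate in place by `turn`, then drive `distance`
```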