463 research outputs found

    Technical Report on Autonomous Mobile Robot Navigation

    Get PDF

    Towards autonomous localization and mapping of AUVs: a survey

    Get PDF
    Purpose: The main purpose of this paper is to investigate two key elements of localization and mapping for Autonomous Underwater Vehicles (AUVs): to overview the various sensors and algorithms used for underwater localization and mapping, and to make suggestions for future research.
    Design/methodology/approach: The authors first review the various sensors and algorithms used on AUVs in terms of basic working principle, characteristics, advantages, and disadvantages. A statistical analysis is then carried out over 35 AUV platforms according to the application circumstances of sensors and algorithms.
    Findings: As real-world applications have different requirements and specifications, it is necessary to select the most appropriate solution by balancing factors such as accuracy, cost, and size. Although highly accurate localization and mapping in an underwater environment is very difficult, increasingly accurate and robust navigation solutions will be achieved as both sensors and algorithms develop.
    Research limitations/implications: This paper provides an overview of state-of-the-art underwater localization and mapping algorithms and systems. No experiments are conducted for verification.
    Practical implications: The paper gives readers a clear guideline for finding underwater localization and mapping algorithms and systems suited to the practical application at hand.
    Social implications: A wide range of audiences will benefit from reading this comprehensive survey of autonomous localization and mapping of AUVs.
    Originality/value: The paper provides useful information and suggestions to research students, engineers, and scientists who work in the field of autonomous underwater vehicles.
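
    The survey's point that no single sensor suite wins on every axis can be made concrete with a simple weighted-criteria score. The following is a minimal sketch, not taken from the paper; the candidate sensors, scores, and weights are all illustrative assumptions.

    ```python
    # Illustrative trade-off scoring for localization sensor suites.
    # All names, scores, and weights below are assumptions, not survey data.

    CANDIDATES = {
        # criterion scores on a 0-1 scale (higher is better)
        "DVL + INS":     {"accuracy": 0.9, "cost": 0.2, "size": 0.4},
        "USBL beacon":   {"accuracy": 0.7, "cost": 0.5, "size": 0.7},
        "Imaging sonar": {"accuracy": 0.6, "cost": 0.7, "size": 0.6},
    }

    WEIGHTS = {"accuracy": 0.5, "cost": 0.3, "size": 0.2}  # application-specific

    def weighted_score(scores: dict) -> float:
        """Aggregate per-criterion scores into one figure of merit."""
        return sum(WEIGHTS[c] * s for c, s in scores.items())

    for name, scores in CANDIDATES.items():
        print(f"{name:14s} -> {weighted_score(scores):.2f}")
    best = max(CANDIDATES, key=lambda name: weighted_score(CANDIDATES[name]))
    print("Best trade-off:", best)
    ```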

    Underwater Exploration and Mapping

    Get PDF
    This paper analyzes the open challenges of exploring and mapping in the underwater realm with the goal of identifying research opportunities that will enable an Autonomous Underwater Vehicle (AUV) to robustly explore different environments. A taxonomy of environments based on their 3D structure is presented, together with an analysis of how that structure influences camera placement. The difference between exploration and coverage is discussed, along with how each dictates a different motion strategy. Loop closure, while critical for the accuracy of the resulting map, proves particularly challenging due to the limited field of view and the sensitivity to viewing direction. Experimental results of enforcing loop closures in underwater caves demonstrate a novel navigation strategy. Dense 3D mapping, both online and offline, as well as other sensor configurations, are discussed following the presented taxonomy. Experimental results from field trials illustrate the above analysis.
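
    The abstract singles out loop closure as critical for map accuracy. A minimal sketch of why, under simplifying assumptions not taken from the paper (a 1D pose chain, unit odometry steps, equal constraint weighting): once a loop-closure constraint ties the last pose back to the first, least-squares optimization spreads the accumulated odometry drift over the whole loop instead of letting it grow unbounded.

    ```python
    # Toy 1D pose graph: odometry edges plus one loop closure, solved by
    # linear least squares. All values are illustrative assumptions.
    import numpy as np

    # Odometry claims each step moves +1.0, but a loop closure says the
    # robot returned to its start: pose 4 should coincide with pose 0.
    n = 5
    edges = [(i, i + 1, 1.0) for i in range(n - 1)]  # odometry constraints
    edges.append((0, n - 1, 0.0))                    # loop closure: back home

    # Build the linear system A x = b, anchoring pose 0 at the origin.
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    for row, (i, j, meas) in enumerate(edges):
        A[row, i], A[row, j], b[row] = -1.0, 1.0, meas
    A[-1, 0] = 1.0  # anchor: x0 = 0

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    # The loop constraint pulls the chain back, spreading drift over the loop.
    print("Optimized poses:", np.round(x, 2))
    ```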

    Model based Kalman Filter Mobile Robot Self-Localization

    Get PDF

    Underwater localization using imaging sonars in 3D environments

    Get PDF
    This work proposes a localization method using a mechanically scanned imaging sonar (MSIS), which stands out for its low cost and weight. The proposed method implements a Particle Filter, a Bayesian estimator, and introduces a measurement model based on sonar simulation theory. To the best of the authors' knowledge, there is no similar approach in the literature, as current sonar simulation methods target synthetic data generation, mostly for object recognition. This stands as the major contribution of the thesis, as it allows the computation of the intensity values provided by imaging sonars while maintaining compatibility with methods already in use, such as range extraction. Simulations show the efficiency of the method as well as its viability for the use of imaging sonars in underwater localization. The new approach makes possible, under certain constraints, the extraction of 3D information from a sensor regarded in the literature as 2D, even in situations where there is no reference in the same horizontal plane as the scanning axis of the sensor transducer. Localization in complex 3D environments is a further advantage provided by the proposed method.
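
    The abstract combines a particle filter with a measurement model that predicts sonar intensity rather than only range. A minimal sketch of that combination follows, with an assumed 1D corridor map, an inverse-square intensity model, and illustrative noise levels; none of these come from the thesis.

    ```python
    # Particle filter update driven by a simulated sonar *intensity* model.
    # Environment, decay law, and noise level are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(42)
    WALL = 10.0      # wall position along a 1D corridor (assumed map)
    SIGMA_I = 0.05   # intensity measurement noise (assumed)

    def predicted_intensity(x):
        """Simulated sonar return: intensity decays with range to the wall."""
        r = np.maximum(WALL - x, 0.1)
        return 1.0 / r**2  # inverse-square spreading loss

    # True robot at x = 7.0; one noisy intensity observation.
    z = predicted_intensity(7.0) + rng.normal(0.0, SIGMA_I)

    # Weight particles by intensity likelihood, then resample.
    particles = rng.uniform(0.0, 10.0, size=1000)
    w = np.exp(-0.5 * ((z - predicted_intensity(particles)) / SIGMA_I) ** 2)
    w /= w.sum()
    particles = rng.choice(particles, size=particles.size, p=w)
    print(f"estimate: {particles.mean():.2f} (true 7.00)")
    ```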

    Enhanced concurrent mapping and localisation using forward-looking sonar

    Get PDF

    A sensor fusion layer to cope with reduced visibility in SLAM

    Get PDF
    Mapping and navigating with mobile robots in scenarios with reduced visibility, e.g. due to smoke, dust, or fog, remains a major challenge. Despite the tremendous advances in Simultaneous Localization and Mapping (SLAM) techniques over the past decade, most current algorithms fail in such environments because they usually rely on optical sensors providing dense range data, e.g. laser range finders, stereo vision, LIDAR, RGB-D cameras, etc., whose measurement process is highly disturbed by particles of smoke, dust, or steam. This article addresses the problem of performing SLAM under reduced-visibility conditions by proposing a sensor fusion layer which takes advantage of the complementary characteristics of a laser range finder (LRF) and an array of sonars. This sensor fusion layer is ultimately used with a state-of-the-art SLAM technique to be resilient in scenarios where visibility cannot be assumed at all times. Special attention is given to mapping with commercial off-the-shelf (COTS) sensors, namely arrays of sonars, which, while usually available on robotic platforms, raise technical issues that were investigated in the course of this work. Two sensor fusion methods, a heuristic method and a fuzzy logic-based method, are presented and discussed, corresponding to different stages of the research conducted. The experimental validation of both methods on two different mobile robot platforms in smoky indoor scenarios showed that they provide a robust solution, using only COTS sensors, for coping with reduced visibility in the SLAM process, significantly decreasing its impact on the resulting maps and localization.
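
    As a concrete illustration of the heuristic fusion idea (a sketch under assumptions, not the article's algorithm): when smoke makes a laser return implausibly short while the co-registered sonar still reports the wall, prefer the sonar; otherwise keep the LRF's finer resolution. The disagreement threshold below is an assumed parameter.

    ```python
    # Heuristic per-bearing fusion of a laser range finder and a sonar.
    # The threshold and readings are illustrative assumptions.
    SMOKE_GAP = 1.0  # metres of LRF-vs-sonar disagreement that flags smoke

    def fuse_range(lrf_range: float, sonar_range: float) -> float:
        """Return one fused range from co-registered LRF and sonar beams."""
        if sonar_range - lrf_range > SMOKE_GAP:
            # Laser likely hit airborne particles well before the surface;
            # the sonar, unaffected by smoke, is the more credible reading.
            return sonar_range
        return lrf_range  # otherwise prefer the LRF's superior resolution

    print(fuse_range(0.4, 3.2))  # smoky: sonar wins -> 3.2
    print(fuse_range(3.1, 3.2))  # clear: laser wins -> 3.1
    ```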

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Full text link
    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles with very thin structures, such as wires, cables, and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to solve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at the IEEE CVPR 2017 Workshop on Embedded Vision.
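
    For the stereo solution, the geometry behind placing a detected edge in 3D is standard stereo triangulation: depth follows from focal length, baseline, and disparity. A minimal sketch with assumed calibration values, not the paper's code:

    ```python
    # Depth of an edge point matched across a rectified stereo pair.
    # Calibration values and pixel columns are illustrative assumptions.
    FOCAL_PX = 700.0    # focal length in pixels (assumed calibration)
    BASELINE_M = 0.12   # stereo baseline in metres (assumed)

    def edge_depth(u_left: float, u_right: float) -> float:
        """Triangulated depth from the horizontal disparity of an edge pixel."""
        disparity = u_left - u_right  # pixels; positive for valid matches
        return FOCAL_PX * BASELINE_M / disparity

    # A thin wire whose edge appears at column 412 (left) and 405 (right):
    print(f"wire depth: {edge_depth(412.0, 405.0):.2f} m")  # -> 12.00 m
    ```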
