    Autonomous Robot Navigation with Rich Information Mapping in Nuclear Storage Environments

    This paper presents our approach to developing a method for an unmanned ground vehicle (UGV) to perform inspection tasks in nuclear environments using rich information maps. To reduce inspectors' exposure to elevated radiation levels, an autonomous navigation framework for the UGV has been developed to perform routine inspections such as counting containers, recording their ID tags, and performing gamma measurements on some of them. To achieve autonomy, a rich information map is generated that includes not only the 2D global cost map of obstacle locations used for path planning, but also the location and orientation information for the objects of interest from the inspector's perspective. The UGV's autonomy framework uses this information to prioritize the locations it navigates to in order to perform the inspections. In this paper, we present our method of generating this rich information map, originally developed to meet the requirements of the International Atomic Energy Agency (IAEA) Robotics Challenge. We demonstrate the performance of our method in a simulated testbed environment containing uranium hexafluoride (UF6) storage container mock-ups.
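
    To make the idea of a rich information map concrete, below is a minimal Python sketch of how such a structure might be organised: a 2D cost map for path planning together with per-object pose and inspection metadata that an autonomy framework can use to prioritise where to navigate next. The class and field names, and the simple prioritisation rule, are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch of a "rich information map": a 2D cost map for path
    # planning plus per-object pose and inspection metadata. All names here
    # are hypothetical, not taken from the paper.
    from dataclasses import dataclass, field
    from typing import List, Tuple
    import numpy as np

    @dataclass
    class InspectionTarget:
        object_id: str                  # e.g. container ID tag
        position: Tuple[float, float]   # (x, y) in the global map frame
        yaw: float                      # orientation to face the object, rad
        needs_gamma_scan: bool = False  # whether a gamma measurement is due

    @dataclass
    class RichInformationMap:
        cost_map: np.ndarray            # 2D global cost map (obstacles for planning)
        resolution: float               # metres per cell
        targets: List[InspectionTarget] = field(default_factory=list)

        def next_target(self, robot_xy: Tuple[float, float]) -> InspectionTarget:
            """Prioritise targets by a simple rule: pending gamma scans first,
            then nearest Euclidean distance (a stand-in for the paper's policy)."""
            return min(
                self.targets,
                key=lambda t: (not t.needs_gamma_scan,
                               np.hypot(t.position[0] - robot_xy[0],
                                        t.position[1] - robot_xy[1])),
            )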

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? And is SLAM solved?
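
    For reference, the de facto standard formulation referred to above is maximum a posteriori (MAP) estimation over a factor graph. A compact statement of it in LaTeX, using generic symbols rather than the survey's exact notation, is:

        \mathcal{X}^{*}
          = \operatorname*{arg\,max}_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
          = \operatorname*{arg\,max}_{\mathcal{X}} \; \prod_{k} p(z_{k} \mid \mathcal{X}_{k})

    which, under the common assumption of additive Gaussian measurement noise with information matrices \Omega_{k} and measurement models h_{k}, reduces to the nonlinear least-squares problem

        \mathcal{X}^{*}
          = \operatorname*{arg\,min}_{\mathcal{X}} \; \sum_{k} \bigl\lVert h_{k}(\mathcal{X}_{k}) - z_{k} \bigr\rVert_{\Omega_{k}}^{2}

    typically solved with iterative methods such as Gauss-Newton or Levenberg-Marquardt.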

    Mobile Robot Navigation in Unknown Environments Using Range Measurements

    The ability of a robot to navigate itself in the environment is a crucial step towards its autonomy. Navigation, as a subtask of the development of autonomous robots, is the subject of this thesis, which focuses on a method for simultaneous localization and mapping (SLAM) of mobile robots in six degrees of freedom (6DOF). As part of this research, a platform for 3D range data acquisition based on a continuously inclined laser rangefinder was developed. This platform is presented as a self-contained module, with an evaluation of its measurements and of the robotic chassis on which it can be fitted. The localization and mapping task is equivalent to the registration of multiple 3D images into a common frame of reference. For this purpose, a method based on the Iterative Closest Point (ICP) algorithm, accelerated with kd-trees, was developed. First, the originally implemented SLAM method is presented, focusing on its runtime performance and on the registration quality issues introduced by the implemented algorithms. To accelerate and improve the quality of the time-demanding 6DOF image registration, an extended method was developed. The major extension is a factorized registration that extracts 2D representations of vertical objects, called leveled maps, from the 3D point sets, ensuring that these representations are invariant to three of the degrees of freedom. The extracted representations are registered in 3DOF using the ICP algorithm, and the resulting transformation pre-aligns the 3D data for the subsequent robust 6DOF ICP-based registration. The extended method is presented together with all important modifications to the original method. The developed registration method was evaluated on real 3D data acquired in different indoor environments, examining the benefits of the factorization and the other extensions as well as the performance of the original ICP-based method. The factorization gives promising results compared to a single-phase 6DOF registration in vertically structured environments. The disadvantages of the method are also discussed and possible solutions proposed. Finally, future prospects of the research are presented.
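
    The factorized registration described above can be sketched in Python as follows: extract a 2D leveled map from each 3D scan, register the leveled maps in 3DOF with ICP, use that result to pre-align the full clouds, and then refine in 6DOF. The helpers below are a simplified stand-in (point-to-point ICP with kd-tree correspondences), not the thesis implementation, and the height band used to build the leveled map is an arbitrary assumption.

    # Sketch of the factorized registration idea (not the thesis code).
    import numpy as np
    from scipy.spatial import cKDTree

    def leveled_map(points, z_min=0.3, z_max=1.5):
        """Project points from a vertical band onto the ground plane (x, y)."""
        band = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
        return band[:, :2]

    def best_rigid_transform(src, dst):
        """Least-squares rotation + translation (Kabsch), works in 2D or 3D."""
        src_c, dst_c = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
        D = np.eye(src.shape[1])
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ D @ U.T
        return R, dst_c - R @ src_c

    def icp(src, dst, iters=30):
        """Minimal point-to-point ICP; returns (R, t) aligning src to dst."""
        tree = cKDTree(dst)
        R_tot, t_tot = np.eye(src.shape[1]), np.zeros(src.shape[1])
        cur = src.copy()
        for _ in range(iters):
            _, idx = tree.query(cur)          # nearest-neighbour correspondences
            R, t = best_rigid_transform(cur, dst[idx])
            cur = cur @ R.T + t
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
        return R_tot, t_tot

    def factorized_register(scan_a, scan_b):
        """Pre-align in 3DOF on the leveled maps, then refine in 6DOF."""
        R2, t2 = icp(leveled_map(scan_a), leveled_map(scan_b))
        pre = np.eye(3)
        pre[:2, :2] = R2                      # yaw rotation about the z axis
        pre_t = np.array([t2[0], t2[1], 0.0]) # planar translation only
        pre_aligned = scan_a @ pre.T + pre_t
        R3, t3 = icp(pre_aligned, scan_b)     # full 6DOF refinement
        return R3 @ pre, R3 @ pre_t + t3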

    J-MOD²: Joint Monocular Obstacle Detection and Depth Estimation

    In this work, we propose an end-to-end deep architecture that jointly learns to detect obstacles and estimate their depth for MAV flight applications. Most existing approaches rely either on Visual SLAM systems or on depth estimation models to build 3D maps and detect obstacles. However, for the task of avoiding obstacles this level of complexity is not required. Recent works have proposed multi-task architectures that perform both scene understanding and depth estimation. We follow their track and propose a specific architecture to jointly estimate depth and obstacles, without the need to compute a global map, while maintaining compatibility with a global SLAM system if needed. The network architecture is devised to exploit the joint information of the obstacle detection task, which produces more reliable bounding boxes, and the depth estimation task, increasing the robustness of both to scenario changes. We call this architecture J-MOD². We test the effectiveness of our approach in experiments on sequences with different appearance and focal lengths and compare it to state-of-the-art multi-task methods that jointly perform semantic segmentation and depth estimation. In addition, we show the integration in a full system through a set of simulated navigation experiments in which a MAV explores an unknown scenario and plans safe trajectories using our detection model.
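
    As a rough illustration of the joint detection-and-depth idea (not the J-MOD² architecture itself, which the paper specifies in detail), a shared encoder feeding a dense depth head and a bounding-box detection head might look like the PyTorch sketch below. Layer sizes, the anchor-based detection output, and the loss weighting are placeholder assumptions.

    # Illustrative multi-task model: one shared encoder with a depth head and
    # an obstacle-detection head. For exposition only; not the paper's network.
    import torch
    import torch.nn as nn

    class JointObstacleDepthNet(nn.Module):
        def __init__(self, num_anchors: int = 5):
            super().__init__()
            self.encoder = nn.Sequential(                       # shared features
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.depth_head = nn.Sequential(                    # dense depth map
                nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 1),
            )
            # per-cell detection output: 4 box coordinates + objectness per anchor
            self.detect_head = nn.Conv2d(128, num_anchors * 5, 1)

        def forward(self, image: torch.Tensor):
            feats = self.encoder(image)
            return self.depth_head(feats), self.detect_head(feats)

    # A joint loss lets the two tasks regularise each other (weights are guesses):
    #   loss = l1(depth_pred, depth_gt) + 0.5 * detection_loss(det_pred, boxes_gt)
    model = JointObstacleDepthNet()
    depth, detections = model(torch.randn(1, 3, 256, 320))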