
    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
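
The de-facto standard formulation the survey refers to is maximum-a-posteriori estimation over a factor graph, which for Gaussian noise reduces to nonlinear least squares over the robot poses. A minimal 1-D sketch (the measurements and the anchoring convention are illustrative, not taken from the paper):

```python
import numpy as np

# Minimal 1-D pose-graph toy: three poses linked by odometry factors plus one
# loop-closure factor; the MAP estimate reduces to linear least squares here.
# All measurement values are illustrative.

# factors: (i, j, z) meaning x_j - x_i ≈ z
factors = [(0, 1, 1.0), (1, 2, 1.1), (0, 2, 2.0)]  # last one is the loop closure
n = 3

A = np.zeros((len(factors) + 1, n))
b = np.zeros(len(factors) + 1)
for k, (i, j, z) in enumerate(factors):
    A[k, i], A[k, j], b[k] = -1.0, 1.0, z
A[-1, 0] = 1.0  # anchor x_0 = 0 to remove the gauge freedom

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # poses that reconcile odometry drift with the loop closure
```

The loop closure pulls the drifting odometry chain back toward consistency, which is exactly the mechanism that makes SLAM more than open-loop dead reckoning.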

    High-Precision Localization Using Visual Landmarks Fused with Range Data

    Visual landmark matching with a pre-built landmark database is a popular technique for localization. Traditionally, landmar…

    A Novel Approach To Intelligent Navigation Of A Mobile Robot In A Dynamic And Cluttered Indoor Environment

    The need and rationale for improved solutions to indoor robot navigation are increasingly driven by the influx of domestic and industrial mobile robots into the market. This research has developed and implemented a novel navigation technique for a mobile robot operating in a cluttered and dynamic indoor environment. It divides the indoor navigation problem into three distinct but interrelated parts, namely, localization, mapping and path planning. The localization part has been addressed using dead-reckoning (odometry). A least squares numerical approach has been used to calibrate the odometer parameters to minimize the effect of systematic errors on the performance, and an intermittent resetting technique, which employs RFID tags placed at known locations in the indoor environment in conjunction with door-markers, has been developed and implemented to mitigate the errors remaining after the calibration. A mapping technique that employs a laser measurement sensor as the main exteroceptive sensor has been developed and implemented for building a binary occupancy grid map of the environment. A-r-Star pathfinder, a new path planning algorithm that is capable of high performance in both cluttered and sparse environments, has been developed and implemented. Its properties, challenges, and solutions to those challenges have also been highlighted in this research. An incremental version of the A-r-Star has been developed to handle dynamic environments. Simulation experiments highlighting properties and performance of the individual components have been developed and executed using MATLAB. A prototype world has been built using the Webots robotic prototyping and 3-D simulation software. An integrated version of the system comprising the localization, mapping and path planning techniques has been executed in this prototype workspace to produce validation results.
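
Dead-reckoning of the kind used in the localization part integrates incremental wheel measurements into a pose estimate; systematic errors in the wheel parameters are what the least-squares calibration targets. A minimal differential-drive sketch (the geometry and parameter values are illustrative, not from the thesis):

```python
import math

# Hypothetical differential-drive dead reckoning: integrate incremental wheel
# travel into a pose estimate. Wheel-base and step values are illustrative.
def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """Advance the pose from incremental left/right wheel travel (metres)."""
    d_center = 0.5 * (d_left + d_right)          # distance moved by the centre
    d_theta = (d_right - d_left) / wheel_base    # change of heading
    x += d_center * math.cos(theta + 0.5 * d_theta)
    y += d_center * math.sin(theta + 0.5 * d_theta)
    return x, y, theta + d_theta

pose = (0.0, 0.0, 0.0)
for _ in range(4):                               # four identical gentle arcs
    pose = odometry_step(*pose, 0.10, 0.12, wheel_base=0.3)
print(pose)
```

Because every step compounds any bias in `wheel_base` or the per-wheel distances, the estimate drifts without bound, which is why the thesis pairs calibration with intermittent RFID-based resetting.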

    LSH-RANSAC: An Incremental Scheme for Scalable Localization

    This paper addresses the problem of feature-based robot localization in large-size environments. With recent progress in SLAM techniques, it has become crucial for a robot to estimate its position in real-time with respect to a large-size map that can be incrementally built by other mapper robots. Self-localization using large-size maps has been studied in the literature, but most approaches assume that a complete map is given prior to the self-localization task. In this paper, we present a novel scheme for robot localization, as well as map representation, that can successfully work with large-size and incremental maps. This work combines our two previous works on incremental methods, iLSH and iRANSAC, for appearance-based and position-based localization.
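
The position-based half of such a scheme typically relies on RANSAC-style robust estimation: hypothesise a transform from a minimal sample of feature correspondences, then keep the hypothesis with the most inliers. A toy sketch for a 2-D translation (data, threshold, and iteration count are made up; the paper's iRANSAC is incremental and more sophisticated):

```python
import random

# Toy RANSAC: estimate a 2-D translation between map features and observed
# features when some correspondences are outliers. Data values are made up.
random.seed(0)
map_pts = [(float(i), float(2 * i)) for i in range(20)]
true_t = (3.0, -1.0)
obs = [(x + true_t[0], y + true_t[1]) for x, y in map_pts]
obs[5] = (100.0, 100.0)    # corrupt two correspondences
obs[11] = (-50.0, 7.0)

best_t, best_inliers = None, -1
for _ in range(50):
    i = random.randrange(len(map_pts))          # minimal sample: one match
    t = (obs[i][0] - map_pts[i][0], obs[i][1] - map_pts[i][1])
    inliers = sum(
        1 for (mx, my), (ox, oy) in zip(map_pts, obs)
        if abs(ox - mx - t[0]) < 0.5 and abs(oy - my - t[1]) < 0.5
    )
    if inliers > best_inliers:
        best_t, best_inliers = t, inliers
print(best_t, best_inliers)  # recovers (3.0, -1.0) with 18 of 20 inliers
```

A hypothesis sampled from an outlier explains almost nothing, so the consensus step rejects it even though the minimal sample itself gives no warning.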

    Distributed Robotic Vision for Calibration, Localisation, and Mapping

    This dissertation explores distributed algorithms for calibration, localisation, and mapping in the context of a multi-robot network equipped with cameras and onboard processing, comparing against centralised alternatives where all data is transmitted to a single external node on which processing occurs. With the rise of large-scale camera networks, and as low-cost on-board processing becomes increasingly feasible in robotics networks, distributed algorithms are becoming important for robustness and scalability. Standard solutions to multi-camera computer vision require the data from all nodes to be processed at a central node, which represents a significant single point of failure and incurs infeasible communication costs. Distributed solutions solve these issues by spreading the work over the entire network, operating only on local calculations and direct communication with nearby neighbours. This research considers a framework for a distributed robotic vision platform for calibration, localisation, and mapping tasks where three main stages are identified: an initialisation stage where calibration and localisation are performed in a distributed manner, a local tracking stage where visual odometry is performed without inter-robot communication, and a global mapping stage where global alignment and optimisation strategies are applied. In consideration of this framework, this research investigates how algorithms can be developed to produce fundamentally distributed solutions, designed to minimise computational complexity whilst maintaining excellent performance, and designed to operate effectively in the long term. Therefore, three primary objectives are sought, aligning with these three stages.
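
The "local calculations and direct communication with nearby neighbours" pattern is often illustrated with distributed averaging consensus, where every node converges to a network-wide quantity without any central collector. A toy sketch (topology, weights, and values are illustrative, not from the dissertation):

```python
# Toy distributed averaging: each robot repeatedly mixes its estimate with its
# neighbours' values using Metropolis weights, never contacting a central node.
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a chain of 4 robots
x = [0.0, 4.0, 8.0, 12.0]                              # initial local estimates

for _ in range(200):
    new_x = []
    for i, xi in enumerate(x):
        s = xi
        for j in neighbours[i]:
            # Metropolis weight for edge (i, j): 1 / (1 + max(deg_i, deg_j))
            w = 1.0 / (1 + max(len(neighbours[i]), len(neighbours[j])))
            s += w * (x[j] - xi)
        new_x.append(s)
    x = new_x
print(x)  # every robot converges to the global mean, 6.0
```

Metropolis weights make the iteration matrix doubly stochastic, so the fixed point is the network average even though no robot ever sees more than its own neighbourhood; the same local-exchange structure underlies distributed calibration and alignment schemes.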

    Autonomous Vehicle Control

    A practical knowledge base in the emerging field of Robotics was developed and used to create a framework for further experiments. The framework was designed such that modular parts could be replaced, allowing for future development without reinventing the wheel. To prove the framework, a semi-autonomous robot was implemented, including stereo vision sensors, an inertial navigation system, and a simultaneous localization and mapping algorithm.

    Enhancing RGB-D SLAM Using Deep Learning


    Analysing Large-scale Surveillance Video

    Analysing large-scale surveillance video has drawn significant attention because drone technology and high-resolution sensors are rapidly improving. The mobility of drones makes it possible to monitor a broad range of the environment, but it introduces the more difficult problem of identifying the objects of interest. This thesis aims to detect moving objects (mostly vehicles) using the idea of background subtraction. Building a decent background is the key to success during the process. We consider two categories of surveillance videos in this thesis: when the scene is flat and when pronounced parallax exists. After reviewing several global motion estimation approaches, we propose a novel cost function, the log-likelihood of the Student's t-distribution, to estimate the background motion between two frames. The proposed idea enables the estimation process to be efficient and robust with auto-generated parameters. Since the particle filter is useful in various subjects, it is investigated in this thesis. An improvement to particle filters, combining a near-optimal proposal and Rao-Blackwellisation, is discussed to increase efficiency when dealing with non-linear problems. This improvement is used to solve visual simultaneous localisation and mapping (SLAM) problems, and we call it RB2-PF. Its superiority is evident in both simulations of 2D SLAM and real datasets of visual odometry problems. Finally, RB2-PF-based visual odometry is the key component for detecting moving objects in surveillance videos with pronounced parallax. The idea is to consider multiple planes in the scene to improve the background motion estimation. Experiments have shown that false alarms are significantly reduced. With the landmark information, a ground plane can be worked out. A near-constant velocity model can be applied after mapping the detections onto the ground plane, regardless of the position and orientation of the camera. All the detection results are finally processed by a multi-target tracker, the Gaussian mixture probability hypothesis density (GM-PHD) filter, to generate tracks.
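
The baseline such particle-filter improvements start from is the bootstrap filter, which proposes from the transition prior and weights by the measurement likelihood. A minimal 1-D tracking sketch (model and noise parameters are illustrative, not from the thesis):

```python
import math
import random

# Plain bootstrap particle filter on a 1-D random-walk model; the near-optimal
# proposal and Rao-Blackwellisation improve on exactly this scheme.
random.seed(1)
N, q, r = 500, 0.5, 1.0          # particles, process noise, measurement noise
true_x = 0.0
particles = [random.gauss(0.0, 2.0) for _ in range(N)]

for step in range(30):
    true_x += random.gauss(0.0, q)              # simulate the hidden state
    z = true_x + random.gauss(0.0, r)           # noisy measurement
    # propagate with the transition prior (the "bootstrap" proposal)
    particles = [p + random.gauss(0.0, q) for p in particles]
    # weight by the measurement likelihood, then resample (multinomial)
    weights = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in particles]
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N
print(estimate, true_x)          # posterior mean tracks the hidden state
```

Because the proposal ignores the current measurement, many particles land in low-likelihood regions and are wasted; conditioning the proposal on the measurement and marginalising linear substructure analytically are the two remedies the thesis combines.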

    Pre-Trained Driving in Localized Surroundings with Semantic Radar Information and Machine Learning

    Along the signal-processing chain from radar detections to vehicle actuation, this thesis discusses a semantic radar segmentation, a radar SLAM built on top of it, and an autonomous parking function realised from their combination. Segmentation of the (static) environment is achieved with a radar-specific neural network, RadarNet. This segmentation enables the development of the semantic radar graph-SLAM SERALOC. On the basis of the semantic radar SLAM map, an exemplary autonomous parking capability is implemented in a real test vehicle. Along a recorded reference path, the function parks purely on the basis of radar perception, with previously unattained positioning accuracy. In a first step, a dataset of 8.2 · 10^6 point-wise semantically labelled radar point clouds is generated over a distance of 2507.35 m. No comparable datasets of this annotation level and radar specification are publicly available. Supervised training of the semantic segmentation network RadarNet reaches 28.97% mIoU over six classes. In addition, an automated radar labelling framework, SeRaLF, is presented, which supports radar labelling multimodally using reference cameras and LiDAR. For coherent mapping, a radar-signal pre-filter based on an activation map is designed, which suppresses noise and other dynamic multipath reflections. A graph-SLAM front end adapted specifically to radar, with radar-odometry edges between submaps and semantically separate NDT registration, assembles the pre-filtered semantic radar scans into a consistent metric map. Mapping accuracy and data association are thereby improved, and the first semantic radar graph-SLAM for arbitrary static environments is realised.
    Integrated into a real test vehicle, the interplay of the live RadarNet segmentation and the semantic radar graph-SLAM is evaluated on a purely radar-based autonomous parking function. Averaged over 42 autonomous parking manoeuvres (∅ 3.73 km/h) with a mean manoeuvre length of ∅ 172.75 m, a median absolute pose error of 0.235 m and a final pose error of 0.2443 m are achieved, surpassing comparable radar localisation results by ≈ 50%. The map accuracy for changed, re-mapped places over a mapping distance of ∅ 165 m yields ≈ 56% map consistency at a deviation of ∅ 0.163 m. For the autonomous parking itself, an existing trajectory planner and control approach was used.
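
The reported mIoU figure is the mean of the per-class intersection-over-union scores, which can be read directly off a confusion matrix. A small sketch with a made-up 3-class matrix (the thesis evaluates six classes):

```python
import numpy as np

# mIoU from a confusion matrix: per-class IoU = TP / (TP + FP + FN),
# averaged over classes. Rows = true class, columns = predicted class.
# The 3-class matrix below is made up for illustration.
conf = np.array([[50, 5, 2],
                 [4, 30, 6],
                 [1, 3, 20]])

tp = np.diag(conf).astype(float)
fp = conf.sum(axis=0) - tp        # predicted as class c but actually another
fn = conf.sum(axis=1) - tp        # actually class c but predicted otherwise
iou = tp / (tp + fp + fn)
print(iou, iou.mean())
```

Unlike overall accuracy, mIoU weights every class equally, so rare classes with poor IoU pull the mean down sharply; that is worth keeping in mind when reading a six-class score such as 28.97%.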

    Dense Mapping Based on a Compact RGB-D Representation for Autonomous Navigation

    Our aim is centred around building ego-centric topometric maps represented as a graph of keyframe nodes which can be efficiently used by autonomous agents. The keyframe nodes, which combine a spherical image and a depth map (an augmented visual sphere), synthesise information collected in a local area of space by an embedded acquisition system. The representation of the global environment consists of a collection of augmented visual spheres that provide the necessary coverage of an operational area. A "pose" graph that links these spheres together in six degrees of freedom also defines the domain potentially exploitable for navigation tasks in real time. As part of this research, an approach to map-based representation has been proposed by considering the following issues: how to robustly apply visual odometry by making the most of both the photometric and geometric information available from our augmented spherical database; how to determine the quantity and optimal placement of these augmented spheres to cover an environment completely; how to model sensor uncertainties and update the dense information of the augmented spheres; and how to compactly represent the information contained in the augmented sphere to ensure robustness, accuracy and stability along an explored trajectory by making use of saliency maps.
