
    Localization in highly dynamic environments using dual-timescale NDT-MCL

    Industrial environments are rarely static; their configuration often changes continuously due to the material transfer flow. This is a major challenge for infrastructure-free localization systems. In this paper we address this challenge by introducing a localization approach that uses a dual-timescale approach. The proposed approach, Dual-Timescale Normal Distributions Transform Monte Carlo Localization (DT-NDT-MCL), is a particle-filter-based localization method which simultaneously keeps track of the pose using an a priori known static map and a short-term map. The short-term map is continuously updated and uses Normal Distributions Transform occupancy maps to maintain the current state of the environment. A key novelty of this approach is that it does not have to select an entire timescale map but rather uses the best timescale locally. The approach has real-time performance and is evaluated using three datasets with increasing levels of dynamics. We compare our approach against the previously proposed NDT-MCL and commonly used SLAM algorithms and show that DT-NDT-MCL outperforms competing algorithms with regard to accuracy in all three test cases.
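
    As a rough illustration of the dual-timescale idea, the sketch below scores each scan point against both a static and a short-term NDT map and keeps whichever likelihood is higher, so the better timescale is chosen locally per point. The map representation (a flat list of Gaussian cells) and all function names are assumptions made for this sketch, not the authors' implementation.

```python
# Minimal sketch of the dual-timescale weighting idea: each scan point is
# scored against both the static map and the short-term map, and the better
# (locally more consistent) likelihood is kept when weighting a particle.
import numpy as np

def ndt_point_likelihood(point, cells):
    """Max likelihood of a 2D point under a list of (mean, covariance) NDT cells."""
    best = 1e-9  # small floor so empty regions do not zero out the weight
    for mean, cov in cells:
        diff = point - mean
        inv = np.linalg.inv(cov)
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
        best = max(best, norm * np.exp(-0.5 * diff @ inv @ diff))
    return best

def transform_scan(scan, pose):
    """Transform scan points (N,2) from the sensor frame into the map frame."""
    x, y, theta = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scan @ R.T + np.array([x, y])

def dual_timescale_weight(scan, pose, static_cells, short_term_cells):
    """Particle log-weight: per point, use whichever timescale explains it best."""
    pts = transform_scan(scan, pose)
    log_w = 0.0
    for p in pts:
        l_static = ndt_point_likelihood(p, static_cells)
        l_short = ndt_point_likelihood(p, short_term_cells)
        log_w += np.log(max(l_static, l_short))  # pick the best timescale locally
    return log_w

if __name__ == "__main__":
    static_cells = [(np.array([1.0, 0.0]), np.eye(2) * 0.05)]
    short_term_cells = [(np.array([1.0, 1.0]), np.eye(2) * 0.05)]  # e.g. a moved pallet
    scan = np.array([[1.0, 0.0], [1.0, 1.0]])
    print(dual_timescale_weight(scan, (0.0, 0.0, 0.0), static_cells, short_term_cells))
```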

    Semantic-assisted 3D Normal Distributions Transform for scan registration in environments with limited structure

    Point cloud registration is a core problem of many robotic applications, including simultaneous localization and mapping. The Normal Distributions Transform (NDT) is a method that fits a number of Gaussian distributions to the data points and then uses this transform as an approximation of the real data, registering a relatively small number of distributions as opposed to the full point cloud. This approach contributes to NDT's registration robustness and speed but leaves room for improvement in environments of limited structure. To address this limitation, we propose a method for introducing semantic information extracted from the point clouds into the registration process. The paper presents a large-scale experimental evaluation of the algorithm against NDT on two publicly available benchmark data sets. For the purpose of this test, a measure of smoothness is used for the semantic partitioning of the point clouds. The results indicate that the proposed method improves the accuracy, robustness, and speed of NDT registration, especially in unstructured environments, making NDT suitable for a wider range of applications.
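
    A minimal sketch of the kind of smoothness-based partitioning described above: a per-point smoothness score, computed from neighbouring points along an ordered scan line, splits the cloud into smooth (planar-like) and sharp (edge-like) subsets that could then be registered separately by NDT. The particular smoothness definition, threshold, and function names are assumptions for illustration, not the paper's method.

```python
# Sketch of semantic partitioning by smoothness before NDT registration:
# points on locally flat surfaces get low scores, corner/edge points high ones.
import numpy as np

def smoothness(scan_line, k=5):
    """Local curvature score for ordered points (N,3) along one scan line."""
    n = len(scan_line)
    scores = np.full(n, np.nan)  # border points get no score
    for i in range(k, n - k):
        neighbors = scan_line[i - k:i + k + 1]
        diff = neighbors.sum(axis=0) - (2 * k + 1) * scan_line[i]
        scores[i] = np.linalg.norm(diff) / (np.linalg.norm(scan_line[i]) + 1e-9)
    return scores

def partition_by_smoothness(scan_line, threshold=0.1, k=5):
    """Split a scan line into smooth (planar-like) and sharp (edge-like) sets;
    border points without a score are excluded from both."""
    s = smoothness(scan_line, k)
    smooth = scan_line[np.nan_to_num(s, nan=np.inf) < threshold]
    sharp = scan_line[np.nan_to_num(s, nan=-np.inf) >= threshold]
    return smooth, sharp

if __name__ == "__main__":
    line = np.c_[np.linspace(1, 2, 40), np.zeros(40), np.zeros(40)]
    line[20] += [0.0, 0.3, 0.0]   # inject a corner-like point
    smooth, sharp = partition_by_smoothness(line)
    print(len(smooth), "smooth points,", len(sharp), "sharp points")
```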

    Localization of Mobile Robot Using Multiple Sensors

    This thesis is dedicated to lifelong localization of a mobile robot equipped with multiple sensors. Information about the robot's position and a map of the environment are necessary for autonomous movement. The goal of this thesis is to implement a method based on the Normal Distributions Transform that solves the problem known as Simultaneous Localization and Mapping. An important requirement is the ability to use a CAD drawing of the environment as an initial map. The thesis covers the principle of the method, a description of the implementation, and an evaluation of the experiments, which focused on the differences between the localization and mapping process with and without the CAD drawing.
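
    The sketch below illustrates one plausible way to use a CAD drawing as an initial map, as the thesis requires: wall segments from the drawing are sampled into points, bucketed into grid cells, and a Gaussian is fitted per cell, yielding an NDT-style map before any sensor data arrive. The segment format, cell size, and regularization are assumptions made for this sketch, not the thesis implementation.

```python
# Sketch: turning CAD wall segments into an initial NDT-style map.
import numpy as np
from collections import defaultdict

def sample_segment(p0, p1, step=0.05):
    """Sample points every `step` metres along a wall segment."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    n = max(2, int(np.linalg.norm(p1 - p0) / step))
    t = np.linspace(0.0, 1.0, n)[:, None]
    return p0 + t * (p1 - p0)

def cad_to_ndt_cells(segments, cell_size=1.0):
    """Fit one Gaussian per occupied grid cell from sampled CAD points."""
    buckets = defaultdict(list)
    for p0, p1 in segments:
        for p in sample_segment(p0, p1):
            key = tuple(np.floor(p / cell_size).astype(int))
            buckets[key].append(p)
    cells = {}
    for key, pts in buckets.items():
        pts = np.array(pts)
        mean = pts.mean(axis=0)
        if len(pts) < 3:
            cov = np.eye(2) * 1e-2            # too few samples: isotropic fallback
        else:
            cov = np.cov(pts.T) + np.eye(2) * 1e-4  # regularize thin walls
        cells[key] = (mean, cov)
    return cells

if __name__ == "__main__":
    walls = [((0, 0), (4, 0)), ((4, 0), (4, 3))]   # two walls from a CAD floor plan
    cells = cad_to_ndt_cells(walls)
    print(f"{len(cells)} NDT cells initialised from CAD segments")
```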

    PanopticNDT: Efficient and Robust Panoptic Mapping

    As the application scenarios of mobile robots are getting more complex and challenging, scene understanding becomes increasingly crucial. A mobile robot that is supposed to operate autonomously in indoor environments must have precise knowledge about what objects are present, where they are, what their spatial extent is, and how they can be reached; i.e., information about free space is also crucial. Panoptic mapping is a powerful instrument providing such information. However, building 3D panoptic maps with high spatial resolution is challenging on mobile robots, given their limited computing capabilities. In this paper, we propose PanopticNDT, an efficient and robust panoptic mapping approach based on occupancy normal distributions transform (NDT) mapping. We evaluate our approach on the publicly available Hypersim and ScanNetV2 datasets. The results reveal that our approach can represent panoptic information at a higher level of detail than other state-of-the-art approaches while enabling real-time panoptic mapping on mobile robots. Finally, we demonstrate the real-world applicability of PanopticNDT with qualitative results in a domestic application. Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023.
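
    As a conceptual sketch of attaching panoptic information to NDT occupancy mapping, the cell below maintains a running Gaussian over inserted points together with vote counts for (semantic class, instance) labels and reports the best-supported label. The field names and the simple voting rule are assumptions for illustration and do not reproduce the PanopticNDT implementation.

```python
# Sketch: an NDT-style occupancy cell augmented with panoptic label votes.
from collections import Counter
import numpy as np

class PanopticNDTCell:
    def __init__(self, dim=3):
        self.n = 0
        self.sum = np.zeros(dim)
        self.sum_outer = np.zeros((dim, dim))
        self.votes = Counter()                 # (class_id, instance_id) -> count

    def update(self, point, class_id, instance_id):
        """Fold one labelled point into the cell's Gaussian and label votes."""
        p = np.asarray(point, float)
        self.n += 1
        self.sum += p
        self.sum_outer += np.outer(p, p)
        self.votes[(class_id, instance_id)] += 1

    def gaussian(self):
        """Return (mean, covariance) of the points seen so far."""
        mean = self.sum / self.n
        cov = self.sum_outer / self.n - np.outer(mean, mean)
        return mean, cov + np.eye(len(mean)) * 1e-6   # keep covariance invertible

    def panoptic_label(self):
        """Most supported (class, instance) label for this cell."""
        return self.votes.most_common(1)[0][0]

if __name__ == "__main__":
    cell = PanopticNDTCell()
    for p in np.random.default_rng(0).normal([1.0, 2.0, 0.5], 0.05, size=(50, 3)):
        cell.update(p, class_id=8, instance_id=3)     # e.g. points on one chair
    print(cell.gaussian()[0], cell.panoptic_label())
```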

    Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments

    A fully autonomous robot is defined by its capability to sense, understand, and move within the environment to perform a specific task. These qualities are included within the concept of navigation, but a fundamental one among them is localization, the capacity of the system to know its position with respect to its surroundings. The localization issue can therefore be defined as the search for the robot's coordinates and rotation angles within a known environment. This thesis addresses the particular case of global localization, in which no information about the initial position is available and the robot relies only on its sensors. The aim of this work is to develop several tools that allow the system to localize itself in the two most common geometric map representations: occupancy maps and point clouds. The former divide the space into equally sized cells coded with a binary value distinguishing free from occupied space; point clouds define obstacles and environment features as a sparse set of points in space, commonly measured with a laser sensor. Various algorithms are presented that search for the robot's position using laser measurements only, in contrast with the more usual methods that combine external information with the robot's motion information (odometry). The system is thus capable of finding its own position in indoor environments without external positioning and without the uncertainty that motion sensors typically induce.
The solution is addressed by implementing several stochastic optimization algorithms, or meta-heuristics, specifically the bio-inspired ones commonly known as Evolutionary Algorithms. Inspired by natural phenomena, these algorithms evolve a population of particles towards a solution by optimizing a cost or fitness function that defines the problem. The implemented algorithms are Differential Evolution, Particle Swarm Optimization, and Invasive Weed Optimization, which mimic, respectively, evolution through mutation, the movement of animal swarms or flocks, and the colonizing behavior of invasive plant species. The implementations address the need to parameterize these algorithms for a search space as wide as a complete three-dimensional map, with exploratory behavior and with convergence conditions that terminate the search, since the process is a recursive estimation in which the optimum is unknown. Each candidate particle is evaluated by comparing the laser measurements taken at the real position with those expected at the candidate pose in the known map; the cost function measures this similarity and is therefore the function to optimize. Whereas the common choice in laser-based localization or mapping is the mean squared or absolute error between measurements, this work introduces a different perspective based on statistical distances, or divergences, which describe the similarity between probability distributions. By modeling the laser sensor as a probability distribution over the measured distance, the algorithm can exploit the asymmetries of these divergences to favor or penalize different situations; hence, how the laser scans differ, and not only by how much, can be evaluated.
The results obtained in different maps, both simulated and real, show that the global localization problem is successfully solved by these methods, in both position and orientation. The divergence-based weighted cost functions provide great robustness and accuracy to the localization filters and a strong response to different sources and levels of noise, whether from the sensor measurements, the environment, or obstacles not registered in the map.
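
    The sketch below illustrates the general recipe described in the abstract: a Differential Evolution search over candidate poses whose cost is a statistical divergence between per-beam distributions built around the real and the expected laser ranges. The toy rectangular-room map, the range-dependent standard deviation, and all parameter values are assumptions for illustration, not the thesis implementation.

```python
# Sketch: Differential Evolution global localization with a divergence-based cost.
import numpy as np

ROOM = (0.0, 0.0, 10.0, 6.0)          # toy map: an empty axis-aligned rectangular room
ANGLES = np.linspace(-np.pi, np.pi, 36, endpoint=False)

def expected_range(pose, angle, room=ROOM):
    """Range from pose to the room walls along the given beam direction."""
    x, y, theta = pose
    dx, dy = np.cos(theta + angle), np.sin(theta + angle)
    ts = []
    if dx > 1e-9: ts.append((room[2] - x) / dx)
    if dx < -1e-9: ts.append((room[0] - x) / dx)
    if dy > 1e-9: ts.append((room[3] - y) / dy)
    if dy < -1e-9: ts.append((room[1] - y) / dy)
    return min(t for t in ts if t > 0)

def kl_gauss(mu1, s1, mu2, s2):
    """KL divergence between two 1D Gaussians (asymmetric in its arguments)."""
    return np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

def divergence_cost(pose, real_scan):
    """Sum of per-beam divergences; sigma grows with range (an assumption here)."""
    cost = 0.0
    for angle, z in zip(ANGLES, real_scan):
        z_hat = expected_range(pose, angle)
        cost += kl_gauss(z, 0.02 + 0.01 * z, z_hat, 0.02 + 0.01 * z_hat)
    return cost

def differential_evolution(real_scan, pop_size=40, iters=150, F=0.7, CR=0.9, seed=0):
    """Plain DE/rand/1/bin over (x, y, theta) candidates inside the room."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.2, 0.2, -np.pi]), np.array([9.8, 5.8, np.pi])
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    costs = np.array([divergence_cost(p, real_scan) for p in pop])
    for _ in range(iters):
        for i in range(pop_size):
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            trial = np.where(rng.random(3) < CR, mutant, pop[i])
            c_trial = divergence_cost(trial, real_scan)
            if c_trial < costs[i]:
                pop[i], costs[i] = trial, c_trial
    return pop[np.argmin(costs)]

if __name__ == "__main__":
    true_pose = (3.0, 4.0, 0.4)
    scan = [expected_range(true_pose, a) for a in ANGLES]   # noiseless toy scan
    # Note: the symmetric toy room admits a mirrored solution with identical cost.
    print("estimated pose:", differential_evolution(scan))
```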

    Adaptive Learning Terrain Estimation for Unmanned Aerial Vehicle Applications

    For the past decade, terrain mapping research has focused on ground robots using occupancy grids and tree-like data structures such as Octomap and Quadtrees. Since flight vehicles have different constraints, ground-based terrain mapping research may not be directly applicable to the aerospace industry. To address this issue, Adaptive Learning Terrain Estimation algorithms have been developed with an aim towards aerospace applications. This thesis develops and tests Adaptive Learning Terrain Estimation algorithms using a custom test benchmark on representative aerospace cases: autonomous UAV landing and UAV flight through 3D urban environments. The fundamental objective of this thesis is to investigate the use of Adaptive Learning Terrain Estimation algorithms for aerospace applications and compare their performance to commonly used mapping techniques such as Quadtree and Octomap. To test the algorithms, point clouds were collected and registered in simulated and real environments. Then, the Adaptive Learning, Quadtree, and Octomap algorithms were applied to the data sets, both in real time and offline. Finally, metrics of map size, accuracy, and running time were developed and implemented to quantify and compare the performance of the algorithms. The results show that Quadtree yields the computationally lightest maps, but it is not suitable for real-time implementation due to its lack of recursiveness. Adaptive Learning maps are computationally efficient due to the use of multiresolution grids. Octomap yields the most detailed maps but produces a high computational load. The results of the research show that Adaptive Learning algorithms have significant potential for real-time implementation in aerospace applications. Their low memory load and variable-sized grids make them viable candidates for future research and development.
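
    As a rough, generic illustration of variable-resolution terrain gridding of the kind compared above (it does not reproduce the Adaptive Learning algorithm itself), the sketch below stores a mean elevation per cell and subdivides only where inserted points disagree with it, so flat terrain stays coarse while rough terrain is refined. The split rule and thresholds are assumptions made for this sketch.

```python
# Sketch: an adaptively subdividing quadtree terrain grid.
import numpy as np

class TerrainQuadtree:
    def __init__(self, x, y, size, min_size=0.5, split_tol=0.2):
        self.x, self.y, self.size = x, y, size
        self.min_size, self.split_tol = min_size, split_tol
        self.heights = []          # elevations observed in this cell
        self.children = None

    def insert(self, px, py, pz):
        """Add one terrain point; split the cell if elevations disagree too much."""
        if self.children is not None:
            self._child(px, py).insert(px, py, pz)
            return
        self.heights.append(pz)
        spread = max(self.heights) - min(self.heights)
        if spread > self.split_tol and self.size > self.min_size:
            self._split()

    def _split(self):
        h = self.size / 2
        self.children = [TerrainQuadtree(self.x + dx * h, self.y + dy * h, h,
                                         self.min_size, self.split_tol)
                         for dx in (0, 1) for dy in (0, 1)]
        self.heights = []          # old samples dropped for brevity in this sketch

    def _child(self, px, py):
        h = self.size / 2
        idx = 2 * int(px >= self.x + h) + int(py >= self.y + h)
        return self.children[idx]

    def count_leaves(self):
        if self.children is None:
            return 1
        return sum(c.count_leaves() for c in self.children)

if __name__ == "__main__":
    tree = TerrainQuadtree(0.0, 0.0, 32.0)
    rng = np.random.default_rng(1)
    for px, py in rng.uniform(0, 32, size=(2000, 2)):
        pz = 0.0 if px < 16 else np.sin(px) + np.cos(py)   # flat half, rough half
        tree.insert(px, py, pz)
    print("leaf cells:", tree.count_leaves())
```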