164 research outputs found

    Integrating a low-cost MEMS IMU into a laser-based SLAM for indoor mobile mapping

    Combining visual features and Growing Neural Gas networks for robotic 3D SLAM

    The use of 3D data in mobile robotics provides valuable information about the robot’s environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor, but their limited precision and the lack of texture on some surfaces suggest that other 3D sensors may be more suitable. In this work, we examine the use of two such sensors: an infrared SR4000 camera and a Kinect camera. We combine the 3D data obtained by these cameras with features extracted from their 2D images, applying a Growing Neural Gas (GNG) network to the 3D data. The goal is to obtain a robust egomotion technique; the GNG network is used to reduce the camera error. To calculate the egomotion, we test two 3D registration methods: one based on the iterative closest point (ICP) algorithm and one that employs random sample consensus (RANSAC). Finally, a simultaneous localization and mapping (SLAM) method is applied to the complete sequence to reduce the global error. The error of each sensor and the mapping results of the proposed method are examined. This work has been supported by Grants DPI2009-07144 and DPI2013-40534-R from the Ministerio de Ciencia e Innovación of the Spanish Government, University of Alicante projects GRE09-16 and GRE10-35, and Valencian Government project GV/2011/034.
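
    As an illustration of the ICP-based registration option mentioned above (not the authors' implementation, which additionally uses GNG-reduced clouds and a RANSAC variant), the following is a minimal point-to-point ICP sketch, assuming NumPy and SciPy and two N x 3 point clouds expressed in the same units:

# Minimal point-to-point ICP sketch for estimating egomotion between two
# consecutive 3D point clouds. Illustrative only.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, target, iters=30, tol=1e-6):
    """Align `source` (N x 3) to `target` (M x 3); returns a 4 x 4 transform."""
    T = np.eye(4)
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)            # closest-point correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
        err = dist.mean()
        if abs(prev_err - err) < tol:          # converged
            break
        prev_err = err
    return T                                    # source-to-target egomotion estimate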

    Efficient Continuous-Time SLAM for 3D Lidar-Based Online Mapping

    Modern 3D laser-range scanners have high data rates, making online simultaneous localization and mapping (SLAM) computationally challenging. Recursive state estimation techniques are efficient but commit to a state estimate immediately after a new scan is made, which may lead to misalignments of measurements. We present a 3D SLAM approach that allows alignments to be refined during online mapping. Our method is based on efficient local mapping and a hierarchical optimization back-end. Measurements of a 3D laser scanner are aggregated in local multiresolution maps by means of surfel-based registration. The local maps are used in a multi-level graph for allocentric mapping and localization. In order to incorporate corrections when refining the alignment, the individual 3D scans in a local map are modeled as a sub-graph, and graph optimization is performed to account for drift and misalignments in the local maps. Furthermore, in each sub-graph, a continuous-time representation of the sensor trajectory makes it possible to correct measurements between scan poses. We evaluate our approach in multiple experiments, showing qualitative results, and we quantify the map quality with an entropy-based measure. Comment: In: Proceedings of the International Conference on Robotics and Automation (ICRA) 201
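
    The entropy-based map quality measure is not detailed in the abstract; one common formulation, mean map entropy, averages the differential entropy of a Gaussian fitted to each point's local neighborhood. A sketch under that assumption, with the neighborhood radius and minimum neighbor count chosen arbitrarily:

# Mean-map-entropy style quality measure: lower values indicate crisper
# (better aligned) maps. The 0.3 m radius is an assumed parameter.
import numpy as np
from scipy.spatial import cKDTree

def mean_map_entropy(points, radius=0.3, min_neighbors=10):
    """Average differential entropy of per-point Gaussian neighborhoods."""
    tree = cKDTree(points)
    entropies = []
    for p in points:
        idx = tree.query_ball_point(p, radius)
        if len(idx) < min_neighbors:
            continue                                   # too sparse to fit a Gaussian
        cov = np.cov(points[idx].T)                    # 3 x 3 sample covariance
        # Differential entropy of N(mu, cov): 0.5 * ln((2*pi*e)^3 * det(cov))
        det = np.linalg.det(2.0 * np.pi * np.e * cov)
        if det > 0:
            entropies.append(0.5 * np.log(det))
    return float(np.mean(entropies)) if entropies else float("nan")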

    Microdrone-Based Indoor Mapping with Graph SLAM

    Unmanned aerial vehicles offer a safe and fast approach to the production of three-dimensional spatial data of the surrounding space. In this article, we present a low-cost SLAM-based drone for creating exploration maps of building interiors. The focus is on emergency response mapping in inaccessible or potentially dangerous places. For this purpose, we used a quadcopter microdrone equipped with six laser rangefinders (1D scanners) and an optical sensor for mapping and positioning. The employed SLAM is designed to map indoor spaces with planar structures through graph optimization. It performs loop-closure detection and correction to recognize previously visited places and to correct the drift accumulated over time. The proposed methodology was validated in several indoor environments. We investigated the performance of our drone against a multilayer-LiDAR-carrying macrodrone, a vision-aided navigation helmet, and ground truth obtained with a terrestrial laser scanner. The experimental results indicate that our SLAM system is capable of creating quality exploration maps of small indoor spaces and of handling the loop-closure problem. The accumulated drift without loop closure was on average 1.1% (0.35 m) over a 31-m-long acquisition trajectory. Moreover, the comparison results demonstrated that our flying microdrone provided performance comparable to the multilayer-LiDAR-based macrodrone, given the low deviation between the point clouds built by the two drones. Approximately 85% of the cloud-to-cloud distances were less than 10 cm.
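
    The cloud-to-cloud comparison reported above can be approximated with nearest-neighbor distances between the two point clouds; the sketch below is an illustrative stand-in (not the paper's evaluation pipeline), assuming both clouds are NumPy arrays expressed in a common frame:

# Fraction of cloud-to-cloud distances below a threshold (e.g. 10 cm),
# computed as nearest-neighbor distances from the evaluated cloud to the
# reference cloud.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_stats(evaluated, reference, threshold=0.10):
    """Return (mean distance, fraction of points closer than `threshold`)."""
    dists, _ = cKDTree(reference).query(evaluated)
    return dists.mean(), float(np.mean(dists < threshold))

# Hypothetical usage with two aligned clouds:
# mean_d, frac = cloud_to_cloud_stats(microdrone_cloud, macrodrone_cloud)
# print(f"{100 * frac:.1f}% of distances < 10 cm")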

    Camera Marker Networks for Pose Estimation and Scene Understanding in Construction Automation and Robotics.

    The construction industry faces challenges that include high rates of workplace injuries and fatalities, stagnant productivity, and skill shortages. Automation and Robotics in Construction (ARC) has been proposed in the literature as a potential solution that makes machinery easier to collaborate with, facilitates better decision-making, or enables autonomous behavior. However, there are two primary technical challenges in ARC: 1) unstructured and featureless environments; and 2) differences between the as-designed and the as-built. It is therefore impossible to directly replicate, on construction sites, the conventional automation methods adopted in industries such as manufacturing. In particular, two fundamental problems, pose estimation and scene understanding, must be addressed to realize the full potential of ARC. This dissertation proposes a pose estimation and scene understanding framework that addresses the identified research gaps by exploiting cameras, markers, and planar structures to mitigate the identified technical challenges. A fast plane extraction algorithm is developed for efficient modeling and understanding of built environments. A marker registration algorithm is designed for robust, accurate, cost-efficient, and rapidly reconfigurable pose estimation in unstructured and featureless environments. Camera marker networks are then established for unified and systematic design, estimation, and uncertainty analysis in larger-scale applications. The efficiency of the proposed algorithms has been validated through comprehensive experiments. Specifically, the speed, accuracy, and robustness of the fast plane extraction and the marker registration have been demonstrated to be superior to existing state-of-the-art algorithms. These algorithms have also been implemented in two groups of ARC applications, themselves of significant social and economic value, to demonstrate the proposed framework's effectiveness. The first group is related to in-situ robotic machinery, including an autonomous manipulator for assembling digital architecture designs on construction sites to help improve productivity and quality, and an intelligent guidance and monitoring system for articulated machinery such as excavators to help improve safety. The second group emphasizes human-machine interaction to make ARC more effective, including a mobile Building Information Modeling and way-finding platform with discrete location recognition to increase indoor facility management efficiency, and a 3D scanning and modeling solution for rapid and cost-efficient dimension checking and concise as-built modeling. PhD dissertation, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113481/1/cforrest_1.pd
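
    The fast plane extraction algorithm itself is not described in the abstract; for orientation only, the following is a generic RANSAC plane fit over an N x 3 point cloud, not the dissertation's method, with the 2 cm inlier threshold chosen arbitrarily:

# Generic RANSAC plane extraction from an N x 3 point cloud. Illustrative
# baseline only; the dissertation proposes a faster, specialized algorithm.
import numpy as np

def ransac_plane(points, iters=200, inlier_thresh=0.02, rng=None):
    """Return (unit normal, d, inlier mask) for the best plane n.x + d = 0."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p1
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    return best_model[0], best_model[1], best_inliers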

    Global Localization based on Evolutionary Optimization Algorithms for Indoor and Underground Environments

    A fully autonomous robot is defined by its capability to sense, understand, and move within the environment to perform a specific task. These qualities fall under the concept of navigation, but a basic one on which the others largely depend is localization: the capacity of the system to know its position with respect to its surroundings. The localization problem can therefore be defined as the search for the coordinates and rotation angles of a mobile robot within a known environment. This thesis addresses the particular case of global localization, in which no information about the initial position is available and the robot relies only on its sensors. The aim of this work is to develop several tools that allow the system to localize itself in the two most common geometric map representations: occupancy maps and point clouds. The former divide the space into equally sized cells coded with a binary value that distinguishes free from occupied space; the latter define obstacles and environment features as a sparse set of points in space, commonly measured with a laser sensor. Several algorithms are presented that search for the robot's position using laser measurements only, in contrast with the more usual methods that combine external information with the robot's own motion information (odometry). The system is thus capable of finding its position in indoor environments without external positioning and without the influence of the uncertainty that motion sensors typically induce.

    The solution is addressed by implementing several stochastic optimization algorithms, or meta-heuristics, specifically the bio-inspired ones commonly known as evolutionary algorithms. Inspired by natural phenomena, these algorithms evolve a population of particles towards a solution by optimizing a cost (fitness) function that defines the problem. The implemented algorithms are Differential Evolution, Particle Swarm Optimization, and Invasive Weed Optimization, which mimic, respectively, evolution through mutation, the movement of swarms or flocks of animals, and the colonizing behavior of invasive plant species. The different implementations address the need to parameterize these algorithms for a search space as large as a complete three-dimensional map, requiring strongly exploratory behavior, as well as the convergence conditions that terminate the search; since the process is a recursive estimation of an optimum, the solution is not known in advance. The search proceeds by comparing the laser measurements taken at the real position with those expected for each candidate particle given the known map. The cost function evaluates the similarity between the real and estimated measurements and is therefore the function that defines the problem to be optimized. The common approach in laser-based localization and mapping is to use the mean squared error or the absolute error between measurements as the optimization function. In this work, a different perspective is introduced that exploits statistical distances, or divergences, which describe the similarity between probability distributions. By modeling the laser sensor as a probability distribution over the measured distance, the algorithm can take advantage of the asymmetries of these divergences to favor or penalize different situations. Hence, it becomes possible to evaluate how the laser scans differ, and not only by how much. The results obtained in different maps, both simulated and real, show that the global localization problem is successfully solved by these methods, in both position and orientation. The divergence-based weighted cost functions provide the localization filters with great robustness and accuracy, and a reliable response to different sources and levels of noise, whether from the sensor measurements, the environment, or obstacles that are not registered in the map.

    International Mention in the doctoral degree. Doctoral Programme in Electrical Engineering, Electronics and Automation, Universidad Carlos III de Madrid. Committee: Fabio Bonsignorio (President), María Dolores Blanco Rojas (Secretary), Alberto Brunete Gonzále (Member)
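
    As a toy illustration of the overall idea (not the thesis implementation), the sketch below runs SciPy's Differential Evolution over candidate poses (x, y, theta) in a small synthetic occupancy grid, using a symmetrized-KL-style cost between per-beam Gaussian range models; the grid, beam model, and all numeric parameters are assumptions:

# Toy divergence-based global localization on an occupancy grid with
# Differential Evolution. Everything here (map, beam model, parameters)
# is an illustrative assumption.
import numpy as np
from scipy.optimize import differential_evolution

RES = 0.05      # occupancy-grid resolution [m/cell] (assumed)
SIGMA = 0.05    # per-beam range noise std. dev. [m] (assumed)
ANGLES = np.linspace(-np.pi, np.pi, 36, endpoint=False)   # beam directions

def ray_cast(grid, x, y, theta, max_range=10.0):
    """Expected range of one beam, stepping through the occupancy grid."""
    for r in np.arange(2 * RES, max_range, 2 * RES):
        cx = int((x + r * np.cos(theta)) / RES)
        cy = int((y + r * np.sin(theta)) / RES)
        if not (0 <= cx < grid.shape[1] and 0 <= cy < grid.shape[0]):
            return max_range
        if grid[cy, cx]:                      # occupied cell hit
            return r
    return max_range

def expected_scan(grid, pose):
    x, y, th = pose
    return np.array([ray_cast(grid, x, y, th + a) for a in ANGLES])

def divergence_cost(pose, grid, real_scan):
    """Mean symmetric KL between per-beam Gaussians N(z, SIGMA^2);
    with equal variances this reduces to squared range error / SIGMA^2."""
    est = expected_scan(grid, pose)
    return np.mean((real_scan - est) ** 2) / SIGMA ** 2

if __name__ == "__main__":
    grid = np.zeros((200, 200), dtype=bool)                      # toy 10 m x 10 m map
    grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = True   # outer walls
    grid[100:140, 30:50] = True          # interior block, breaks the room's symmetry
    true_pose = (3.0, 4.0, 0.3)
    rng = np.random.default_rng(0)
    scan = expected_scan(grid, true_pose) + rng.normal(0, SIGMA, len(ANGLES))
    bounds = [(0.5, 9.5), (0.5, 9.5), (-np.pi, np.pi)]           # x, y, theta
    result = differential_evolution(divergence_cost, bounds, args=(grid, scan),
                                    seed=1, maxiter=40, tol=1e-6)
    print("estimated pose:", result.x)

    With equal per-beam variances, the symmetric KL divergence collapses to a scaled squared range error, so this toy cost behaves like a weighted mean squared error; the asymmetric divergences explored in the thesis would instead weight over- and under-estimated ranges differently, for example to discount obstacles that are not in the map.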

    Unconventional Trajectories for Mobile 3D Scanning and Mapping

    State-of-the-art LiDAR-based 3D scanning and mapping systems focus on scenarios where good sensing coverage is ensured, such as drones, wheeled robots, cars, or backpack-mounted systems. However, in some scenarios more unconventional sensor trajectories come naturally, e.g., rolling, descending, or oscillating back and forth, and the literature on these is relatively sparse. As a result, most implementations developed in the past are not able to solve the SLAM problem under such conditions. In this chapter, we propose a robust offline batch SLAM system that is able to handle these more challenging trajectories, which are characterized by shallow angles of incidence and a limited field of view while scanning. The proposed SLAM system is an upgraded version of our previous work; it takes as input the raw points and prior pose estimates, although the latter are subject to large amounts of drift. Our approach is a two-stage algorithm: in the first stage, a coarse alignment is obtained quickly by matching planar polygons; in the second stage, a graph-based SLAM algorithm is used for further refinement. We evaluate the mapping accuracy of the algorithm on our own recorded datasets against high-resolution ground-truth maps obtained with a terrestrial laser scanner (TLS).
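
    The second-stage refinement relies on graph-based SLAM, whose details are not given in the abstract; the following is only a toy 2D pose-graph optimization solved with SciPy's nonlinear least squares, with all poses and edge measurements invented for illustration:

# Minimal 2D pose-graph refinement sketch. Poses are (x, y, theta);
# constraints are relative motions between pose pairs, including one
# loop-closure edge. All values below are made up.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(pa, pb):
    """Express pose pb in the frame of pose pa."""
    dx, dy = pb[0] - pa[0], pb[1] - pa[1]
    c, s = np.cos(pa[2]), np.sin(pa[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(pb[2] - pa[2])])

def residuals(flat_poses, edges):
    poses = flat_poses.reshape(-1, 3)
    res = [poses[0]]                    # prior: anchor the first pose at the origin
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])
        res.append(err)
    return np.concatenate(res)

# Odometry edges around a square, plus a loop closure back to the start.
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (3, 0, np.array([1.0, 0.0, np.pi / 2])),   # loop-closure edge
]
# Drifted initial guess for the four poses.
init = np.array([[0, 0, 0], [1.1, 0.1, 1.5], [2.2, 1.2, 3.0], [1.2, 2.3, -1.8]],
                dtype=float).ravel()
sol = least_squares(residuals, init, args=(edges,))
print(sol.x.reshape(-1, 3))

    The loop-closure edge (from the last pose back to the first) is what pulls the accumulated drift out of the drifted initial guess; the chapter's back-end plays the same role on the coarsely aligned 3D scans.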