
    Visual SLAM for flying vehicles

    The ability to learn a map of the environment is important for numerous types of robotic vehicles. In this paper, we address the problem of learning a visual map of the ground using flying vehicles. We assume that the vehicles are equipped with one or two low-cost down-looking cameras in combination with an attitude sensor. Our approach is able to construct a visual map that can later be used for navigation. Key advantages of our approach are that it is comparatively easy to implement, can robustly deal with noisy camera images, and can operate either with a monocular camera or a stereo camera system. Our technique uses visual features and estimates the correspondences between features using a variant of the progressive sample consensus (PROSAC) algorithm. This allows our approach to extract spatial constraints between camera poses that can then be used to address the simultaneous localization and mapping (SLAM) problem by applying graph methods. Furthermore, we address the problem of efficiently identifying loop closures. We performed several experiments with flying vehicles that demonstrate that our method is able to construct maps of large outdoor and indoor environments. © 2008 IEEE
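    The abstract does not detail the correspondence step; as a rough illustration of the PROSAC idea it builds on (sampling first from the best-ranked matches and progressively enlarging the pool), here is a minimal Python sketch. The 2D rigid-transform model, the ranking by match quality, and the thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Closed-form least-squares 2D rigid transform (Kabsch) mapping src -> dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def prosac_like(src_pts, dst_pts, quality, n_iters=500, inlier_thresh=3.0):
    """PROSAC-style hypothesize-and-verify: draw minimal samples from a pool
    that starts with the best-ranked matches and grows towards the full set."""
    order = np.argsort(-np.asarray(quality))          # best matches first
    src, dst = np.asarray(src_pts)[order], np.asarray(dst_pts)[order]
    n = len(src)
    best_model, best_inliers = None, np.zeros(n, dtype=bool)
    rng = np.random.default_rng(0)
    for i in range(n_iters):
        pool = min(n, 2 + int(i * n / n_iters))       # progressively enlarged pool
        idx = rng.choice(pool, size=2, replace=False)
        R, t = estimate_rigid_2d(src[idx], dst[idx])
        residuals = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (R, t), inliers
    return best_model, best_inliers
```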

    Collapsible Cubes: Removing Overhangs from 3D Point Clouds to Build Local Navigable Elevation Maps

    Elevation maps offer a compact 2.5-dimensional model of the terrain surface for navigation in field mobile robotics. However, building these maps from raw 3D point clouds containing overhangs, such as tree canopy or tunnels, can produce useless results. This paper proposes a simple processing of a ground-based point cloud that identifies and removes overhang points that do not constitute an obstacle for navigation, while keeping vertical structures such as walls or tree trunks. The procedure uses efficient data structures to collapse unsupported 3D cubes down to the ground. This method has been successfully applied to 3D laser scans taken from a mobile robot in outdoor environments in order to build local elevation maps for navigation. Computation times show an improvement with respect to a previous point-based solution to this problem. (Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech)
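    As a rough numpy illustration of the collapsing idea (voxelize the cloud into cubes, discard cubes that sit above a large unsupported gap, and keep the top of the remaining column as the cell height), consider the sketch below. The cube size, the gap rule, and the assumption that every column contains ground returns are simplifications of my own, not the data structures used in the paper.

```python
import numpy as np

def elevation_without_overhangs(points, cell=0.25, max_gap=1):
    """Build a 2.5D elevation grid from a 3D point cloud while ignoring
    overhanging cubes (e.g. canopy) separated from the ground by a gap.

    points : (N, 3) array of x, y, z in a ground-aligned frame
    cell   : cube edge length in metres (assumed value)
    max_gap: number of empty cubes tolerated inside a supported column
    """
    origin = points.min(axis=0)
    ijk = np.floor((points - origin) / cell).astype(int)
    nx, ny, nz = ijk.max(axis=0) + 1
    occupied = np.zeros((nx, ny, nz), dtype=bool)
    occupied[ijk[:, 0], ijk[:, 1], ijk[:, 2]] = True

    elevation = np.full((nx, ny), np.nan)
    for i in range(nx):
        for j in range(ny):
            column = occupied[i, j]
            if not column.any():
                continue
            top, gap = None, 0
            for k in range(nz):          # walk up from the ground
                if column[k]:
                    top, gap = k, 0
                elif top is not None:
                    gap += 1
                    if gap > max_gap:
                        break            # cubes above are treated as overhang
            elevation[i, j] = origin[2] + (top + 1) * cell
    return elevation
```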

    3D mapping and path planning from range data

    This thesis reports research on mapping, terrain classification and path planning. These are classical problems in robotics, typically studied independently; here we link them by framing them within a common sensing modality, three-dimensional laser range scanning. The ultimate goal is to deliver navigation paths for challenging mobile robotics scenarios, and to that end we also deliver safe traversable regions extracted from a previously computed, globally consistent map. We first examine the problem of registering dense point clouds acquired at different instants in time. We contribute a novel range registration mechanism for pairs of 3D range scans that uses point-to-point and point-to-line correspondences in a hierarchical correspondence search strategy. For the minimization we adopt a metric that takes into account not only the distance between corresponding points, but also the orientation of their relative reference frames. We also propose FaMSA, a fast technique for multi-scan point cloud alignment that exploits the point correspondences asserted during sequential scan matching, using the point match history to speed up the computation of new scan matches. To properly propagate the model of the sensor noise and of the scan matching, we employ first-order error propagation, and to correct the error accumulated by local data alignment, we consider the probabilistic alignment of 3D point clouds using a delayed-state Extended Information Filter (EIF). We also adapt the Pose SLAM algorithm to the case of 3D range mapping; Pose SLAM is the variant of SLAM in which only the robot trajectory is estimated and sensor data are used solely to produce relative constraints between robot poses. These dense mapping techniques are tested in several scenarios acquired with our 3D sensors, producing rich 3D environment models.
    The computed maps are then processed to identify traversable regions and to plan navigation sequences. We present a pair of methods for high-level off-line classification of traversable areas, in which training data is acquired automatically from navigation sequences. Traversable features come from robot footprint samples recorded during manual robot motion, allowing us to capture terrain constraints that are not easy to model. Using only some of the traversed areas as positive training samples, our algorithms are tested in real scenarios to find the rest of the traversable terrain, and are compared with a naive parametric classifier and with several variants of the Support Vector Machine.
    Finally, we contribute a path planner that guarantees reachability of a desired robot pose with significantly lower computation time than competing alternatives. To search for the best path, the planner incrementally builds a tree using the A* algorithm. It includes a hybrid cost policy to expand the search tree efficiently, combining random sampling from the continuous space of kinematically feasible motion commands with a cost-to-goal metric that also takes into account the vehicle's nonholonomic constraints. The planner also allows for node rewiring, and to speed up the node search it includes heuristics that penalize node expansion near obstacles and limit the number of explored nodes. The method book-keeps visited cells in the configuration space and disallows node expansion at configurations already visited during the first full iteration of the algorithm.
    We validate the proposed methods with extensive experiments in real scenarios from several very complex 3D outdoor environments, and compare them with other techniques such as the A*, RRT and RRT* algorithms.
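    As a toy illustration of combining point-to-point and point-to-line terms in a single rigid-body least-squares problem (not the thesis's hierarchical matcher, FaMSA, or its orientation-aware metric; the interface and the solver choice are assumptions of mine), one could write:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def align_scan(p_src, p_dst, l_src, l_pt, l_dir, x0=np.zeros(6)):
    """Estimate a 3D rigid transform (rotation vector + translation) that maps
    a source scan onto a destination scan using two kinds of correspondences:
      - point-to-point pairs: p_src[i] should land on p_dst[i]
      - point-to-line pairs:  l_src[i] should land on the line through
        l_pt[i] with unit direction l_dir[i]
    """
    def residuals(x):
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        r_pp = p_src @ R.T + t - p_dst                  # point-to-point error
        d = l_src @ R.T + t - l_pt
        # point-to-line error: remove the component along the line direction
        r_pl = d - np.sum(d * l_dir, axis=1, keepdims=True) * l_dir
        return np.concatenate([r_pp.ravel(), r_pl.ravel()])

    sol = least_squares(residuals, x0)
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]
```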

    LiDAR-Based Place Recognition For Autonomous Driving: A Survey

    LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving, where it assists Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge there is no dedicated systematic review of this area. This paper bridges that gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, which offers detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results of various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website: https://github.com/ShiPC-AI/LPR-Survey. (26 pages, 13 figures, 5 tables)
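    To make the problem formulation concrete, the sketch below shows the generic retrieval pipeline that LPR methods instantiate: reduce each scan to a global descriptor and query a database of previously visited places by nearest neighbour. The range-histogram descriptor and the distance threshold are deliberately crude placeholders of mine; the survey covers far stronger descriptors.

```python
import numpy as np

def range_histogram(scan_xyz, n_bins=20, max_range=80.0):
    """A deliberately simple global descriptor: a normalized histogram of
    planar ranges. Real LPR descriptors are far more discriminative."""
    r = np.linalg.norm(scan_xyz[:, :2], axis=1)
    hist, _ = np.histogram(r, bins=n_bins, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)

class PlaceDatabase:
    """Stores one descriptor per visited place and answers loop-closure
    queries by nearest neighbour in descriptor space."""
    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.descriptors, self.poses = [], []

    def add(self, scan_xyz, pose):
        self.descriptors.append(range_histogram(scan_xyz))
        self.poses.append(pose)

    def query(self, scan_xyz):
        """Return (index, pose) of the best match, or None if nothing is close."""
        if not self.descriptors:
            return None
        q = range_histogram(scan_xyz)
        dists = np.linalg.norm(np.asarray(self.descriptors) - q, axis=1)
        best = int(np.argmin(dists))
        return (best, self.poses[best]) if dists[best] < self.threshold else None
```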

    Fusion of aerial images and sensor data from a ground vehicle for improved semantic mapping

    This work investigates the use of semantic information to link ground-level occupancy maps and aerial images. A ground-level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omnidirectional camera, GPS and a laser range finder. This semantic information is used for local and global segmentation of an aerial image. The result is a map where the semantic information has been extended beyond the range of the robot's sensors and predicts where the mobile robot can find buildings and potentially drivable ground.
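    The abstract does not describe the segmentation machinery; purely as an illustration of the general idea (ground-level semantic labels acting as seeds for segmenting the geo-referenced aerial image), a minimal seeded region-growing sketch might look as follows. The colour-difference rule, the 4-connectivity and the tolerance are assumptions, not the paper's method.

```python
import numpy as np
from collections import deque

def grow_from_seeds(aerial_image, seeds, color_tol=20.0):
    """Seeded region growing: starting from aerial-image pixels that the
    ground-level semantic map marked as building, extend the 'building'
    label to 4-connected neighbours with a similar colour."""
    h, w, _ = aerial_image.shape
    labelled = np.zeros((h, w), dtype=bool)
    queue = deque(seeds)                    # seeds: list of (row, col) pixels
    for r, c in seeds:
        labelled[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not labelled[rr, cc]:
                diff = (aerial_image[rr, cc].astype(float)
                        - aerial_image[r, c].astype(float))
                if np.linalg.norm(diff) < color_tol:
                    labelled[rr, cc] = True
                    queue.append((rr, cc))
    return labelled
```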

    A Distributed Online 3D-LiDAR Mapping System


    Adaptive Learning Terrain Estimation for Unmanned Aerial Vehicle Applications

    For the past decade, terrain mapping research has focused on ground robots using occupancy grids and tree-like data structures such as OctoMap and quadtrees. Since flight vehicles have different constraints, ground-based terrain mapping research may not be directly applicable to the aerospace industry. To address this issue, Adaptive Learning Terrain Estimation algorithms have been developed with an aim towards aerospace applications. This thesis develops and tests Adaptive Learning Terrain Estimation algorithms using a custom test benchmark on representative aerospace cases: autonomous UAV landing and UAV flight through 3D urban environments. The fundamental objective of this thesis is to investigate the use of Adaptive Learning Terrain Estimation algorithms for aerospace applications and to compare their performance to commonly used mapping techniques such as quadtrees and OctoMap. To test the algorithms, point clouds were collected and registered in simulated and real environments. Then, the Adaptive Learning, quadtree, and OctoMap algorithms were applied to the data sets, both in real time and offline. Finally, metrics of map size, accuracy, and running time were developed and implemented to quantify and compare the performance of the algorithms. The results show that the quadtree yields the computationally lightest maps, but it is not suitable for real-time implementation due to its lack of recursiveness. Adaptive Learning maps are computationally efficient due to the use of multiresolution grids. OctoMap yields the most detailed maps, but it produces a high computational load. Overall, the results show that Adaptive Learning algorithms have significant potential for real-time implementation in aerospace applications. Their low memory load and variable-sized grids make them viable candidates for future research and development.
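    For contrast with a uniform-grid elevation map, the sketch below shows how a quadtree keeps terrain storage compact by subdividing a cell only where the elevation varies. This is a generic quadtree illustration, not the thesis's Adaptive Learning algorithm, and the flatness tolerance and minimum cell size are assumed values.

```python
import numpy as np

class QuadtreeCell:
    """Multiresolution elevation cell: stores a single height where the local
    terrain is flat enough, and subdivides into four children where it is not."""

    def __init__(self, x0, y0, size, points, tol=0.2, min_size=0.5):
        self.x0, self.y0, self.size = x0, y0, size
        self.children, self.height = None, None
        if len(points) == 0:
            return                                   # unknown cell
        z = points[:, 2]
        if size <= min_size or z.max() - z.min() <= tol:
            self.height = float(z.mean())            # flat enough: one value
            return
        half = size / 2.0
        self.children = []
        for dx in (0.0, half):                       # subdivide into 4 quadrants
            for dy in (0.0, half):
                mask = ((points[:, 0] >= x0 + dx) & (points[:, 0] < x0 + dx + half) &
                        (points[:, 1] >= y0 + dy) & (points[:, 1] < y0 + dy + half))
                self.children.append(
                    QuadtreeCell(x0 + dx, y0 + dy, half, points[mask], tol, min_size))

    def query(self, x, y):
        """Return the stored elevation at (x, y), or None where unknown."""
        if self.children is None:
            return self.height
        half = self.size / 2.0
        idx = 2 * int(x >= self.x0 + half) + int(y >= self.y0 + half)
        return self.children[idx].query(x, y)
```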