
    Submap Matching for Stereo-Vision Based Indoor/Outdoor SLAM

    Autonomous robots operating in semi- or unstructured environments, e.g. during search and rescue missions, require methods for online on-board creation of maps to support path planning and obstacle avoidance. Perception based on stereo cameras is well suited for mixed indoor/outdoor environments. The creation of full 3D maps in GPS-denied areas, however, is still a challenging task for current robot systems, in particular due to depth errors resulting from stereo reconstruction. State-of-the-art 6D SLAM approaches employ graph-based optimization on the relative transformations between keyframes or local submaps. To achieve loop closures, correct data association is crucial, in particular for sensor input received at different points in time. To address this challenge, we propose a novel method for submap matching. It is based on robust keypoints, which we derive from local obstacle classification. By describing geometrical 3D features, we achieve invariance to changing viewpoints and varying light conditions. We performed experiments in indoor, outdoor and mixed environments. In all three scenarios we achieved a final 3D position error of less than 0.23% of the full trajectory length. In addition, we compared our approach with a 3D RBPF SLAM from previous work, achieving an improvement of at least 27% in mean 2D localization accuracy in different scenarios.
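    The submap alignment underlying such loop closures can be illustrated with a closed-form rigid-transform fit between matched 3D keypoints. The following is a minimal sketch using the standard Kabsch/SVD solution, not the paper's actual matching pipeline; the keypoint correspondences are assumed to be given.

    ```python
    import numpy as np

    def estimate_rigid_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src -> dst.

        src, dst: (N, 3) arrays of matched 3D keypoints.
        Returns R (3x3 rotation) and t (3,) such that R @ s + t ~ d
        for each matched pair (s, d)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_dst - R @ c_src
        return R, t
    ```

    In a full pipeline this fit would typically be wrapped in an outlier-robust loop such as RANSAC, since real keypoint matches between submaps contain wrong associations.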

    Evaluation of RGB-D SLAM in Large Indoor Environments

    Simultaneous localization and mapping (SLAM) is one of the key components of a control system that aims to ensure autonomous navigation of a mobile robot in unknown environments. In a variety of practical cases a robot might need to travel long distances in order to accomplish its mission. This requires long-term operation of SLAM methods and the building of large maps, so the computational burden (including high memory consumption for map storage) becomes a bottleneck. State-of-the-art SLAM algorithms include specific techniques and optimizations to tackle this challenge, but their performance in long-term scenarios still needs proper assessment. To this end, we perform an empirical evaluation of two widespread state-of-the-art RGB-D SLAM methods suitable for long-term navigation, i.e. RTAB-Map and Voxgraph. We evaluate them in a large simulated indoor environment, consisting of corridors and halls, while varying the odometry noise for a more realistic setup. We provide both qualitative and quantitative analysis of both methods, uncovering their strengths and weaknesses. We find that both methods build a high-quality map with low odometry noise but tend to fail with high odometry noise. Voxgraph has lower relative trajectory estimation error and memory consumption than RTAB-Map, while its absolute error is higher.
    Comment: This is a pre-print of the paper accepted to the ICR 2022 conference.
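    Trajectory accuracy in such evaluations is commonly reported as absolute trajectory error (ATE) after rigid alignment of the estimate to the ground truth. The following is a minimal sketch of that metric, assuming time-associated position sequences; it uses a generic Horn/Umeyama-style alignment without scale, not the exact evaluation tooling of the paper.

    ```python
    import numpy as np

    def absolute_trajectory_error(est, gt):
        """RMSE of per-pose position error after rigidly aligning est to gt.

        est, gt: (N, 3) position sequences, already time-associated.
        The closed-form SVD alignment removes any global rigid offset so
        that only the unexplained per-pose error remains."""
        c_e, c_g = est.mean(axis=0), gt.mean(axis=0)
        H = (est - c_e).T @ (gt - c_g)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        aligned = (est - c_e) @ R.T + c_g
        return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))
    ```

    Relative (drift-style) errors are computed analogously over trajectory sub-segments instead of the whole track, which is why the paper can observe a method winning on one metric and losing on the other.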

    A constant-time SLAM back-end in the continuum between global mapping and submapping: application to visual stereo SLAM

    This work addresses the development and application of a novel approach, called sparser relative bundle adjustment (SRBA), which exploits the inherent flexibility of the relative bundle adjustment (RBA) framework to devise a continuum of strategies, ranging from RBA with linear graphs to classic bundle adjustment (BA) in global coordinates, where submapping with local maps emerges as a natural intermediate solution. This method leads to graphs that can be optimized in bounded time even at loop closures, regardless of the loop length. Furthermore, it is shown that the pattern in which relative coordinate variables are defined among keyframes has a significant impact on the graph optimization problem. By using the proposed scheme, optimization can be done more efficiently than in standard RBA, allowing the optimization of larger local maps for any given maximum computational cost. The main algorithms involved in graph management, along with their complexity analyses, are presented to prove their bounded-time nature. One key advance of the present work is the demonstration that, under mild assumptions, the spanning trees for every single keyframe in the map can be incrementally built by a constant-time algorithm, even for arbitrary graph topologies. We validate our proposal within the scope of visual stereo simultaneous localization and mapping (SLAM) by developing a complete system that includes a front-end that seamlessly integrates several state-of-the-art computer vision techniques, such as ORB features and bag-of-words, along with a decision scheme for keyframe insertion, and an SRBA-based back-end that operates as the graph optimizer. Finally, a set of experiments in both indoor and outdoor conditions is presented to test the capabilities of this approach. Open-source implementations of the SRBA back-end and the stereo front-end have been released online.
    Ministerio de Ciencia e Innovación (DPI 2011-25483, DPI 2014-55826-R). Fondo Europeo de Desarrollo Regional – FEDER.
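    The relative-coordinate bookkeeping in RBA-style back-ends rests on spanning trees over the keyframe graph, rooted at each keyframe and limited to a local neighbourhood. As a simplified illustration only, and not the incremental constant-time update the paper demonstrates, a bounded-depth BFS sketch of such a local spanning tree:

    ```python
    from collections import deque

    def local_spanning_tree(graph, root, max_depth):
        """BFS spanning tree of keyframes within max_depth edges of root.

        graph: dict keyframe_id -> iterable of neighbour ids.
        Returns dict child -> parent (root maps to None) for every
        keyframe reached within the depth bound."""
        parent = {root: None}
        queue = deque([(root, 0)])
        while queue:
            node, depth = queue.popleft()
            if depth == max_depth:
                continue  # do not expand beyond the local neighbourhood
            for nb in graph[node]:
                if nb not in parent:
                    parent[nb] = node
                    queue.append((nb, depth + 1))
        return parent
    ```

    Because the tree is depth-bounded, its size, and hence the cost of the local optimization it supports, stays bounded regardless of the total map size, which is the intuition behind the constant-time claims in the abstract.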

    Collaborative Localization and Mapping for Autonomous Planetary Exploration: Distributed Stereo Vision-Based 6D SLAM in GNSS-Denied Environments

    Mobile robots are a crucial element of present and future scientific missions to explore the surfaces of foreign celestial bodies such as the Moon and Mars. The deployment of teams of robots makes it possible to improve efficiency and robustness in such challenging environments. As long communication round-trip times to Earth render the teleoperation of robotic systems inefficient or even impossible, on-board autonomy is a key to success. The robots operate in Global Navigation Satellite System (GNSS)-denied environments and thus have to rely on space-suitable on-board sensors such as stereo camera systems. They need to be able to localize themselves online, to model their surroundings, and to share information about the environment and their position therein. These capabilities constitute the basis for the local autonomy of each system as well as for any coordinated joint action within the team, such as collaborative autonomous exploration. In this thesis, we present a novel approach for stereo vision-based on-board and online Simultaneous Localization and Mapping (SLAM) for multi-robot teams, given the challenges imposed by planetary exploration missions. We combine distributed local and decentralized global estimation methods to get the best of both worlds: a local reference filter on each robot provides real-time local state estimates required for robot control and fast reactive behaviors. We designed a novel graph topology to incorporate these state estimates into an online incremental graph optimization that computes global pose and map estimates, which serve as input to higher-level autonomy functions. In order to model the 3D geometry of the environment, we generate dense 3D point cloud and probabilistic voxel-grid maps from noisy stereo data.
    We distribute the computational load and reduce the required communication bandwidth between robots by locally aggregating high-bandwidth vision data into partial maps that are then exchanged between robots and composed into global models of the environment. We developed methods for intra- and inter-robot map matching to recognize previously visited locations in semi- and unstructured environments based on their estimated local geometry, which is mostly invariant to light conditions as well as to different sensors and viewpoints in heterogeneous multi-robot teams. A decoupling of observable and unobservable states in the local filter allows us to introduce a novel optimization: by enforcing all submaps to be gravity-aligned, we can reduce the dimensionality of the map matching from 6D to 4D. In addition to map matches, the robots use visual fiducial markers to detect each other. In this context, we present a novel method for modeling the errors of the loop closure transformations that are estimated from these detections. We demonstrate the robustness of our methods by integrating them on a total of five different ground-based and aerial mobile robots that were deployed in 31 real-world experiments for quantitative evaluations in semi- and unstructured indoor and outdoor settings. In addition, we validated our SLAM framework through several demonstrations at four public events in Moon- and Mars-like environments. These include, among others, autonomous multi-robot exploration tests at a Moon-analogue site on top of the volcano Mt. Etna, Italy, as well as the collaborative mapping of a Mars-like environment with a heterogeneous robotic team of flying and driving robots in more than 35 public demonstration runs.
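    The 6D-to-4D reduction from gravity alignment admits a closed-form fit: with roll and pitch fixed, only a yaw angle and a 3D translation remain. The following is a hypothetical sketch of that 4-DoF alignment, assuming matched points already expressed in gravity-aligned submap frames; it is not the map-matching method of the thesis itself.

    ```python
    import numpy as np

    def fit_4dof(src, dst):
        """4-DoF alignment (yaw + 3D translation) of gravity-aligned point sets.

        src, dst: (N, 3) matched points in two gravity-aligned submap frames.
        Returns (yaw, R, t) with R the rotation about the z (gravity) axis."""
        c_s, c_d = src.mean(axis=0), dst.mean(axis=0)
        ps, pd = src - c_s, dst - c_d
        # Closed-form yaw from the 2D cross-covariance of the horizontal parts.
        num = np.sum(ps[:, 0] * pd[:, 1] - ps[:, 1] * pd[:, 0])
        den = np.sum(ps[:, 0] * pd[:, 0] + ps[:, 1] * pd[:, 1])
        yaw = np.arctan2(num, den)
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        t = c_d - R @ c_s
        return yaw, R, t
    ```

    Estimating one angle instead of a full 3D rotation both shrinks the search space and avoids degenerate roll/pitch estimates from flat, feature-poor terrain.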

    Efficient and elastic LiDAR reconstruction for large-scale exploration tasks

    High-quality reconstructions and understanding of the environment are essential for robotic tasks such as localisation, navigation and exploration; applications like planners and controllers can make decisions based on them. International competitions such as the DARPA Subterranean Challenge demonstrate the difficulties that reconstruction methods must address in the real world, e.g. complex surfaces in unstructured environments, accumulation of localisation errors in long-term explorations, and the necessity for methods to be scalable and efficient in large-scale scenarios. Guided by these motivations, this thesis presents a multi-resolution volumetric reconstruction system, supereight-Atlas (SE-Atlas). SE-Atlas efficiently integrates long-range LiDAR scans at high resolution, incorporates motion undistortion, and employs an Atlas of submaps to produce an elastic 3D reconstruction. These features address limitations of conventional reconstruction techniques that were revealed in real-world experiments with an initial active perceptual planning prototype. Our experiments with SE-Atlas show that it can integrate LiDAR scans at 60 m range with ∼5 cm resolution at ∼3 Hz, outperforming state-of-the-art methods in integration speed and memory efficiency. A reconstruction accuracy evaluation also shows that SE-Atlas can correct the map upon SLAM loop closure corrections, maintaining global consistency. We further propose four principled strategies for spawning and fusing submaps: based on spatial analysis, SE-Atlas spawns new submaps when the robot transitions into an isolated space, and fuses submaps of the same space together. We focused on developing a system that scales with environment size rather than exploration length. A new formulation is proposed to compute relative uncertainties between poses in a SLAM pose graph, improving submap fusion reliability. Our experiments show that the average error in a large-scale map is approximately 5 cm.
    A further contribution was incorporating semantic information into SE-Atlas. A recursive Bayesian filter is used to maintain consistency in per-voxel semantic labels. Semantics are leveraged to detect indoor-outdoor transitions and adjust reconstruction parameters online.
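    A recursive Bayesian filter over per-voxel categorical labels reduces to multiplying the stored belief by each new measurement likelihood and renormalising. A minimal sketch of one such update step; the class count and the source of the likelihood vector are illustrative assumptions, not details from the thesis:

    ```python
    import numpy as np

    def update_label_belief(belief, likelihood):
        """One recursive Bayes step for a voxel's categorical label belief.

        belief: (K,) prior probabilities over K semantic classes.
        likelihood: (K,) per-class measurement likelihood, e.g. softmax
        scores from a semantic segmentation network.
        Returns the normalised posterior, which becomes the next prior."""
        posterior = belief * likelihood
        return posterior / posterior.sum()
    ```

    Repeated agreeing observations concentrate the belief on one class, while a single noisy misclassification only nudges it, which is what keeps the per-voxel labels consistent over time.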

    Search and Rescue under the Forest Canopy using Multiple UAVs

    We present a multi-robot system for GPS-denied search and rescue under the forest canopy. Forests are particularly challenging environments for collaborative exploration and mapping, in large part due to severe perceptual aliasing, which hinders reliable loop closure detection for mutual localization and map fusion. Our proposed system features unmanned aerial vehicles (UAVs) that perform onboard sensing, estimation, and planning. When communication is available, each UAV transmits compressed tree-based submaps to a central ground station for collaborative simultaneous localization and mapping (CSLAM). To overcome high measurement noise and perceptual aliasing, we use the local configuration of a group of trees as a distinctive feature for robust loop closure detection. Furthermore, we propose a novel procedure based on cycle-consistent multiway matching to recover from incorrect pairwise data associations. The returned global data association is guaranteed to be cycle consistent and is shown to improve both precision and recall compared to the input pairwise associations. The proposed multi-UAV system is validated both in simulation and during real-world collaborative exploration missions at NASA Langley Research Center.
    Comment: IJRR revision.
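    Cycle consistency over pairwise associations can be checked by composing matches around a loop and testing whether each landmark returns to itself. A toy sketch over three maps, far simpler than the multiway matching optimisation the paper actually uses, but showing the invariant it enforces:

    ```python
    def cycle_consistent(match_ab, match_bc, match_ca):
        """Return the landmark ids in map A whose matches are cycle consistent.

        Each match_xy is a dict mapping landmark ids in map x to ids in map y.
        A landmark a is cycle consistent if following a -> b -> c -> a
        around the loop returns the starting id."""
        consistent = set()
        for a, b in match_ab.items():
            c = match_bc.get(b)
            if c is not None and match_ca.get(c) == a:
                consistent.add(a)
        return consistent
    ```

    Associations that fail the cycle test are exactly the kind of perceptual-aliasing errors the abstract describes; pruning or re-estimating them is what improves precision without sacrificing recall.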

    Sensor fusion for flexible human-portable building-scale mapping

    This paper describes a system enabling rapid multi-floor indoor map building using a body-worn sensor system fusing information from RGB-D cameras, LIDAR, inertial, and barometric sensors. Our work is motivated by rapid response missions by emergency personnel, in which the capability for one or more people to rapidly map a complex indoor environment is essential for public safety. Human-portable mapping raises a number of challenges not encountered in typical robotic mapping applications, including complex 6-DOF motion and the traversal of challenging trajectories including stairs or elevators. Our system achieves robust performance in these situations by exploiting state-of-the-art techniques for robust pose graph optimization and loop closure detection. It achieves real-time performance in indoor environments of moderate scale. Experimental results are demonstrated for human-portable mapping of several floors of a university building, demonstrating the system's ability to handle motion up and down stairs and to organize initially disconnected sets of submaps in a complex environment.
    Lincoln Laboratory. United States Air Force (Contract FA8721-05-C-0002). United States Office of Naval Research (Grants N00014-10-1-0936, N00014-11-1-0688, N00014-12-10020).
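    Barometric sensing supports multi-floor mapping because pressure maps monotonically to altitude. A rough sketch of that idea using the international barometric formula; the reference pressure, the floor height, and the rounding-based floor assignment are illustrative assumptions, not the paper's method:

    ```python
    def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
        """Altitude in metres from pressure (hPa) via the international
        barometric formula, relative to the reference pressure p0_hpa."""
        return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

    def floor_index(p_hpa, p0_hpa, floor_height_m=3.5):
        """Assign an integer floor index relative to the reference pressure,
        assuming a hypothetical uniform storey height."""
        return round(pressure_to_altitude(p_hpa, p0_hpa) / floor_height_m)
    ```

    Absolute pressure drifts with weather, so in practice only short-term relative changes (e.g. during a stair or elevator traversal) are informative, which is why the barometer is fused with the other sensors rather than used alone.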

    The LRU Rover for Autonomous Planetary Exploration and its Success in the SpaceBotCamp Challenge

    The task of planetary exploration poses many challenges for a robot system, from weight and size constraints to sensors and actuators suitable for extraterrestrial environmental conditions. As there is a significant communication delay to other planets, the efficient operation of a robot system requires a high level of autonomy. In this work, we present the Light Weight Rover Unit (LRU), a small and agile rover prototype that we designed for the challenges of planetary exploration. Its locomotion system with individually steered wheels allows for high maneuverability in rough terrain, and the use of stereo cameras as its main sensor ensures applicability to space missions. We implemented software components for self-localization in GPS-denied environments, environment mapping, object search and localization, and the autonomous pickup and assembly of objects with its arm. Additional high-level mission control components facilitate both autonomous behavior and remote monitoring of the system state over a delayed communication link. We successfully demonstrated the autonomous capabilities of our LRU at the SpaceBotCamp challenge, a national robotics contest with a focus on autonomous planetary exploration. A robot had to autonomously explore a moon-like rough-terrain environment, locate and collect two objects, and assemble them after transport to a third object - which the LRU did on its first try, in half of the allotted time, and fully autonomously.