72 research outputs found

    Underwater inspection using sonar-based volumetric submaps

    Get PDF
    We propose a submap-based technique for mapping of underwater structures with complex geometries. Our approach relies on the use of probabilistic volumetric techniques to create submaps from multibeam sonar scans, as these offer increased outlier robustness. Special attention is paid to the problem of denoising/enhancing sonar data. Pairwise submap alignment constraints are used in a factor graph framework to correct for navigation drift and improve map accuracy. We provide experimental results obtained from the inspection of the running gear and bulbous bow of a 600-foot, Wright-class supply ship. Funded by the United States Office of Naval Research (N00014-12-1-0093, N00014-14-1-0373).
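
    The pairwise-alignment-in-a-factor-graph idea can be illustrated with a toy example. The sketch below is not the paper's system: it uses SE(2) poses instead of full 6-DoF ones, scipy's generic least-squares solver instead of a factor graph library, and hand-written relative-pose measurements in place of sonar-derived submap alignments.

```python
# Toy SE(2) pose-graph sketch: odometry constraints plus one loop-closing
# "submap alignment" constraint, solved with scipy.  Illustrative only.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Normalize an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(a, b):
    """Pose of b expressed in the frame of a; each pose is (x, y, theta)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, b[2] - a[2]])

# Edges: (i, j, measured relative pose).  The last edge plays the role of a
# pairwise submap-alignment constraint closing the loop back to pose 0.
edges = [
    (0, 1, np.array([1.0, 0.0, np.pi / 2])),
    (1, 2, np.array([1.0, 0.0, np.pi / 2])),
    (2, 3, np.array([1.0, 0.0, np.pi / 2])),
    (3, 0, np.array([1.0, 0.0, np.pi / 2])),   # "submap alignment" loop closure
]

def residuals(x):
    poses = x.reshape(-1, 3)
    res = [poses[0]]                       # prior anchoring pose 0 at the origin
    for i, j, meas in edges:
        err = relative_pose(poses[i], poses[j]) - meas
        err[2] = wrap(err[2])              # keep the angle error in (-pi, pi]
        res.append(err)
    return np.concatenate(res)

solution = least_squares(residuals, np.zeros(12))
print(solution.x.reshape(-1, 3))           # corrected poses of a square path
```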

    Search and Rescue under the Forest Canopy using Multiple UAVs

    Full text link
    We present a multi-robot system for GPS-denied search and rescue under the forest canopy. Forests are particularly challenging environments for collaborative exploration and mapping, in large part due to severe perceptual aliasing, which hinders reliable loop closure detection for mutual localization and map fusion. Our proposed system features unmanned aerial vehicles (UAVs) that perform onboard sensing, estimation, and planning. When communication is available, each UAV transmits compressed tree-based submaps to a central ground station for collaborative simultaneous localization and mapping (CSLAM). To overcome high measurement noise and perceptual aliasing, we use the local configuration of a group of trees as a distinctive feature for robust loop closure detection. Furthermore, we propose a novel procedure based on cycle-consistent multiway matching to recover from incorrect pairwise data associations. The returned global data association is guaranteed to be cycle consistent, and is shown to improve both precision and recall compared to the input pairwise associations. The proposed multi-UAV system is validated both in simulation and during real-world collaborative exploration missions at NASA Langley Research Center. Comment: IJRR revision.
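
    To illustrate what cycle consistency of data associations means, the toy sketch below encodes pairwise landmark associations between three agents as permutation matrices and counts the matches that break the A-B-C cycle. The landmark indices and matrices are made up, and the paper's actual contribution, recovering a cycle-consistent multiway matching, is not reproduced here.

```python
# Toy check of cycle consistency for pairwise data associations between three
# robots/submaps A, B, C, encoded as permutation matrices over tree landmarks.
# This only shows how an inconsistent triplet is detected, not how the paper
# recovers a consistent solution.
import numpy as np

def perm(mapping, n):
    """Permutation matrix P with P[j, i] = 1 iff landmark i maps to landmark j."""
    P = np.zeros((n, n))
    for i, j in mapping.items():
        P[j, i] = 1
    return P

n = 4
P_ab = perm({0: 1, 1: 0, 2: 2, 3: 3}, n)   # association A -> B
P_bc = perm({0: 3, 1: 2, 2: 1, 3: 0}, n)   # association B -> C
P_ac = perm({0: 2, 1: 3, 2: 0, 3: 1}, n)   # association A -> C (two matches swapped)

# Cycle consistency requires P_ac == P_bc @ P_ab; count disagreements.
mismatch = int(np.abs(P_bc @ P_ab - P_ac).sum()) // 2
print("inconsistent matches in the A-B-C cycle:", mismatch)   # prints 2
```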

    Submap Matching for Stereo-Vision Based Indoor/Outdoor SLAM

    Get PDF
    Autonomous robots operating in semi- or unstructured environments, e.g. during search and rescue missions, require methods for online, on-board creation of maps to support path planning and obstacle avoidance. Perception based on stereo cameras is well suited for mixed indoor/outdoor environments. The creation of full 3D maps in GPS-denied areas, however, is still a challenging task for current robot systems, in particular due to depth errors resulting from stereo reconstruction. State-of-the-art 6D SLAM approaches employ graph-based optimization on the relative transformations between keyframes or local submaps. To achieve loop closures, correct data association is crucial, in particular for sensor input received at different points in time. To address this challenge, we propose a novel method for submap matching. It is based on robust keypoints, which we derive from local obstacle classification. By describing geometric 3D features, we achieve invariance to changing viewpoints and varying lighting conditions. We performed experiments in indoor, outdoor and mixed environments. In all three scenarios we achieved a final 3D position error of less than 0.23% of the full trajectory. In addition, we compared our approach with a 3D RBPF SLAM from previous work, achieving an improvement of at least 27% in mean 2D localization accuracy in different scenarios.
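
    Once keypoints from two submaps have been matched, the loop-closure constraint comes from a rigid alignment of the matched points. The sketch below shows only that generic step (the standard Kabsch/Umeyama SVD solution on synthetic matches); the paper's obstacle-classification keypoints and descriptors are not reproduced.

```python
# Generic sketch: given matched 3D keypoints from two submaps, estimate the
# rigid transform that aligns them (Kabsch/Umeyama via SVD), which can then
# serve as a loop-closure constraint between the submaps.
import numpy as np

def rigid_transform(src, dst):
    """Return R, t with dst ~= R @ src + t for Nx3 matched keypoint arrays."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))                                   # synthetic keypoints
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
dst = src @ R_true.T + np.array([2.0, 0.5, -1.0])                # transformed copies
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), t)
```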

    Present and Future of SLAM in Extreme Underground Environments

    Full text link
    This paper reports on the state of the art in underground SLAM by discussing different SLAM strategies and results across six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams; particular emphasis is put on lidar-centric SLAM solutions (the go-to approach for virtually all teams in the competition), heterogeneous multi-robot operation (including both aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems that are likely to require further research to break through. Finally, we provide a list of open-source SLAM implementations and datasets that have been produced during the SubT challenge and related efforts, and that constitute a useful resource for researchers and practitioners. Comment: 21 pages including references. This survey paper is submitted to IEEE Transactions on Robotics for pre-approval.

    Efficient and elastic LiDAR reconstruction for large-scale exploration tasks

    Get PDF
    High-quality reconstructions and understanding of the environment are essential for robotic tasks such as localisation, navigation and exploration. Applications like planners and controllers can make decisions based on them. International competitions such as the DARPA Subterranean Challenge demonstrate the difficulties that reconstruction methods must address in the real world, e.g. complex surfaces in unstructured environments, accumulation of localisation errors in long-term explorations, and the necessity for methods to be scalable and efficient in large-scale scenarios. Guided by these motivations, this thesis presents a multi-resolution volumetric reconstruction system, supereight-Atlas (SE-Atlas). SE-Atlas efficiently integrates long-range LiDAR scans at high resolution, incorporates motion undistortion, and employs an Atlas of submaps to produce an elastic 3D reconstruction. These features address limitations of conventional reconstruction techniques that were revealed in real-world experiments with an initial active perceptual planning prototype. Our experiments with SE-Atlas show that it can integrate LiDAR scans at 60 m range with ∼5 cm resolution at ∼3 Hz, outperforming state-of-the-art methods in integration speed and memory efficiency. Reconstruction accuracy evaluation also shows that SE-Atlas can correct the map upon SLAM loop closure corrections, maintaining global consistency. We further propose four principled strategies for spawning and fusing submaps. Based on spatial analysis, SE-Atlas spawns new submaps when the robot transitions into an isolated space, and fuses submaps of the same space together. We focused on developing a system that scales with environment size rather than exploration length. A new formulation is proposed to compute relative uncertainties between poses in a SLAM pose graph, improving submap fusion reliability. Our experiments show that the average error in a large-scale map is approximately 5 cm. A further contribution is the incorporation of semantic information into SE-Atlas. A recursive Bayesian filter is used to maintain consistency in per-voxel semantic labels. Semantic information is leveraged to detect indoor-outdoor transitions and adjust reconstruction parameters online.
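
    One plausible form of the recursive Bayesian filter for per-voxel semantic labels is a discrete Bayes update over class probabilities. The sketch below assumes a fixed, hypothetical confusion model for the label observations; it is not SE-Atlas's actual formulation.

```python
# Minimal per-voxel recursive Bayesian label filter, as one way to realise the
# semantic fusion described above.  The confusion model p(observed | true) and
# the class set are made up for illustration.
import numpy as np

N_CLASSES = 3                                   # e.g. floor, wall, vegetation
CONFUSION = np.full((N_CLASSES, N_CLASSES), 0.1)
np.fill_diagonal(CONFUSION, 0.8)                # rows: true class, cols: observed

def update(belief, observed_class):
    """One recursive Bayes step for a single voxel's class belief."""
    likelihood = CONFUSION[:, observed_class]   # p(observation | each true class)
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.full(N_CLASSES, 1.0 / N_CLASSES)    # uniform prior for a new voxel
for z in [2, 2, 1, 2]:                          # successive (noisy) observations
    belief = update(belief, z)
print(belief)                                   # mass concentrates on class 2
```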

    Topological local-metric framework for mobile robots navigation: a long term perspective

    Full text link
    Long-term mapping and localization are primary components for mobile robots deployed in real-world applications, for which the crucial challenge is robustness and stability. In this paper, we introduce a topological local-metric framework (TLF) aimed at dealing with environmental changes and erroneous measurements while achieving constant complexity. TLF organizes the sensor data collected by the robot in a topological graph whose geometry is encoded only in the edges, i.e. the relative poses between adjacent nodes, relaxing global consistency to local consistency. The TLF is therefore more robust to unavoidable erroneous measurements from sensor information matching, since errors are constrained locally. Because TLF has no global coordinate frame, we further propose localization and navigation algorithms that switch across multiple local metric coordinate frames. In addition, a lifelong memorizing mechanism is presented that memorizes environmental changes in the TLF with constant complexity, as no global optimization is required. In experiments, the framework and algorithms are evaluated on 21-session data collected by stereo cameras, which are sensitive to illumination, and compared with a state-of-the-art globally consistent framework. The results demonstrate that TLF achieves localization accuracy similar to that of the globally consistent framework, but brings higher robustness at lower cost. The localization performance also improves across sessions because of the memorizing mechanism. Finally, equipped with TLF, the robot autonomously navigates itself over a 1 km session.
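
    The core bookkeeping of such a local-metric graph can be sketched in a few lines: edges store only relative transforms, and a target expressed in one node's frame is re-expressed in another node's frame by composing transforms along a connecting path. The node ids, transforms, and fixed path below are illustrative, not taken from TLF.

```python
# Sketch of local-metric bookkeeping: the graph stores only relative SE(2)
# transforms on edges (as 3x3 homogeneous matrices); there is no global frame.
import numpy as np

def se2(x, y, theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

# edge (a, b) stores T_ab: coordinates in frame b mapped into frame a.
edges = {
    ("n0", "n1"): se2(2.0, 0.0, 0.1),
    ("n1", "n2"): se2(1.5, 0.3, -0.2),
}

def transform_along(path):
    """Compose edge transforms along a node path, inverting reversed edges."""
    T = np.eye(3)
    for a, b in zip(path, path[1:]):
        T_ab = edges[(a, b)] if (a, b) in edges else np.linalg.inv(edges[(b, a)])
        T = T @ T_ab
    return T

goal_in_n2 = np.array([0.5, 0.0, 1.0])          # homogeneous point in frame n2
goal_in_n0 = transform_along(["n0", "n1", "n2"]) @ goal_in_n2
print(goal_in_n0[:2])                           # same point expressed in frame n0
```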

    Toward autonomous underwater mapping in partially structured 3D environments

    Get PDF
    Submitted in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2014. Motivated by inspection of complex underwater environments, we have developed a system for multi-sensor SLAM utilizing both structured and unstructured environmental features. We present a system for deriving planar constraints from sonar data and jointly optimizing the vehicle and plane positions as nodes in a factor graph. We also present a system for outlier rejection and smoothing of 3D sonar data, and for generating loop closure constraints based on the alignment of smoothed submaps. Our factor graph SLAM backend combines loop closure constraints from sonar data with detections of visual fiducial markers from camera imagery, and produces an online estimate of the full vehicle trajectory and landmark positions. We evaluate our technique on an inspection of a decommissioned aircraft carrier, as well as on synthetic data and controlled indoor experiments, demonstrating improved trajectory estimates and reduced reprojection error in the final 3D map.
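
    A generic building block behind planar constraints is fitting a plane to a patch of range returns and measuring point-to-plane residuals. The sketch below shows that step on synthetic points; how the fitted plane enters the factor graph as a jointly optimized node is the thesis's contribution and is not reproduced here.

```python
# Generic sketch of deriving a planar constraint from a patch of 3D returns:
# fit a plane with SVD and compute point-to-plane residuals on synthetic data.
import numpy as np

def fit_plane(points):
    """Least-squares plane through Nx3 points; returns unit normal n and offset
    d such that n . p + d ~= 0 for points p on the plane."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]                         # direction of smallest variance
    return normal, -normal @ centroid

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(50, 2))
z = 0.2 * xy[:, 0] + 1.0 + 0.05 * rng.normal(size=50)   # noisy tilted plane
pts = np.column_stack([xy, z])

n, d = fit_plane(pts)
residuals = pts @ n + d                     # signed point-to-plane distances
print(n, d, np.abs(residuals).mean())
```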

    A Drift-Resilient and Degeneracy-Aware Loop Closure Detection Method for Localization and Mapping In Perceptually-Degraded Environments

    Get PDF
    Enabling fully autonomous robots capable of navigating and exploring unknown and complex environments has been at the core of robotics research for several decades. Mobile robots rely on a model of the environment for functions like manipulation, collision avoidance and path planning. In GPS-denied and unknown environments where a prior map is not available, robots need to rely on onboard sensing to obtain locally accurate maps and operate in their local environment. A global map of an unknown environment can be constructed by fusing the local maps of temporally or spatially distributed mobile robots in the environment. Loop closure detection, the ability to assert that a robot has returned to a previously visited location, is crucial for consistent mapping as it reduces the drift caused by error accumulation in the estimated robot trajectory. Moreover, in multi-robot systems, loop closure detection enables finding the correspondences between the local maps obtained by individual robots and merging them into a consistent global map of the environment. In ambiguous and perceptually-degraded environments, robust detection of intra- and inter-robot loop closures is especially challenging. This is due to poor illumination or the lack thereof, self-similarity, and the sparsity of distinctive perceptual landmarks and features sufficient for establishing global position. Overcoming these challenges enables a wide range of terrestrial and planetary applications, ranging from search and rescue and disaster relief in hostile environments to robotic exploration of lunar and Martian surfaces, caves and lava tubes, which are of particular interest as potential habitats for future manned space missions. In this dissertation, methods and metrics are developed for resolving location ambiguities to significantly improve loop closures in perceptually-degraded environments with sparse or undifferentiated features. The first contribution of this dissertation is the development of a degeneracy-aware SLAM front-end capable of determining the level of geometric degeneracy in an unknown environment based on the Hessian associated with the optimal transformation computed by lidar scan matching. Using this crucial capability, featureless areas that could lead to data association ambiguity and spurious loop closures are determined and excluded from the search for loop closures. This significantly improves the quality and accuracy of localization and mapping, because the search space for loop closures can be expanded as needed to account for drift while decreasing rather than increasing the probability of false loop closure detections. The second contribution of this dissertation is the development of a drift-resilient loop closure detection method that relies on 2D semantic and 3D geometric features extracted from lidar point cloud data to enable detection of loop closures with increased robustness and accuracy compared to traditional geometric methods. The proposed method achieves higher performance by exploiting the spatial configuration of the local scenes embedded in the 2D occupancy grid maps commonly used in robot navigation to search for putative loop closures in a pre-matching step before geometric verification. The third contribution of this dissertation is an extensive evaluation and analysis of performance and a comparison with state-of-the-art methods in simulation and in real-world experiments, including six challenging underground mines across the United States.
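
    The degeneracy check described in the first contribution can be illustrated with the Gauss-Newton Hessian of a point-to-plane cost: directions whose eigenvalues are small are poorly constrained by the scan geometry. The sketch below uses only the translational block, synthetic corridor normals, and a made-up threshold; it is not the dissertation's exact formulation.

```python
# Generic degeneracy check for scan matching: approximate the Hessian of a
# point-to-plane objective (translation part only) as H = sum n n^T over
# surface normals, and flag directions whose eigenvalue is below a threshold.
import numpy as np

def translation_hessian(normals):
    """Gauss-Newton Hessian of the point-to-plane cost w.r.t. translation."""
    return sum(np.outer(n, n) for n in normals)

# A long featureless corridor: almost all normals point across the corridor,
# so the along-corridor (x) direction is unconstrained, i.e. degenerate.
rng = np.random.default_rng(2)
normals = np.vstack([
    np.tile([0.0, 1.0, 0.0], (80, 1)),        # side walls
    np.tile([0.0, 0.0, 1.0], (80, 1)),        # floor and ceiling
]) + 0.01 * rng.normal(size=(160, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

H = translation_hessian(normals)
eigvals, eigvecs = np.linalg.eigh(H)
THRESHOLD = 5.0                               # hypothetical degeneracy cutoff
for lam, v in zip(eigvals, eigvecs.T):
    tag = "DEGENERATE" if lam < THRESHOLD else "well constrained"
    print(f"direction {np.round(v, 2)}  eigenvalue {lam:6.1f}  {tag}")
```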

    A pose pruning driven solution to pose feature GraphSLAM

    Full text link
    © 2015 Taylor & Francis and The Robotics Society of Japan. To build consistent feature-based map for the environment, GraphSLAM forms the graph using the collected information, with poses of robot and features being nodes while the odometry and observations being binary edges (edge links to two nodes). As the number of kept nodes grows unboundedly while robot moves, this method will become intractable for long-duration operation. In this paper, we propose a pose pruning-driven solution for pose feature Simultaneous localization and mapping by relating the size of graph to the size of map instead of the length of trajectory. It consists of two steps: (1) An online pose pruning algorithm that can select a pose to be pruned based on the contribution of the pose. Different from conventional methods considering the spatial distance between poses, the contribution is based on the feature observations of poses, taking mapping into consideration. (2) An edge generation algorithm that can build new consistent binary edges from -nary edge (edge links to nodes) induced by marginalizing the pruned pose. The type of new edges remains invariant (i.e. they are either odometry or pose to feature observations), so no extra change is required to be made on the GraphSLAM optimizer, making the proposed solution modular. In the experiment, we first employ this system on simulation data-sets to show how it works. Then the large-scale data-sets: DLR, Victoria Park, and CityTrees10000 are used to evaluate its performance