
    Toward lifelong visual localization and mapping

    Thesis (Ph.D.)--Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 171-181).
    Mobile robotic systems operating over long durations require algorithms that are robust and scale efficiently over time as sensor information is continually collected. For mobile robots, one of the fundamental problems is navigation, which requires the robot to have a map of its environment so that it can plan and execute its path. Having the robot use its perception sensors to perform simultaneous localization and mapping (SLAM) is beneficial for a fully autonomous system. Extending the time horizon of operations poses problems for current SLAM algorithms, both in terms of robustness and temporal scalability. To address this problem we propose a reduced pose graph model that significantly reduces the complexity of the full pose graph model. Additionally, we develop a SLAM system using two different sensor modalities: imaging sonars for underwater navigation and vision-based SLAM for terrestrial applications.
    Underwater navigation, where access to a global positioning system (GPS) is not possible, is one application domain that benefits from SLAM. In this thesis we present SLAM systems for two underwater applications. First, we describe our implementation of real-time imaging-sonar aided navigation applied to in-situ autonomous ship hull inspection using the hovering autonomous underwater vehicle (HAUV). In addition, we present an architecture that enables the fusion of information from both a sonar and a camera system. The system is evaluated using data collected during experiments on SS Curtiss and USCGC Seneca. Second, we develop a feature-based navigation system supporting multi-session mapping, and provide an algorithm for re-localizing the vehicle between missions. In addition, we present a method for managing the complexity of the estimation problem as new information is received. The system is demonstrated using data collected with a REMUS vehicle equipped with a BlueView forward-looking sonar.
    The model we use for mapping builds on the pose graph representation, which has been shown to be an efficient and accurate approach to SLAM. One of the problems with the pose graph formulation is that the state space grows continuously as more information is acquired. To address this problem we propose the reduced pose graph (RPG) model, which partitions the space to be mapped and uses the partitions to reduce the number of poses used for estimation. To evaluate our approach, we present results using an online binocular and RGB-Depth visual SLAM system that uses place recognition both for robustness and for multi-session operation. Additionally, to enable large-scale indoor mapping, our system automatically detects elevator rides based on accelerometer data. We demonstrate long-term mapping using approximately nine hours of data collected in the MIT Stata Center over the course of six months. Ground truth, derived by aligning laser scans to existing floor plans, is used to evaluate the global accuracy of the system. Our results illustrate the capability of our visual SLAM system to map a large-scale environment over an extended period of time.
    by Hordur Johannsson. Ph.D.
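    As a rough illustration of the reduced pose graph idea summarized above (a minimal sketch, not the thesis implementation; the fixed grid partitioning, class name and fields are assumptions made here for illustration), the following Python fragment reuses the existing pose of an already-mapped partition instead of adding a new one, so the number of estimated poses grows with the extent of the environment rather than with the duration of operation.

        # Minimal reduced-pose-graph sketch: partition the world with a 2D grid
        # (an assumption for illustration) and attach new constraints to the
        # partition's existing representative pose whenever one exists.
        class ReducedPoseGraph:
            def __init__(self, cell_size=1.0):
                self.cell_size = cell_size   # side length of one spatial partition
                self.poses = {}              # partition id -> representative pose (x, y, theta)
                self.factors = []            # (from_id, to_id, relative-pose measurement)

            def _partition(self, pose):
                x, y, _ = pose
                return (int(x // self.cell_size), int(y // self.cell_size))

            def add_measurement(self, current_pose, prev_id, relative_measurement):
                cell = self._partition(current_pose)
                if cell not in self.poses:           # first visit: add a new pose
                    self.poses[cell] = current_pose
                if prev_id is not None:              # revisits reuse the existing pose
                    self.factors.append((prev_id, cell, relative_measurement))
                return cell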

    Toward autonomous exploration in confined underwater environments

    Author Posting. © The Author(s), 2015. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 33 (2016): 994-1012, doi:10.1002/rob.21640.
    In this field note we detail the operations and discuss the results of an experiment conducted in the unstructured environment of an underwater cave complex, using an autonomous underwater vehicle (AUV). For this experiment the AUV was equipped with two acoustic sonars to simultaneously map the caves' horizontal and vertical surfaces. Although the caves' spatial complexity required AUV guidance by a diver, this field deployment successfully demonstrates a scan matching algorithm in a simultaneous localization and mapping (SLAM) framework that significantly reduces and bounds the localization error for fully autonomous navigation. These methods are generalizable to AUV exploration in confined underwater environments where surfacing or pre-deployment of localization equipment is not feasible, and may provide a useful step toward AUV utilization as a response tool in confined underwater disaster areas.
    This research work was partially sponsored by the EU FP7 projects Tecniospring-Marie Curie (TECSPR13-1-0052), MORPH (FP7-ICT-2011-7-288704), and Eurofleets2 (FP7-INF-2012-312762), and by the National Science Foundation (OCE-0955674).
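    The scan matching step referred to above can be pictured with a generic point-to-point alignment (ICP-style); the sketch below is only that generic picture, not the algorithm used in the field note, and all function names are placeholders. The relative transform it returns is the kind of constraint that bounds drift once it is added to the SLAM problem.

        # Generic 2D scan alignment sketch (point-to-point ICP), used here only to
        # illustrate how a relative transform between two sonar scans can be
        # estimated and then used as a SLAM constraint.
        import numpy as np
        from scipy.spatial import cKDTree

        def align_step(source, target):
            """One iteration: nearest-neighbour matches, then the best-fit rigid transform."""
            _, idx = cKDTree(target).query(source)
            matched = target[idx]
            src_mean, tgt_mean = source.mean(axis=0), matched.mean(axis=0)
            H = (source - src_mean).T @ (matched - tgt_mean)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against a reflection solution
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            return R, tgt_mean - R @ src_mean

        def align(source, target, iterations=20):
            """Iterate the step; the accumulated (R, t) is a scan-matching constraint."""
            src, R_total, t_total = source.copy(), np.eye(2), np.zeros(2)
            for _ in range(iterations):
                R, t = align_step(src, target)
                src = src @ R.T + t
                R_total, t_total = R @ R_total, R @ t_total + t
            return R_total, t_total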

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments. We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.
    Ph.D. Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
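    The planar-feature idea can be illustrated with a small sketch (the assumptions and names here are ours, not the thesis code): fit a plane to a local patch of sparse range returns and expose the point-to-plane distances that a factor-graph back end would drive toward zero.

        # Sketch: summarize a patch of sparse 3D range returns as a planar feature
        # and compute the residuals a planar factor would penalize.
        import numpy as np

        def fit_plane(points):
            """Least-squares plane n . x + d = 0 through an (N, 3) point patch."""
            centroid = points.mean(axis=0)
            _, _, Vt = np.linalg.svd(points - centroid)
            normal = Vt[-1]                    # direction of least variance
            return normal, -normal @ centroid

        def point_to_plane_residuals(points, normal, d):
            """Signed point-to-plane distances (the errors of a planar-feature factor)."""
            return points @ normal + d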

    SONIC: Sonar Image Correspondence using Pose Supervised Learning for Imaging Sonars

    In this paper, we address the challenging problem of data association for underwater SLAM through a novel method for sonar image correspondence using learned features. We introduce SONIC (SONar Image Correspondence), a pose-supervised network designed to yield robust feature correspondences capable of withstanding viewpoint variations. The inherent complexity of the underwater environment stems from dynamic and frequently limited visibility conditions, restricting vision to a few meters of often featureless expanses. This makes camera-based systems suboptimal in most open-water application scenarios. Consequently, multibeam imaging sonars emerge as the preferred choice for perception sensors. However, they too are not without their limitations. While imaging sonars offer superior long-range visibility compared to cameras, their measurements can appear markedly different when observed from varying viewpoints. This inherent variability presents formidable challenges in data association, particularly for feature-based methods. Our method demonstrates significantly better performance in generating correspondences for sonar images, which will pave the way for more accurate loop-closure constraints and sonar-based place recognition. Code, as well as simulated and real-world datasets, will be made public to facilitate further development in the field.
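    Pose supervision of the kind described above can be sketched as follows (an illustration only: imaging sonar discards the elevation angle, and the zero-elevation assumption and function names below are ours, not the paper's labeling procedure): with the relative pose between two sonar frames known, a detection in one frame is reprojected into the other to generate a correspondence label for training.

        # Sketch of generating a pose-supervised correspondence label between two
        # sonar frames related by a known rotation R and translation t.
        import numpy as np

        def sonar_to_cartesian(rng, bearing, elevation=0.0):
            """Range/bearing return to a 3D point; elevation is unobserved by an
            imaging sonar and is set to zero here purely for illustration."""
            return np.array([rng * np.cos(elevation) * np.cos(bearing),
                             rng * np.cos(elevation) * np.sin(bearing),
                             rng * np.sin(elevation)])

        def reproject(rng, bearing, R, t):
            """Predict where a (range, bearing) detection appears in the second frame."""
            p = R @ sonar_to_cartesian(rng, bearing) + t
            return np.linalg.norm(p), np.arctan2(p[1], p[0])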

    Sparse Bayesian information filters for localization and mapping

    Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2008.
    This thesis formulates an estimation framework for Simultaneous Localization and Mapping (SLAM) that addresses the problem of scalability in large environments. We describe an estimation-theoretic algorithm that achieves significant gains in computational efficiency while maintaining consistent estimates for the vehicle pose and the map of the environment. We specifically address the feature-based SLAM problem, in which the robot represents the environment as a collection of landmarks. The thesis takes a Bayesian approach whereby we maintain a joint posterior over the vehicle pose and feature states, conditioned upon measurement data. We model the distribution as Gaussian and parametrize the posterior in the canonical form, in terms of the information (inverse covariance) matrix. When sparse, this representation is amenable to computationally efficient Bayesian SLAM filtering. However, while a large majority of the elements within the normalized information matrix are very small in magnitude, the matrix is nonetheless fully populated. Recent feature-based SLAM filters achieve the scalability benefits of a sparse parametrization by explicitly pruning these weak links in an effort to enforce sparsity. We analyze one such algorithm, the Sparse Extended Information Filter (SEIF), which has laid much of the groundwork concerning the computational benefits of the sparse canonical form. The thesis performs a detailed analysis of the process by which the SEIF approximates the sparsity of the information matrix and reveals key insights into the consequences of different sparsification strategies. We demonstrate that the SEIF yields a sparse approximation to the posterior that is inconsistent, suffering from exaggerated confidence estimates. This overconfidence has detrimental effects on important aspects of the SLAM process and affects the higher-level goal of producing accurate maps for subsequent localization and path planning. This thesis proposes an alternative scalable filter that maintains sparsity while preserving the consistency of the distribution. We leverage insights into the natural structure of the feature-based canonical parametrization and derive a method that actively maintains an exactly sparse posterior. Our algorithm exploits the structure of the parametrization to achieve gains in efficiency, with a computational cost that scales linearly with the size of the map. Unlike similar techniques that sacrifice consistency for improved scalability, our algorithm performs inference over a posterior that is conservative relative to the nominal Gaussian distribution. Consequently, we preserve the consistency of the pose and map estimates and avoid the effects of an overconfident posterior. We demonstrate our filter alongside the SEIF and the standard EKF both in simulation and on two real-world datasets. While we maintain the computational advantages of an exactly sparse representation, the results show convincingly that our method yields conservative estimates for the robot pose and map that are nearly identical to those of the original Gaussian distribution as produced by the EKF, but at much less computational expense.
    The thesis concludes with an extension of our SLAM filter to a complex underwater environment. We describe a systems-level framework for localization and mapping relative to a ship hull with an Autonomous Underwater Vehicle (AUV) equipped with a forward-looking sonar. The approach utilizes our filter to fuse measurements of vehicle attitude and motion from onboard sensors with data from sonar images of the hull. We employ the system to perform three-dimensional, 6-DOF SLAM on a ship hull.
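    The appeal of the canonical (information) form discussed above can be seen in the generic measurement update below (a standard linear information-filter update, shown only as background; it is not the exactly sparse filter the thesis derives): the update adds terms only in the rows and columns of the states the measurement involves, which is why sparsity of the information matrix translates directly into computational savings.

        # Generic information-form measurement update for z = H x + v, v ~ N(0, R):
        #   Lambda <- Lambda + H^T R^-1 H,   eta <- eta + H^T R^-1 z
        # Only the blocks touched by H change, so local measurements keep the
        # information matrix sparse. Recovering the mean requires solving
        # Lambda mu = eta, which is where sparsity pays off.
        import numpy as np

        def information_update(Lambda, eta, H, R, z):
            Rinv = np.linalg.inv(R)
            return Lambda + H.T @ Rinv @ H, eta + H.T @ Rinv @ z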

    Toward AUV Survey Design for Optimal Coverage and Localization Using the Cramer Rao Lower Bound

    This paper discusses an approach to using the Cramer Rao Lower Bound (CRLB) as a trajectory design tool for autonomous underwater vehicle (AUV) visual navigation. We begin with a discussion of Fisher information as a measure of the lower bound of uncertainty in a simultaneous localization and mapping (SLAM) pose-graph. Treating the AUV trajectory as a non-random parameter, the Fisher information is calculated from the CRLB derivation and depends only upon path geometry and sensor noise. The effects of the trajectory design parameters are evaluated by calculating the CRLB with different parameter sets. Next, optimal survey parameters are selected to improve the overall coverage rate while maintaining an acceptable level of localization precision for a fixed number of pose samples. The utility of the CRLB as a design tool in pre-planning an AUV survey is demonstrated using a synthetic data set for a boustrophedon survey. In this demonstration, we compare the CRLB of the improved survey plan with that of an actual previous hull-inspection survey plan of the USS Saratoga. Survey optimality is evaluated by measuring the overall coverage area and CRLB localization precision for a fixed number of nodes in the graph. We also examine how to exploit prior knowledge of environmental feature distribution in the survey plan.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/86049/1/akim-10.pd
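    The Fisher-information computation at the core of this approach can be sketched as follows (function names and the anchoring assumption are ours): stack each measurement's Jacobian and noise covariance into the information matrix for a candidate survey geometry, then invert it to obtain the CRLB on the pose estimates.

        # Sketch: CRLB of a pose graph from its Fisher information,
        # I = sum_k J_k^T Sigma_k^-1 J_k over all measurements/constraints.
        import numpy as np

        def fisher_information(jacobians, covariances, state_dim):
            info = np.zeros((state_dim, state_dim))
            for J, Sigma in zip(jacobians, covariances):
                info += J.T @ np.linalg.inv(Sigma) @ J
            return info

        def crlb(jacobians, covariances, state_dim):
            """Lower bound on the estimator covariance; assumes the information
            matrix is invertible (e.g. one pose is anchored with a prior)."""
            return np.linalg.inv(fisher_information(jacobians, covariances, state_dim))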

    Advanced perception, navigation and planning for autonomous in-water ship hull inspection

    Inspection of ship hulls and marine structures using autonomous underwater vehicles has emerged as a unique and challenging application of robotics. The problem poses rich questions in physical design and operation, perception and navigation, and planning, driven by difficulties arising from the acoustic environment, poor water quality and the highly complex structures to be inspected. In this paper, we develop and apply algorithms for the central navigation and planning problems on ship hulls. These divide into two classes, suitable for the open, forward parts of a typical monohull, and for the complex areas around the shafting, propellers and rudders. On the open hull, we have integrated acoustic and visual mapping processes to achieve closed-loop control relative to features such as weld lines and biofouling. In the complex area, we implemented new large-scale planning routines so as to achieve full imaging coverage of all the structures at a high resolution. We demonstrate our approaches in recent operations on naval ships.
    United States. Office of Naval Research (Grant N00014-06-10043). United States. Office of Naval Research (Grant N00014-07-1-0791).
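    On the open hull, the coverage/precision trade-off such planning manages can be pictured with a toy boustrophedon track generator (a schematic illustration with assumed names, not the planner developed in the paper, which must also cover complex 3D structure at the stern): track spacing follows from the sensor swath and the desired overlap.

        # Toy lawnmower (boustrophedon) waypoint generator over a flat rectangle.
        def lawnmower_waypoints(width, height, swath, overlap=0.2):
            spacing = swath * (1.0 - overlap)   # track spacing from swath and overlap
            waypoints, x, leg = [], 0.0, 0
            while x <= width:
                y0, y1 = (0.0, height) if leg % 2 == 0 else (height, 0.0)
                waypoints += [(x, y0), (x, y1)]
                x += spacing
                leg += 1
            return waypoints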

    Vision-based navigation for autonomous underwater vehicles

    This thesis investigates the use of vision sensors in Autonomous Underwater Vehicle (AUV) navigation, which is typically performed using a combination of dead-reckoning and external acoustic positioning systems. Traditional dead-reckoning sensors such as Doppler Velocity Logs (DVLs) or inertial systems are expensive and result in drifting trajectory estimates. Acoustic positioning systems can be used to correct dead-reckoning drift; however, they are time-consuming to deploy and have a limited range of operation. Occlusion and multipath problems may also occur when a vehicle operates near the seafloor, particularly in environments such as reefs, ridges and canyons, which are the focus of many AUV applications. Vision-based navigation approaches have the potential to improve the availability and performance of AUVs in a wide range of applications. Visual odometry may replace expensive dead-reckoning sensors in small and low-cost vehicles. Using onboard cameras to correct dead-reckoning drift will allow AUVs to navigate accurately over long distances, without the limitations of acoustic positioning systems. This thesis contains three principal contributions. The first is an algorithm to estimate the trajectory of a vehicle by fusing observations from sonar and monocular vision sensors. The second is a stereo-vision motion estimation approach that can be used on its own to provide odometry estimation, or fused with additional sensors in a Simultaneous Localisation And Mapping (SLAM) framework. The third is an efficient SLAM algorithm that uses visual observations to correct drifting trajectory estimates. Results of this work are presented in simulation and using data collected during several deployments of underwater vehicles in coral reef environments. Trajectory estimation is demonstrated for short transects using the sonar and vision fusion and stereo-vision approaches. Navigation over several kilometres is demonstrated using the SLAM algorithm, where stereo-vision is shown to improve the estimated trajectory produced by a DVL.
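    The stereo-vision odometry idea can be sketched in two small steps (an illustration under the usual pinhole-stereo assumptions; the names and calibration parameters are placeholders, not the thesis implementation): triangulate matched features from disparity in each stereo pair, then chain the frame-to-frame motion estimated from those 3D points into a trajectory; without loop-closure corrections from the SLAM back end this dead-reckoned chain will drift.

        # Sketch of the stereo visual-odometry building blocks: disparity-based
        # triangulation and composition of frame-to-frame motion increments.
        import numpy as np

        def triangulate(u_left, u_right, v, fx, fy, cx, cy, baseline):
            """Pinhole stereo: disparity -> depth -> 3D points in the camera frame."""
            disparity = u_left - u_right
            z = fx * baseline / disparity
            x = (u_left - cx) * z / fx
            y = (v - cy) * z / fy
            return np.stack([x, y, z], axis=-1)

        def compose(pose_world_prev, R_rel, t_rel):
            """Chain a relative motion (R_rel, t_rel) onto the previous world pose."""
            R_wp, t_wp = pose_world_prev
            return R_wp @ R_rel, R_wp @ t_rel + t_wp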