694 research outputs found

    Vision-based navigation for autonomous underwater vehicles

    This thesis investigates the use of vision sensors in Autonomous Underwater Vehicle (AUV) navigation, which is typically performed using a combination of dead-reckoning and external acoustic positioning systems. Traditional dead-reckoning sensors such as Doppler Velocity Logs (DVLs) or inertial systems are expensive and result in drifting trajectory estimates. Acoustic positioning systems can be used to correct dead-reckoning drift; however, they are time consuming to deploy and have a limited range of operation. Occlusion and multipath problems may also occur when a vehicle operates near the seafloor, particularly in environments such as reefs, ridges and canyons, which are the focus of many AUV applications. Vision-based navigation approaches have the potential to improve the availability and performance of AUVs in a wide range of applications. Visual odometry may replace expensive dead-reckoning sensors in small and low-cost vehicles. Using onboard cameras to correct dead-reckoning drift will allow AUVs to navigate accurately over long distances, without the limitations of acoustic positioning systems. This thesis contains three principal contributions. The first is an algorithm to estimate the trajectory of a vehicle by fusing observations from sonar and monocular vision sensors. The second is a stereo-vision motion estimation approach that can be used on its own to provide odometry estimation, or fused with additional sensors in a Simultaneous Localisation And Mapping (SLAM) framework. The third is an efficient SLAM algorithm that uses visual observations to correct drifting trajectory estimates. Results of this work are presented in simulation and using data collected during several deployments of underwater vehicles in coral reef environments. Trajectory estimation is demonstrated for short transects using the sonar and vision fusion and stereo-vision approaches. Navigation over several kilometres is demonstrated using the SLAM algorithm, where stereo-vision is shown to improve the estimated trajectory produced by a DVL.
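    The stereo-vision motion estimation contribution above amounts to recovering the rigid transform between point clouds triangulated from consecutive stereo frames. A minimal sketch of the underlying least-squares alignment step, assuming the standard Kabsch/Umeyama method (the synthetic point cloud is purely illustrative, not thesis data):

    ```python
    import numpy as np

    def estimate_rigid_motion(P, Q):
        """Least-squares rigid transform (R, t) with Q ≈ R @ P + t (Kabsch/Umeyama)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against a reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cq - R @ cp
        return R, t

    # Synthetic check: rotate and translate a "triangulated feature cloud",
    # then recover the inter-frame camera motion from the two point sets.
    rng = np.random.default_rng(0)
    P = rng.standard_normal((50, 3))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    t_true = np.array([0.5, -0.2, 1.0])
    Q = P @ R_true.T + t_true
    R_est, t_est = estimate_rigid_motion(P, Q)
    ```

    In practice the correspondences come from matched image features and the solve is wrapped in an outlier-rejection loop such as RANSAC; the closed-form step above is only the inner alignment.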

    Toward autonomous exploration in confined underwater environments

    Author Posting. © The Author(s), 2015. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 33 (2016): 994-1012, doi:10.1002/rob.21640.
    In this field note we detail the operations and discuss the results of an experiment conducted in the unstructured environment of an underwater cave complex, using an autonomous underwater vehicle (AUV). For this experiment the AUV was equipped with two acoustic sonars to simultaneously map the caves’ horizontal and vertical surfaces. Although the caves’ spatial complexity required AUV guidance by a diver, this field deployment successfully demonstrates a scan matching algorithm in a simultaneous localization and mapping (SLAM) framework that significantly reduces and bounds the localization error for fully autonomous navigation. These methods are generalizable for AUV exploration in confined underwater environments where surfacing or pre-deployment of localization equipment is not feasible, and may provide a useful step toward AUV utilization as a response tool in confined underwater disaster areas.
    This research work was partially sponsored by the EU FP7 projects Tecniospring-Marie Curie (TECSPR13-1-0052), MORPH (FP7-ICT-2011-7-288704) and Eurofleets2 (FP7-INF-2012-312762), and by the National Science Foundation (OCE-0955674).
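    The scan matching step at the core of such a SLAM framework can be illustrated with a minimal point-to-point ICP in 2D, assuming brute-force nearest-neighbour correspondences and a synthetic sonar-like scan (this is a toy sketch, not the paper's algorithm):

    ```python
    import numpy as np

    def icp_2d(src, dst, iters=50):
        """Minimal 2D point-to-point ICP aligning scan `src` onto scan `dst`."""
        cur = src.copy()
        T = np.eye(3)
        for _ in range(iters):
            # 1. nearest-neighbour correspondences (brute force)
            d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
            matched = dst[d2.argmin(axis=1)]
            # 2. closed-form rigid alignment of the matched pairs (SVD)
            cs, cm = cur.mean(axis=0), matched.mean(axis=0)
            U, _, Vt = np.linalg.svd((cur - cs).T @ (matched - cm))
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against a reflection
                Vt[-1] *= -1
                R = Vt.T @ U.T
            t = cm - R @ cs
            cur = cur @ R.T + t
            step = np.eye(3); step[:2, :2] = R; step[:2, 2] = t
            T = step @ T                      # accumulate the estimated motion
        return T, cur

    # Synthetic "cave wall" scan, displaced by a small rigid motion.
    x = np.linspace(0.0, 5.0, 80)
    src = np.column_stack([x, np.sin(x)])
    a = 0.05
    R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    dst = src @ R_true.T + np.array([0.1, -0.05])
    T, aligned = icp_2d(src, dst)
    ```

    In a pose-graph SLAM setting, each converged scan match contributes a relative-pose constraint (the accumulated `T`) between the two poses at which the scans were taken.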

    Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.

    This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
    We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of: a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.
    PhD, Electrical Engineering: Systems. University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
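    The planar-feature representation described above reduces each smooth surface patch of the sparse point cloud to a fitted plane. A minimal sketch of the per-patch fit, assuming a standard total-least-squares plane fit via SVD (the sparse "DVL returns" below are synthetic):

    ```python
    import numpy as np

    def fit_plane(points):
        """Total-least-squares plane fit n·x = d to a 3D point cloud (SVD)."""
        c = points.mean(axis=0)
        _, _, Vt = np.linalg.svd(points - c)
        n = Vt[-1]                 # unit normal = direction of least variance
        return n, float(n @ c)     # (normal, signed offset d)

    # Synthetic sparse range returns lying near the plane z = 2.
    rng = np.random.default_rng(1)
    xy = rng.uniform(-1.0, 1.0, size=(40, 2))
    z = np.full(40, 2.0) + 1e-3 * rng.standard_normal(40)
    pts = np.column_stack([xy, z])
    n, d = fit_plane(pts)
    ```

    In the factor-graph formulation, each such plane becomes a variable node, and the sparse range observations become factors constraining both the plane parameters and the vehicle poses from which they were observed.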

    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005.
    This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix.
    In summary, this thesis advances the current state-of-the-art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception.
    This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense.
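    The last contribution above, extracting consistent marginal covariances from the information (canonical) form, can be sketched as solving for only the needed columns of the covariance rather than forming the full dense inverse. This is an illustrative toy under that assumption, not the thesis algorithm:

    ```python
    import numpy as np

    def marginal_cov(Lam, idx):
        """Marginal covariance block for state indices `idx`, recovered from
        the information matrix Lam by solving Lam @ X = E for just the
        requested columns instead of inverting Lam densely."""
        k = len(idx)
        E = np.zeros((Lam.shape[0], k))
        E[idx, range(k)] = 1.0           # unit vectors selecting the columns
        X = np.linalg.solve(Lam, E)      # the requested covariance columns
        return X[idx, :]                 # the k-by-k marginal block

    # Toy symmetric positive-definite information matrix for a 6-dim state.
    rng = np.random.default_rng(2)
    A = rng.standard_normal((6, 6))
    Lam = A @ A.T + 6.0 * np.eye(6)
    idx = [2, 3]
    Sig_block = marginal_cov(Lam, idx)
    ```

    With a sparse factorization of `Lam` (the situation the thesis exploits), each column solve is cheap, which is what makes covariance recovery for data association tractable on large view-based maps.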

    Active SLAM for autonomous underwater exploration

    Exploration of a complex underwater environment without an a priori map is beyond the state of the art for autonomous underwater vehicles (AUVs). Despite several efforts regarding simultaneous localization and mapping (SLAM) and view planning, there is no exploration framework, tailored to underwater vehicles, that faces exploration combining mapping, active localization, and view planning in a unified way. We propose an exploration framework, based on an active SLAM strategy, that combines three main elements: a view planner, an iterative closest point (ICP)-based pose-graph SLAM algorithm, and an action selection mechanism that makes use of the joint map and state entropy reduction. To demonstrate the benefits of the active SLAM strategy, several tests were conducted with the Girona 500 AUV, both in simulation and in the real world. The article shows how the proposed framework makes it possible to plan exploratory trajectories that keep the vehicle’s uncertainty bounded, thus creating more consistent maps.
    Peer Reviewed. Postprint (published version).
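    The entropy-based action selection above can be illustrated for Gaussian state estimates, where joint entropy depends only on the covariance determinant. The candidate "actions" below are hypothetical information-gain matrices standing in for predicted measurement updates; they are not the paper's planner:

    ```python
    import numpy as np

    def gaussian_entropy(Sigma):
        """Differential entropy of a multivariate Gaussian N(mu, Sigma)."""
        n = Sigma.shape[0]
        return 0.5 * (n * np.log(2.0 * np.pi * np.e)
                      + np.linalg.slogdet(Sigma)[1])

    def select_action(Sigma, candidate_gains):
        """Choose the candidate whose predicted update (an information-gain
        matrix added in information form) minimizes posterior joint entropy."""
        Lam = np.linalg.inv(Sigma)
        post = [gaussian_entropy(np.linalg.inv(Lam + G))
                for G in candidate_gains]
        return int(np.argmin(post))

    # Toy choice: revisiting a well-mapped area (large expected gain)
    # versus pushing into open water (small expected gain).
    Sigma = np.eye(3)
    gains = [0.1 * np.eye(3), 5.0 * np.eye(3)]
    best = select_action(Sigma, gains)
    ```

    A real planner trades this entropy reduction off against coverage and travel cost; the sketch only shows the uncertainty term of that objective.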

    Localization, Mapping and SLAM in Marine and Underwater Environments

    The use of robots in marine and underwater applications is growing rapidly. These applications share the common requirement of modeling the environment and estimating the robots’ pose. Although there are several mapping, SLAM, target detection and localization methods, marine and underwater environments have several challenging characteristics, such as poor visibility, water currents, communication issues, sonar inaccuracies or unstructured environments, that have to be considered. The purpose of this Special Issue is to present the current research trends in the topics of underwater localization, mapping, SLAM, and target detection and localization. To this end, we have collected seven articles from leading researchers in the field, and present the different approaches and methods currently being investigated to improve the performance of underwater robots.

    An Overview of AUV Algorithms Research and Testbed at the University of Michigan

    This paper provides a general overview of the autonomous underwater vehicle (AUV) research projects being pursued within the Perceptual Robotics Laboratory (PeRL) at the University of Michigan. Founded in 2007, PeRL's research thrust is centered around improving AUV autonomy via algorithmic advancements in sensor-driven perceptual feedback for environmentally-based real-time mapping, navigation, and control. In this paper we discuss our three major research areas of: (1) real-time visual simultaneous localization and mapping (SLAM); (2) cooperative multi-vehicle navigation; and (3) perception-driven control. Pursuant to these research objectives, PeRL has acquired and significantly modified two commercial off-the-shelf (COTS) Ocean-Server Technology, Inc. Iver2 AUV platforms to serve as a real-world engineering testbed for algorithm development and validation. Details of the design modification, and related research enabled by this integration effort, are discussed herein.
    Peer Reviewed.
    http://deepblue.lib.umich.edu/bitstream/2027.42/86058/1/reustice-15.pd

    Towards Autonomous Ship Hull Inspection using the Bluefin HAUV

    URL is to paper listed on conference schedule.
    In this paper we describe our effort to automate ship hull inspection for security applications. Our main contribution is a system that is capable of drift-free self-localization on a ship hull for extended periods of time. Maintaining accurate localization for the duration of a mission is important for navigation and for ensuring full coverage of the area to be inspected. We exclusively use onboard sensors, including an imaging sonar, to correct for drift in the vehicle’s navigation sensors. We present preliminary results from online experiments on a ship hull. We further describe ongoing work, including adding capabilities for change detection by aligning vehicle trajectories of different missions based on a technique recently developed in our lab.
    United States. Office of Naval Research (grant N00014-06-10043).