Omnidirectional underwater surveying and telepresence
Exploratory dives are traditionally the first step for marine scientists to
acquire information on a previously unknown area of scientific interest. Manned
submersibles have been the platform of choice for such exploration, as they allow
a high level of environmental perception by the scientist on-board, and the ability
to take informed decisions on what to explore next. However, manned submersibles
have extremely high operation costs and provide very limited bottom time. Remotely
operated vehicles (ROVs) can partially address these two issues, but have operational
and cost constraints that restrict their usage.
This paper discusses new capabilities to assist scientists operating lightweight hybrid
remotely operated vehicles (HROV) in exploratory missions of mapping and
surveying. The new capabilities, under development within the Spanish National
project OMNIUS, provide a new layer of autonomy for HROVs by exploring three key
concepts: Omni-directional optical sensing for collaborative immersive exploration,
Proximity safety awareness, and Online mapping during mission time.
Peer Reviewed
Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging.
This thesis reports on the incorporation of surface information into a probabilistic simultaneous localization and mapping (SLAM) framework used on an autonomous underwater vehicle (AUV) designed for underwater inspection. AUVs operating in cluttered underwater environments, such as ship hulls or dams, are commonly equipped with Doppler-based sensors, which---in addition to navigation---provide a sparse representation of the environment in the form of a three-dimensional (3D) point cloud. The goal of this thesis is to develop perceptual algorithms that take full advantage of these sparse observations for correcting navigational drift and building a model of the environment. In particular, we focus on three objectives. First, we introduce a novel representation of this 3D point cloud as collections of planar features arranged in a factor graph. This factor graph representation probabilistically infers the spatial arrangement of each planar segment and can effectively model smooth surfaces (such as a ship hull). Second, we show how this technique can produce 3D models that serve as input to our pipeline that produces the first-ever 3D photomosaics using a two-dimensional (2D) imaging sonar. Finally, we propose a model-assisted bundle adjustment (BA) framework that allows for robust registration between surfaces observed from a Doppler sensor and visual features detected from optical images. Throughout this thesis, we show methods that produce 3D photomosaics using a combination of triangular meshes (derived from our SLAM framework or given a priori), optical images, and sonar images. Overall, the contributions of this thesis greatly increase the accuracy, reliability, and utility of in-water ship hull inspection with AUVs despite the challenges they face in underwater environments.
We provide results using the Hovering Autonomous Underwater Vehicle (HAUV) for autonomous ship hull inspection, which serves as the primary testbed for the algorithms presented in this thesis. The sensor payload of the HAUV consists primarily of: a Doppler velocity log (DVL) for underwater navigation and ranging, monocular and stereo cameras, and---for some applications---an imaging sonar.
PhD, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/120750/1/paulozog_1.pd
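The planar-feature representation described above starts from fitting planes to sparse DVL-style point clouds. A minimal sketch of that first step (hypothetical code, not the thesis implementation) fits a plane by total least squares, taking the normal as the direction of least variance of the centered points:

```python
import numpy as np

def fit_plane(points):
    """Fit a plane (unit normal n, offset d with n.x = d) to Nx3 points:
    the normal is the right singular vector of the centered cloud
    associated with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    d = float(normal @ centroid)         # plane offset along the normal
    return normal, d

# Noisy samples of the plane z = 0.5 (true normal [0, 0, 1])
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 + 0.01 * rng.standard_normal(200)
pts = np.column_stack([xy, z])

n, d = fit_plane(pts)
if n[2] < 0:                             # resolve the sign ambiguity
    n, d = -n, -d
print(np.round(n, 2), round(d, 2))       # normal ~ [0, 0, 1], offset ~ 0.5
```

In the thesis's framework, each such fitted segment would then enter the factor graph as a planar feature whose spatial arrangement is inferred jointly with the vehicle trajectory.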
Underwater inspection using sonar-based volumetric submaps
We propose a submap-based technique for mapping of underwater structures with complex geometries. Our approach relies on the use of probabilistic volumetric techniques to create submaps from multibeam sonar scans, as these offer increased outlier robustness. Special attention is paid to the problem of denoising/enhancing sonar data. Pairwise submap alignment constraints are used in a factor graph framework to correct for navigation drift and improve map accuracy. We provide experimental results obtained from the inspection of the running gear and bulbous bow of a 600-foot, Wright-class supply ship.
United States. Office of Naval Research (N00014-12-1-0093)
United States. Office of Naval Research (N00014-14-1-0373)
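The idea of correcting drift with pairwise alignment constraints can be illustrated with a toy 1D example (a hypothetical sketch, not the paper's implementation): dead-reckoned submap origins accumulate error, and a single submap-alignment constraint pulls the jointly optimized solution back toward the truth when all constraints are solved by least squares:

```python
import numpy as np

# A toy 1D "pose graph": unknown submap origins x1..x3 (x0 fixed at 0).
# Drifting odometry claims each step is 1.1 m; one submap alignment
# constraint ties x3 back near the truth (x3 - x0 = 3.0).
# Each constraint contributes one row to a linear least-squares problem.
A = np.array([
    [ 1.0,  0.0,  0.0],   # x1 - x0 = 1.1  (odometry)
    [-1.0,  1.0,  0.0],   # x2 - x1 = 1.1  (odometry)
    [ 0.0, -1.0,  1.0],   # x3 - x2 = 1.1  (odometry)
    [ 0.0,  0.0,  1.0],   # x3 - x0 = 3.0  (submap alignment)
])
b = np.array([1.1, 1.1, 1.1, 3.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))   # drift pulled back toward [1.0, 2.0, 3.0]
```

Pure dead reckoning would give [1.1, 2.2, 3.3]; the alignment constraint redistributes the error across the chain, which is the same effect a nonlinear factor-graph solver achieves in full 6-DOF.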
Towards Autonomous Ship Hull Inspection using the Bluefin HAUV
URL is to paper listed on conference schedule.
In this paper we describe our effort to automate ship hull inspection for security
applications. Our main contribution is a system that is capable of drift-free
self-localization on a ship hull for extended periods of time. Maintaining accurate
localization for the duration of a mission is important for navigation and for
ensuring full coverage of the area to be inspected. We exclusively use onboard
sensors including an imaging sonar to correct for drift in the vehicle’s navigation
sensors. We present preliminary results from online experiments on a ship hull. We
further describe ongoing work including adding capabilities for change detection
by aligning vehicle trajectories of different missions based on a technique recently
developed in our lab.
United States. Office of Naval Research (grant N00014-06-10043)
Towards High-resolution Imaging from Underwater Vehicles
Large area mapping at high resolution underwater continues to be constrained by sensor-level environmental constraints and the mismatch between available navigation and sensor accuracy. In this paper, advances are presented that exploit aspects of the sensing modality, and consistency and redundancy within local sensor measurements to build high-resolution optical and acoustic maps that are a consistent representation of the environment. This work is presented in the context of real-world data acquired using autonomous underwater vehicles (AUVs) and remotely operated vehicles (ROVs) working in diverse applications including shallow water coral reef surveys with the Seabed AUV, a forensic survey of the RMS Titanic in the North Atlantic at a depth of 4100 m using the Hercules ROV, and a survey of the TAG hydrothermal vent area in the mid-Atlantic at a depth of 3600 m using the Jason II ROV. Specifically, the focus is on the related problems of structure from motion from underwater optical imagery assuming pose instrumented calibrated cameras. General wide baseline solutions are presented for these problems based on the extension of techniques from the simultaneous localization and mapping (SLAM), photogrammetric and the computer vision communities. It is also examined how such techniques can be extended for the very different sensing modality and scale associated with multi-beam bathymetric mapping. For both the optical and acoustic mapping cases it is also shown how the consistency in mapping can be used not only for better global mapping, but also to refine navigation estimates.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/86051/1/hsingh-21.pd
Large Area 3D Reconstructions from Underwater Surveys
Robotic underwater vehicles can perform vast optical
surveys of the ocean floor. Scientists value these surveys since
optical images offer high levels of information and are easily
interpreted by humans. Unfortunately, the coverage of a single
image is limited by absorption and backscatter, while what is
needed is an overall view of the survey area. Recent work on
underwater mosaics assumes planar scenes and is applicable
only to situations without much relief.
We present a complete and validated system for processing
optical images acquired from an underwater robotic vehicle to
form a 3D reconstruction of the ocean floor. Our approach is
designed for the most general conditions of wide-baseline imagery
(low overlap and presence of significant 3D structure) and scales
to hundreds of images. We only assume a calibrated camera
system and a vehicle with uncertain and possibly drifting pose
information (e.g., a compass, depth sensor and a Doppler velocity log).
Our approach is based on a combination of techniques from
computer vision, photogrammetry and robotics. We use a
local-to-global approach to structure from motion, aided by the
navigation sensors on the vehicle to generate 3D submaps. These
submaps are then placed in a common reference frame that
is refined by matching overlapping submaps. The final stage of
processing is a bundle adjustment that provides the 3D structure,
camera poses and uncertainty estimates in a consistent reference
frame.
We present results with ground truth for structure as well as
results from an oceanographic survey over a coral reef covering
an area of approximately one hundred square meters.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/86037/1/opizarro-33.pd
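The local-to-global step above rests on composing rigid-body transforms: each submap is built in its own frame, then placed into a common frame by chaining the relative poses recovered from submap matching. A minimal 2D sketch (the paper works in full 3D; names here are illustrative):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D rigid transform (a stand-in for the full 3D case)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Submap A anchors the common frame; submap B sits 2 m ahead of A
# and rotated 90 degrees, as recovered by matching overlapping submaps.
T_world_A = se2(0.0, 0.0, 0.0)
T_A_B     = se2(2.0, 0.0, np.pi / 2)     # relative pose from submap matching
T_world_B = T_world_A @ T_A_B            # compose into the common frame

# A point observed at (1, 0) in submap B lands at (2, 1) in the world frame.
p_B = np.array([1.0, 0.0, 1.0])          # homogeneous coordinates
p_world = T_world_B @ p_B
print(np.round(p_world[:2], 3))          # [2. 1.]
```

The final bundle adjustment then refines all such transforms (and the 3D structure) jointly, rather than trusting any single pairwise composition.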
Large-area visually augmented navigation for autonomous underwater vehicles
Submitted to the Joint Program in Applied Ocean Science & Engineering
in partial fulfillment of the requirements for the degree of Doctor of Philosophy
at the Massachusetts Institute of Technology
and the Woods Hole Oceanographic Institution
June 2005.
This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that
overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate
the sparsification methodology employed by sparse extended information filters (SEIFs)
and offer new insight as to why, and how, its approximation can lead to inconsistencies in
the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting consistent marginal covariances useful for data association from the information matrix. In summary, this thesis advances the current state of the art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date, using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, six-degree-of-freedom motion, unstructured environments, and visual perception.
This work was funded in part by the CenSSIS ERC of the National Science Foundation
under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a
grant from the Penzance Foundation, and in part by a NDSEG Fellowship awarded through
the Department of Defense.
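The covariance-extraction idea in the thesis can be sketched as follows: in the information (canonical) form, the marginal covariance of one state block is the corresponding block of the inverse information matrix, and it can be recovered by solving a few linear systems against the information matrix instead of forming the full inverse. A hypothetical numpy sketch (not the thesis's algorithm, which exploits the matrix's sparsity):

```python
import numpy as np

# Build a made-up symmetric positive-definite information matrix Lambda
# standing in for the SLAM filter's canonical-form parameterization.
rng = np.random.default_rng(1)
J = rng.standard_normal((12, 6))
Lam = J.T @ J + np.eye(6)

# Marginal covariance of a state block = that block of Sigma = Lam^{-1}.
# Solve Lam X = E for only the selector columns of the block of interest.
idx = [2, 3]                      # e.g., one pose's x, y entries
E = np.eye(6)[:, idx]             # selector columns
cols = np.linalg.solve(Lam, E)    # the corresponding columns of Sigma
marg = cols[idx, :]               # 2x2 marginal covariance block

# Sanity check against the (expensive) full inverse.
assert np.allclose(marg, np.linalg.inv(Lam)[np.ix_(idx, idx)])
```

For an n-state filter this touches only a handful of right-hand sides, which is why consistent marginals for data association can be had without an O(n³) full inversion at every query.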
Mapping Complex Marine Environments with Autonomous Surface Craft
This paper presents a novel marine mapping system using an Autonomous
Surface Craft (ASC). The platform includes an extensive sensor suite for mapping
environments both above and below the water surface. A relatively small hull size
and shallow draft permit operation in cluttered and shallow environments. We address the Simultaneous Localization and Mapping (SLAM) problem for concurrent
mapping above and below the water in large scale marine environments. Our key
algorithmic contributions include: (1) methods to account for degradation of GPS
in close proximity to bridges or foliage canopies and (2) scalable systems for management of large volumes of sensor data to allow for consistent online mapping
under limited physical memory. Experimental results are presented to demonstrate
the approach for mapping selected structures along the Charles River in Boston.
United States. Office of Naval Research (N00014-06-10043)
United States. Office of Naval Research (N00014-05-10244)
United States. Office of Naval Research (N00014-07-11102)
Massachusetts Institute of Technology. Sea Grant College Program (grant 2007-R/RCM-20)
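One standard way to account for degraded GPS near bridges or foliage (a generic chi-square innovation gate, not necessarily the authors' exact method) is to reject fixes whose innovation is implausible under the filter's predicted uncertainty:

```python
import numpy as np

def gps_accept(z, x_pred, P_pred, R, gate=9.21):
    """Chi-square gate on a 2D GPS fix: accept only if the normalized
    innovation squared is below the 99% chi-square threshold (2 dof)."""
    nu = z - x_pred                       # innovation
    S = P_pred + R                        # innovation covariance
    d2 = float(nu @ np.linalg.solve(S, nu))
    return d2 < gate

x_pred = np.array([10.0, 5.0])            # predicted position (m)
P_pred = np.diag([0.5, 0.5])              # prediction covariance
R = np.diag([1.0, 1.0])                   # nominal GPS noise covariance

print(gps_accept(np.array([10.5, 5.3]), x_pred, P_pred, R))  # True
print(gps_accept(np.array([25.0, 5.0]), x_pred, P_pred, R))  # False: multipath-like jump
```

Gated-out fixes leave the estimate to coast on dead reckoning until consistent GPS returns, which is the behavior needed when passing under a bridge.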