An Underwater SLAM System using Sonar, Visual, Inertial, and Depth Sensor
This paper presents a novel tightly-coupled keyframe-based Simultaneous
Localization and Mapping (SLAM) system with loop-closing and relocalization
capabilities targeted for the underwater domain. Our previous work, SVIn,
augmented the state-of-the-art visual-inertial state estimation package OKVIS
to accommodate acoustic data from sonar in a non-linear optimization-based
framework. This paper addresses drift and loss of localization -- one of the
main problems affecting other packages in the underwater domain -- by providing the
following main contributions: a robust initialization method to refine scale
using depth measurements, a fast preprocessing step to enhance the image
quality, and a real-time loop-closing and relocalization method using bag of
words (BoW). An additional contribution is the integration of depth measurements
from a pressure sensor into the tightly-coupled optimization formulation.
Experimental results on datasets collected with a custom-made underwater sensor
suite and an autonomous underwater vehicle from challenging underwater
environments with poor visibility demonstrate accuracy and robustness never
achieved before.
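The benefit of adding absolute depth measurements to a tightly-coupled optimization can be illustrated with a toy least-squares fusion. The 1-D setup and all numbers below are invented for illustration; this is not the paper's actual formulation, only the general idea of combining drift-prone relative constraints with absolute pressure-sensor depths in one cost function:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 1-D example: estimate vehicle depths z_0..z_4 from
# biased odometry increments (relative, drift-prone) and absolute
# pressure-sensor depths, fused in a single least-squares problem.
true_z = np.array([0.0, 1.0, 2.5, 3.0, 2.0])
odom = np.diff(true_z) + 0.1                       # each increment biased by 0.1 m
depth_meas = true_z + np.array([0.0, -0.05, 0.02, 0.0, 0.03])  # small sensor noise

def residuals(z):
    r_odom = (z[1:] - z[:-1]) - odom               # relative motion constraints
    r_depth = (z - depth_meas) / 0.05              # absolute constraints, weighted
    return np.concatenate([r_odom, r_depth])

z_hat = least_squares(residuals, np.zeros(5)).x
```

Integrating the odometry alone accumulates 0.4 m of error by the last pose, while the fused estimate stays within a few centimeters of the true depth, which is the qualitative effect of tightly coupling an absolute sensor.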
A new 3-D modelling method to extract subtransect dimensions from underwater videos
Underwater video transects have become a common tool for quantitative analysis of the seafloor. However, a major difficulty remains in the accurate determination of the area surveyed, as underwater navigation can be unreliable and image scaling does not always compensate for distortions due to perspective and topography. Depending on the camera set-up and available instruments, different methods of surface measurement are applied, which makes it difficult to compare data obtained by different vehicles. 3-D modelling of the seafloor based on 2-D video data and a reference scale can be used to compute subtransect dimensions. Focussing on the length of the subtransect, the data obtained from 3-D models created with the software PhotoModeler Scanner are compared with those determined from underwater acoustic positioning (ultra-short baseline, USBL) and bottom tracking (Doppler velocity log, DVL). 3-D model building and scaling were successfully conducted on all three tested set-ups, and the distortion of the reference scales due to substrate roughness was identified as the main source of imprecision. Acoustic positioning was generally inaccurate and bottom tracking unreliable on rough terrain. Subtransect lengths assessed with PhotoModeler were on average 20% longer than those derived from acoustic positioning, due to the higher spatial resolution and the inclusion of slope. On a high-relief wall, bottom tracking and 3-D modelling yielded similar results. At present, 3-D modelling is the most powerful, albeit the most time-consuming, method for accurate determination of video subtransect dimensions.
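The slope effect described above (model-derived lengths exceeding planar positioning because the vertical component is included) can be sketched numerically. The seafloor profile below is hypothetical, not data from the study:

```python
import numpy as np

# Toy illustration: a sloped seafloor profile is longer along its surface
# than the horizontal track a planar positioning system would report.
x = np.linspace(0.0, 10.0, 6)                   # along-track horizontal (m)
z = np.array([0.0, 0.5, 1.5, 1.0, 2.0, 3.0])    # hypothetical relief (m)

horizontal_len = x[-1] - x[0]                   # what horizontal-only positioning sees
surface_len = np.sum(np.hypot(np.diff(x), np.diff(z)))  # segment sums over the 3-D profile

excess = (surface_len / horizontal_len - 1.0) * 100.0   # percent longer
```

On this invented profile the surface length comes out roughly 8% longer than the horizontal track; rougher, steeper terrain increases the discrepancy, consistent with the 20% average the study reports.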
Toward autonomous exploration in confined underwater environments
Author Posting. © The Author(s), 2015. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 33 (2016): 994-1012, doi:10.1002/rob.21640.
In this field note we detail the operations and discuss the results of an experiment conducted in the unstructured environment of an underwater cave complex, using an autonomous underwater vehicle (AUV). For this experiment the AUV was equipped with two acoustic sonars to simultaneously map the caves' horizontal and vertical surfaces. Although the caves' spatial complexity required AUV guidance by a diver, this field deployment successfully demonstrates a scan matching algorithm in a simultaneous localization and mapping (SLAM) framework that significantly reduces and bounds the localization error for fully autonomous navigation. These methods are generalizable for AUV exploration in confined underwater environments where surfacing or pre-deployment of localization equipment is not feasible, and may provide a useful step toward AUV utilization as a response tool in confined underwater disaster areas.
This research work was partially sponsored by the EU FP7 projects Tecniospring-Marie Curie (TECSPR13-1-0052), MORPH (FP7-ICT-2011-7-288704), and Eurofleets2 (FP7-INF-2012-312762), and by the National Science Foundation (OCE-0955674).
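At the core of a scan matching step like the one described above is estimating the rigid transform between two overlapping scans. A minimal closed-form sketch with synthetic 2-D points, using the standard SVD (Kabsch) solution as a stand-in for the paper's actual algorithm:

```python
import numpy as np

# Synthetic scan alignment: recover the rotation and translation between
# two 2-D point scans of the same surface via the closed-form SVD solution.
rng = np.random.default_rng(0)
scan_a = rng.uniform(0.0, 5.0, size=(40, 2))         # reference scan

theta = 0.2                                          # true relative rotation (rad)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -0.5])                       # true relative translation
scan_b = scan_a @ R_true.T + t_true                  # second scan of the same points

# Kabsch: centre both scans, then SVD of the cross-covariance matrix.
ca, cb = scan_a.mean(axis=0), scan_b.mean(axis=0)
H = (scan_a - ca).T @ (scan_b - cb)
U, _, Vt = np.linalg.svd(H)
R_est = Vt.T @ U.T
if np.linalg.det(R_est) < 0:                         # guard against reflections
    Vt[-1] *= -1
    R_est = Vt.T @ U.T
t_est = cb - R_est @ ca
```

With noiseless correspondences the transform is recovered exactly; real sonar scan matching must additionally establish correspondences and reject outliers, which is where most of the difficulty lies.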
Visually Augmented Navigation for Autonomous Underwater Vehicles
As autonomous underwater vehicles (AUVs) are becoming routinely used in an exploratory context for ocean science, the goal of visually augmented navigation (VAN) is to improve the near-seafloor navigation precision of such vehicles without imposing the burden of having to deploy additional infrastructure. This is in contrast to traditional acoustic long baseline navigation techniques, which require the deployment, calibration, and eventual recovery of a transponder network. To achieve this goal, VAN is formulated within a vision-based simultaneous localization and mapping (SLAM) framework that exploits the systems-level complementary aspects of a camera and strap-down sensor suite. The result is an environmentally based navigation technique robust to the peculiarities of low-overlap underwater imagery. The method employs a view-based representation where camera-derived relative-pose measurements provide spatial constraints, which enforce trajectory consistency and also serve as a mechanism for loop closure, allowing for error growth to be independent of time for revisited imagery. This article outlines the multisensor VAN framework and demonstrates it to have compelling advantages over a purely vision-only approach by: 1) improving the robustness of low-overlap underwater image registration; 2) setting the free gauge scale; and 3) allowing for a disconnected camera-constraint topology.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/86054/1/reustice-16.pd
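The claim that loop closure makes error growth independent of time for revisited imagery can be sketched with a toy 1-D pose graph. The measurements and weights below are invented for illustration and are far simpler than the paper's view-based framework:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical pose graph: odometry alone accumulates drift, but a single
# relative-pose constraint back to a revisited place redistributes the
# accumulated error over the whole trajectory.
true_x = np.array([0.0, 2.0, 4.0, 2.0, 0.0])        # vehicle returns to start
odom = np.diff(true_x) + 0.2                         # each increment biased by 0.2 m

def residuals(x):
    r_odom = (x[1:] - x[:-1]) - odom                 # sequential constraints
    r_loop = np.array([x[4] - x[0]])                 # camera-derived loop closure
    r_gauge = np.array([x[0]])                       # fix the free gauge
    return np.concatenate([r_odom, r_loop * 10.0, r_gauge * 10.0])

x_hat = least_squares(residuals, np.zeros(5)).x
```

Dead reckoning ends 0.8 m from the true final pose, whereas the optimized trajectory spreads that error across all four increments and lands within millimeters, illustrating why revisits bound the error rather than letting it grow.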
Large Area 3D Reconstructions from Underwater Surveys
Robotic underwater vehicles can perform vast optical surveys of the ocean floor. Scientists value these surveys since optical images offer high levels of information and are easily interpreted by humans. Unfortunately, the coverage of a single image is limited by absorption and backscatter, while what is needed is an overall view of the survey area. Recent work on underwater mosaics assumes planar scenes and is applicable only to situations without much relief.
We present a complete and validated system for processing optical images acquired from an underwater robotic vehicle to form a 3D reconstruction of the ocean floor. Our approach is designed for the most general conditions of wide-baseline imagery (low overlap and presence of significant 3D structure) and scales to hundreds of images. We only assume a calibrated camera system and a vehicle with uncertain and possibly drifting pose information (e.g. a compass, depth sensor, and a Doppler velocity log).
Our approach is based on a combination of techniques from computer vision, photogrammetry, and robotics. We use a local-to-global approach to structure from motion, aided by the navigation sensors on the vehicle, to generate 3D submaps. These submaps are then placed in a common reference frame that is refined by matching overlapping submaps. The final stage of processing is a bundle adjustment that provides the 3D structure, camera poses, and uncertainty estimates in a consistent reference frame.
We present results with ground truth for structure as well as results from an oceanographic survey over a coral reef covering an area of approximately one hundred square meters.
Peer Reviewed
http://deepblue.lib.umich.edu/bitstream/2027.42/86037/1/opizarro-33.pd
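The local-to-global step (submaps placed in a common frame, then refined by matching overlaps) can be sketched as a small least-squares problem. This translation-only toy with synthetic offsets is much simpler than the paper's full pipeline, which also refines rotations and structure in a final bundle adjustment:

```python
import numpy as np
from scipy.optimize import least_squares

# Four submaps, each with a 2-D global offset: initialise from drifting
# navigation, then refine so overlapping submaps agree on relative placement.
true_off = np.array([[0.0, 0.0], [10.0, 1.0], [20.0, 0.5], [28.0, -1.0]])
nav_init = true_off + np.array([[0.0, 0.0], [0.5, -0.3], [1.0, 0.4], [1.5, 0.8]])

# Relative offsets measured by matching overlapping submaps (i -> j),
# including one match between the first and last submaps.
pairs = [(0, 1), (1, 2), (2, 3), (0, 3)]
rel_meas = [true_off[j] - true_off[i] for i, j in pairs]

def residuals(p):
    off = p.reshape(4, 2)
    r = [off[j] - off[i] - m for (i, j), m in zip(pairs, rel_meas)]
    r.append(off[0])                                 # anchor the first submap
    return np.concatenate(r)

off_hat = least_squares(residuals, nav_init.ravel()).x.reshape(4, 2)
```

Starting from the drifted navigation guesses, the refinement recovers a consistent set of offsets; with noisy real matches the same machinery produces the maximum-likelihood placement instead of an exact one.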