Map building fusing acoustic and visual information using autonomous underwater vehicles
Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473.

We present a system for automatically building 3-D maps of underwater terrain fusing
visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom
location of the camera relative to the navigation frame is derived as part of the
mapping process, as are the attitude offsets of the multibeam head and the on-board velocity
sensor. The system uses pose graph optimization and the square root information smoothing
and mapping framework to simultaneously solve for the robot’s trajectory, the map, and
the camera location in the robot’s frame. Matched visual features are treated within the
pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are
used to impose relative pose constraints linking robot poses from distinct tracklines of the
dive trajectory. The navigation and mapping system presented works under a variety of
deployment scenarios, on robots with diverse sensor suites. Results of using the system to
map the structure and appearance of a section of coral reef are presented using data acquired
by the Seabed autonomous underwater vehicle.

The work described herein was funded by the National Science Foundation CenSSIS ERC under grant number EEC-9986821, and by the National Oceanic and Atmospheric Administration under grant number NA090AR4320129.
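The core idea of pose graph optimization, as used in the abstract above, can be illustrated with a deliberately tiny least-squares example. This is a minimal 1-D sketch of our own, not the paper's square root information smoothing and mapping framework: poses are scalars, odometry constraints link consecutive poses, and a single loop-closure constraint redistributes accumulated drift. All measurement values are invented for illustration.

```python
import numpy as np

def solve_pose_graph(n_poses, constraints):
    """Solve a 1-D pose graph by weighted linear least squares.

    constraints: list of (i, j, measured_j_minus_i, weight).
    """
    A, b = [], []
    # Gauge constraint: anchor the first pose at 0.
    row = np.zeros(n_poses)
    row[0] = 1.0
    A.append(row)
    b.append(0.0)
    for i, j, z, w in constraints:
        row = np.zeros(n_poses)
        row[j] = w
        row[i] = -w
        A.append(row)
        b.append(w * z)
    x, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return x

constraints = [
    (0, 1, 1.0, 1.0),   # odometry: x1 - x0 ~ 1.0
    (1, 2, 1.0, 1.0),   # odometry: x2 - x1 ~ 1.0
    (2, 3, 1.0, 1.0),   # odometry: x3 - x2 ~ 1.0
    (0, 3, 2.7, 10.0),  # loop closure: x3 - x0 ~ 2.7 (weighted higher)
]
x = solve_pose_graph(4, constraints)
print(x)  # drift is redistributed; x3 - x0 ends near 2.7
```

In the real system the unknowns are 6-DOF poses, landmarks, and sensor offsets, and the solver exploits the sparse information matrix, but the structure of the problem — stacked weighted constraints solved jointly — is the same.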
3D reconstruction and motion estimation using forward looking sonar
Autonomous Underwater Vehicles (AUVs) are increasingly used in domains
including archaeology, the oil and gas industry, coral reef monitoring, harbour security,
and mine countermeasure missions. Because electromagnetic signals do not penetrate
water, GPS cannot be used for AUV navigation, and optical cameras have a very
short range underwater, which limits their use in most underwater environments.
Motion estimation for AUVs is a critical requirement for successful vehicle recovery
and meaningful data collection. Classical inertial sensors, usually used for AUV motion
estimation, suffer from large drift error, while accurate inertial sensors are
very expensive, which limits their deployment to costly AUVs. Furthermore, acoustic
positioning systems (APS) used for AUV navigation require costly installation and
calibration, and they perform poorly in terms of the inferred resolution.
Underwater 3D imaging is another challenge in the AUV industry, as 3D information is
increasingly demanded to accomplish different AUV missions. Various systems have
been proposed for underwater 3D imaging, such as planar-array sonar and T-configured
3D sonar. While the former offers good resolution in general, it is very expensive and
requires huge computational power; the latter is cheaper to implement but requires a
long time for a full 3D scan, even at short ranges.
In this thesis, we aim to tackle AUV motion estimation and underwater 3D imaging by
proposing relatively affordable methodologies, and we study the different parameters
affecting their performance. We introduce a new motion estimation framework for AUVs
which relies on successive acoustic images to infer AUV ego-motion. We also propose an
Acoustic Stereo Imaging (ASI) system for underwater 3D reconstruction based on
forward-looking sonars; the proposed system is cheaper to implement than
planar-array sonars and solves the delay problem of T-configured 3D sonars.
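One common building block for inferring ego-motion from successive acoustic images is estimating the inter-frame translation by phase correlation. The sketch below is our own illustration of that building block, not the thesis's full framework: the "sonar" frames are synthetic, and motion is assumed to be a pure 2-D pixel translation.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (row, col) shift taking img_b to img_a."""
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    cross = F_a * np.conj(F_b)
    cross /= np.abs(cross) + 1e-12           # keep only the phase
    corr = np.fft.ifft2(cross).real          # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to a signed shift (wrap large indices negative).
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(0)
frame1 = rng.random((64, 64))                         # synthetic sonar frame
frame2 = np.roll(frame1, shift=(3, -5), axis=(0, 1))  # simulated vehicle motion
print(phase_correlation_shift(frame2, frame1))        # recovers (3, -5)
```

Real sonar ego-motion must additionally handle rotation, changing viewpoints, and the low signal-to-noise ratio of acoustic imagery, which is why feature-based or correlation-pyramid schemes are used in practice.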
Large Area 3D Reconstructions from Underwater Surveys
Robotic underwater vehicles can perform vast optical
surveys of the ocean floor. Scientists value these surveys since
optical images offer high levels of information and are easily
interpreted by humans. Unfortunately, the coverage of a single
image is limited by absorption and backscatter, while what is
needed is an overall view of the survey area. Recent work on
underwater mosaics assumes planar scenes and is applicable
only to situations without much relief.
We present a complete and validated system for processing
optical images acquired from an underwater robotic vehicle to
form a 3D reconstruction of the ocean floor. Our approach is
designed for the most general conditions of wide-baseline imagery
(low overlap and presence of significant 3D structure) and scales
to hundreds of images. We only assume a calibrated camera
system and a vehicle with uncertain and possibly drifting pose
information (e.g. a compass, depth sensor, and a Doppler velocity log).
Our approach is based on a combination of techniques from
computer vision, photogrammetry, and robotics. We use a local-to-global
approach to structure from motion, aided by the
navigation sensors on the vehicle, to generate 3D submaps. These
submaps are then placed in a common reference frame that
is refined by matching overlapping submaps. The final stage of
processing is a bundle adjustment that provides the 3D structure,
camera poses, and uncertainty estimates in a consistent reference
frame.
We present results with ground-truth for structure as well as
results from an oceanographic survey over a coral reef covering
an area of approximately one hundred square meters.

Peer Reviewed: http://deepblue.lib.umich.edu/bitstream/2027.42/86037/1/opizarro-33.pd
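The submap-placement step described above ultimately reduces to recovering the rigid transform between matched 3-D points from overlapping submaps. A hedged sketch of that single step, using the standard SVD-based orthogonal Procrustes (Kabsch) solution on synthetic data of our own — the paper's full pipeline with structure from motion and bundle adjustment is far richer than this:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform: find R, t with dst ~ R @ src + t.

    src, dst are (N, 3) arrays of corresponding 3-D points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic test: points from submap A, transformed by a known rotation
# about the z-axis plus a translation, play the role of submap B.
rng = np.random.default_rng(1)
submap_a = rng.random((20, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([2.0, -1.0, 0.5])
submap_b = submap_a @ R_true.T + t_true
R, t = rigid_align(submap_a, submap_b)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In practice the correspondences come from feature matching between overlapping submaps and contain outliers, so this closed-form solution is typically wrapped in a robust estimator such as RANSAC before the global bundle adjustment.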
The Geometry and Usage of the Supplementary Fisheye Lenses in Smartphones
Nowadays, mobile phones are more than devices that merely satisfy the need for communication between people. Fisheye lenses integrated with mobile phones are advantageous because they are lightweight and easy to use. Beyond this advantage, we examine whether a fisheye lens and mobile phone combination can be used photogrammetrically, and if so, with what results. Fisheye lens equipment used with mobile phones was tested in this study. For this, the standard calibration of the 'Olloclip 3 in one' fisheye lens used with an iPhone 4S mobile phone and the 'Nikon FC-E9' fisheye lens used with a Nikon Coolpix 8700 were compared based on the equidistant model. This experimental study shows that the Olloclip 3 in one fisheye lens developed for mobile phones has characteristics at least similar to those of classic fisheye lenses. The dimensions of fisheye lenses used with smartphones are getting smaller and their prices are falling. Moreover, as verified in this study, the accuracy of fisheye lenses used in smartphones is better than that of conventional fisheye lenses. The use of smartphones with fisheye lenses will offer ordinary users practical applications in the near future.
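The equidistant model that the calibration comparison above is based on maps the incidence angle linearly to image radius, r = f·θ, in contrast to the perspective model's r = f·tan θ. A tiny sketch contrasting the two; the focal length is an arbitrary illustrative value, not one of the study's calibration results:

```python
import math

def radius_equidistant(theta_rad, f):
    """Equidistant fisheye model: image radius grows linearly with angle."""
    return f * theta_rad

def radius_perspective(theta_rad, f):
    """Pinhole (perspective) model for comparison."""
    return f * math.tan(theta_rad)

f = 500.0  # focal length in pixels (made-up value for illustration)
for deg in (10, 45, 80):
    th = math.radians(deg)
    print(deg, round(radius_equidistant(th, f), 1),
          round(radius_perspective(th, f), 1))
```

The comparison makes the fisheye advantage obvious: as θ approaches 90° the perspective radius diverges, while the equidistant radius stays bounded, which is what lets fisheye lenses capture fields of view near or beyond 180°.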
Selective visual odometry for accurate AUV localization
In this paper we present a stereo visual odometry system developed for autonomous underwater vehicle localization tasks. The main idea is to make use of only highly reliable data in the estimation process, employing a robust keypoint tracking approach and an effective keyframe selection strategy, so that camera movements are estimated with high accuracy even for long paths. Furthermore, in order to limit the drift error, camera pose estimation is referred to the last keyframe, selected by analyzing the feature temporal flow. The proposed system was tested on the KITTI evaluation framework and on the New Tsukuba stereo dataset to assess its effectiveness on long tracks and under different illumination conditions. Results of a live archaeological campaign in the Mediterranean Sea, on an AUV equipped with a stereo camera pair, show that our solution can effectively work in underwater environments.
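The keyframe-selection idea described above can be reduced to a simple rule: track how far features have moved since the last keyframe and promote the current frame once the displacement is large enough. The sketch below is our own simplification — the threshold and the toy feature tracks are invented, and the paper's actual strategy analyses the feature temporal flow in more detail.

```python
import math

def mean_flow(prev_pts, curr_pts):
    """Mean pixel displacement between two sets of tracked feature points."""
    return sum(math.dist(p, q) for p, q in zip(prev_pts, curr_pts)) / len(prev_pts)

def is_new_keyframe(keyframe_pts, curr_pts, threshold_px=20.0):
    """Promote the current frame to a keyframe once features have drifted far
    enough from the last keyframe (threshold is an illustrative value)."""
    return mean_flow(keyframe_pts, curr_pts) >= threshold_px

keyframe_pts = [(100.0, 100.0), (200.0, 150.0), (320.0, 240.0)]
small_motion = [(102.0, 101.0), (203.0, 151.0), (322.0, 241.0)]
large_motion = [(130.0, 115.0), (228.0, 168.0), (355.0, 255.0)]
print(is_new_keyframe(keyframe_pts, small_motion))  # False: keep old keyframe
print(is_new_keyframe(keyframe_pts, large_motion))  # True: promote a new keyframe
```

Referring pose estimation to the last keyframe, rather than to the previous frame, means small tracking errors are not compounded at every frame, which is the mechanism that limits drift on long paths.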
Underwater 3D Reconstruction Based on Physical Models for Refraction and Underwater Light Propagation
In recent years, underwater imaging has gained a lot of popularity partly due to the availability of off-the-shelf consumer cameras, but also due to a growing interest in the ocean floor by science and industry. Apart from capturing single images or sequences, the application of methods from the area of computer vision has gained interest as well. However, water affects image formation in two major ways. First, while traveling through the water, light is attenuated and scattered depending on the light's wavelength, causing the typical strong green or blue hue in underwater images. Second, cameras used in underwater scenarios need to be confined in an underwater housing, viewing the scene through a flat or dome-shaped glass port. The inside of the housing is filled with air. Consequently, the light entering the housing needs to pass a water-glass interface, then a glass-air interface, and is thus refracted twice, affecting underwater image formation geometrically. In classic Structure-from-Motion (SfM) approaches, the perspective camera model is usually assumed; however, it can be shown that it becomes invalid due to refraction in underwater scenarios. Therefore, this thesis proposes an adaptation of the SfM algorithm to underwater image formation with flat port underwater housings, i.e. it introduces a method where refraction at the underwater housing is modeled explicitly. This includes a calibration approach, algorithms for relative and absolute pose estimation, an efficient, non-linear error function that is utilized in bundle adjustment, and a refractive plane sweep algorithm. Finally, if calibration data for an underwater light propagation model exists, the dense depth maps can be used to correct texture colors.
Experiments with a perspective and the proposed refractive approach to 3D reconstruction revealed that the perspective approach does indeed suffer from a systematic model error, which depends on the distance between camera and glass and on a possible tilt of the glass with respect to the image sensor. The proposed method shows no such systematic error and thus provides more accurate results for underwater image sequences.
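The double refraction that the thesis models explicitly can be sketched with Snell's law: a ray leaving the camera (in air) bends at the air-glass interface and again at the glass-water interface of a flat port. The refractive indices below are standard textbook values, and the geometry is simplified to a single incidence angle against the port normal; this is an illustration of the physical effect, not the thesis's calibration or reconstruction method.

```python
import math

N_AIR, N_GLASS, N_WATER = 1.0, 1.5, 1.333  # typical refractive indices

def refract_angle(theta_in_rad, n_in, n_out):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    return math.asin(n_in * math.sin(theta_in_rad) / n_out)

theta_air = math.radians(30.0)                    # ray angle inside the housing
theta_glass = refract_angle(theta_air, N_AIR, N_GLASS)
theta_water = refract_angle(theta_glass, N_GLASS, N_WATER)
print(round(math.degrees(theta_glass), 2),        # ~19.47 degrees in the glass
      round(math.degrees(theta_water), 2))        # ~22.02 degrees in the water
```

Note that for the ray directions the glass drops out (n_air·sin θ_air = n_water·sin θ_water), but the glass thickness still displaces each ray laterally, so rays no longer pass through a single center of projection — which is exactly why the perspective camera model becomes invalid for flat-port housings.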