52 research outputs found

    Low-Cost Vision Based Autonomous Underwater Vehicle for Abyssal Ocean Ecosystem Research

    The oceans have a major impact on the planet: they store 28% of the CO2 produced by humans, they act as the world's thermal damper for temperature changes, and more than 17,000 species call the deep oceans their home. Scientific drivers, like climate change, and commercial applications, like deep-sea fisheries and underwater mining, are pushing the need to know more about oceans at depths beyond 1000 meters. However, the high cost associated with autonomous underwater vehicles (AUVs) capable of operating beyond the depth of 1000 meters has limited the study of the deep ocean. Traditional AUVs used for deep-sea navigation are large and typically weigh upwards of 1000 kg, thus requiring careful planning before deployment and multi-person teams to operate. This thesis proposes a new vehicle design based around a low-cost oceanographic glass sphere as the main pressure enclosure to reduce size and cost while maintaining the ability for deep-sea operation. This novel housing concept, together with a minimal sensor suite, enables environmental research at depths previously inaccessible at this price point. The key characteristic that enables the cost reduction of this platform is the removal of the Doppler velocity log (DVL) sensor, which is replaced by optical cameras. Cameras not only allow the vehicle to estimate its motion through the water, but also enable scientific applications such as identification of habitat types or population density estimation of benthic species. After each survey, images can be further processed to produce full, dense 3D models of the survey area. While underwater optical cameras are frequently placed inside pressure housings behind flat or domed viewports and used for visual navigation or 3D reconstruction, the underlying assumptions of those algorithms do not hold in the underwater domain. Refraction at the housing viewport, together with wavelength-dependent attenuation of light in water, renders the ubiquitous pinhole camera model invalid. This thesis presents a quantitative evaluation of the errors introduced by underwater effects for 3D reconstruction applications, comparing low- and high-cost camera systems to quantify the trade-off between equipment cost and performance. Although the distortion effects created by underwater refraction of light have been extensively studied for more traditional viewports, the proposed design necessitates new research into modeling the lensing effect of its off-axis domed viewport. A novel calibration method is presented that explicitly models the effect of the glass interface on image formation based on a characterization of optical distortions. The method accurately finds the position of the camera within the dome and further enables the use of deconvolution to improve the quality of the captured images. Finally, this thesis presents the validation of the designed vehicle for optical surveying tasks and introduces an end-to-end ocean mapping pipeline to streamline AUV deployments, enabling efficient use of time and resources.
    PhD thesis, Naval Architecture & Marine Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155225/1/eiscar_1.pd
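
    To illustrate why an off-axis camera inside a spherical dome port breaks the pinhole assumption, the sketch below traces a single pixel ray through the air-glass-water interfaces with Snell's law. This is a minimal illustration, not the calibration method from the thesis; the refractive indices, dome radii, camera offset, and function names are all assumed values.

```python
# Minimal sketch: trace one camera ray through a spherical dome port.
# Illustrative only; all geometry and indices below are assumptions.
import numpy as np

N_AIR, N_GLASS, N_WATER = 1.000, 1.460, 1.333  # assumed refractive indices

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n (eta = n1/n2)."""
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:          # total internal reflection
        return None
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def sphere_hit(o, d, center, radius):
    """Positive ray parameter t with |o + t*d - center| = radius (ray starts inside)."""
    oc = o - center
    b = np.dot(oc, d)
    disc = b**2 - (np.dot(oc, oc) - radius**2)
    return -b + np.sqrt(disc)          # far intersection, always > 0 from inside

def trace_through_dome(cam_center, ray_dir, dome_center, r_in, r_out):
    """Return exit point on the outer dome surface and the ray direction in water."""
    d = np.asarray(ray_dir, dtype=float)
    d /= np.linalg.norm(d)
    o = np.asarray(cam_center, dtype=float)
    c = np.asarray(dome_center, dtype=float)
    for radius, eta in ((r_in, N_AIR / N_GLASS), (r_out, N_GLASS / N_WATER)):
        o = o + sphere_hit(o, d, c, radius) * d          # hit point on interface
        normal = -(o - c) / radius                       # normal facing the ray
        d = refract(d, normal, eta)
        if d is None:
            raise ValueError("total internal reflection at the dome")
    return o, d

# A camera 5 mm off the dome centre: an on-axis ray passes straight through,
# an oblique ray is bent, which is the "lensing" effect a calibration must model.
exit_point, water_dir = trace_through_dome(
    cam_center=[0.005, 0.0, 0.0], ray_dir=[0.3, 0.0, 1.0],
    dome_center=[0.0, 0.0, 0.0], r_in=0.045, r_out=0.050)
print("exit point:", exit_point, "water-side direction:", water_dir)
```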

    High-resolution underwater robotic vision-based mapping and three-dimensional reconstruction for archaeology

    Documenting underwater archaeological sites is an extremely challenging problem. Sites covering large areas are particularly daunting for traditional techniques. In this paper, we present a novel approach to this problem using both an autonomous underwater vehicle (AUV) and a diver-controlled stereo imaging platform to document the submerged Bronze Age city at Pavlopetri, Greece. The result is a three-dimensional (3D) reconstruction covering 26,600 m² at a resolution of 2 mm/pixel, the largest-scale underwater optical 3D map at such a resolution in the world to date. We discuss the advances necessary to achieve this result, including i) an approach to color correct large numbers of images taken at varying altitudes and over varying bottom types; ii) a large-scale bundle adjustment framework capable of handling upward of 400,000 stereo images; and iii) a novel approach to the registration and rapid documentation of an underwater excavation area that can quickly produce maps of site change. We present visual and quantitative comparisons to the authors' previous underwater mapping approaches.
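
    The altitude-dependent color correction mentioned above accounts for wavelength-dependent attenuation that grows with imaging distance. The sketch below is not the authors' algorithm; it only illustrates the underlying Beer-Lambert idea, with assumed per-channel attenuation coefficients and a hypothetical correct_color helper.

```python
# Minimal sketch of altitude-dependent color correction for seafloor images.
# Not the paper's method; coefficients and path-length model are assumptions.
import numpy as np

# assumed diffuse attenuation coefficients [1/m] for (R, G, B)
BETA = np.array([0.60, 0.12, 0.08])

def correct_color(image_rgb: np.ndarray, altitude_m: float) -> np.ndarray:
    """Undo approximate Beer-Lambert attenuation over the light path.

    image_rgb  : float array in [0, 1], shape (H, W, 3)
    altitude_m : camera altitude above the seafloor in meters
    """
    path_length = 2.0 * altitude_m               # roughly lamp -> bottom -> camera
    gain = np.exp(BETA * path_length)            # per-channel restoration gain
    return np.clip(image_rgb * gain, 0.0, 1.0)

if __name__ == "__main__":
    raw = np.random.rand(480, 640, 3) * np.array([0.1, 0.5, 0.6])  # fake bluish frame
    corrected = correct_color(raw, altitude_m=2.5)
    print("mean RGB before:", raw.mean(axis=(0, 1)))
    print("mean RGB after :", corrected.mean(axis=(0, 1)))
```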

    Optical Imaging and Image Restoration Techniques for Deep Ocean Mapping: A Comprehensive Survey

    Visual systems are receiving increasing attention in underwater applications. While the photogrammetric and computer vision literature has so far largely targeted shallow-water applications, deep-sea mapping research has recently come into focus. The majority of the seafloor, and of Earth's surface, is located in the deep ocean below 200 m depth and is still largely uncharted. Here, on top of the general image quality degradation caused by water absorption and scattering, artificial illumination of the survey areas is mandatory, since they otherwise reside in permanent darkness where no sunlight reaches. This creates unintended non-uniform lighting patterns in the images and non-isotropic scattering effects close to the camera. If not compensated properly, such effects dominate seafloor mosaics and can obscure the actual seafloor structures. Moreover, cameras must be protected from the high water pressure, e.g. by housings with thick glass ports, which can lead to refractive distortions in images. Additionally, no satellite navigation is available to support localization. All these issues render deep-sea visual mapping a challenging task, and most of the methods and strategies developed for shallow water cannot be directly transferred to the seafloor at several kilometers depth. In this survey we provide a state-of-the-art review of deep ocean mapping, starting from existing systems and challenges, and discussing shallow- and deep-water models and corresponding solutions. Finally, we identify open issues for future lines of research.
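
    To make the lighting problem concrete, the sketch below shows a generic flat-fielding correction, not a method proposed in the survey: the lamp pattern is estimated as a smoothed average of many frames (assuming the seafloor texture averages out) and divided back out. Function names and parameters are illustrative.

```python
# Minimal flat-fielding sketch for non-uniform artificial lighting.
# Generic illustration only; frames here are synthetic stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_light_field(frames: np.ndarray, sigma: float = 25.0) -> np.ndarray:
    """frames: (N, H, W) grayscale stack -> (H, W) smooth illumination map."""
    mean_frame = frames.mean(axis=0)
    field = gaussian_filter(mean_frame, sigma=sigma)
    return field / field.max()                 # normalize so correction is a gain

def compensate(frame: np.ndarray, field: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Divide out the lighting pattern; eps avoids blow-up in dark corners."""
    return np.clip(frame / np.maximum(field, eps), 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.uniform(0.2, 0.8, size=(50, 240, 320))    # stand-in survey frames
    field = estimate_light_field(frames)
    flat = compensate(frames[0], field)
    print("corrected frame range:", flat.min(), flat.max())
```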

    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor light conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an autonomous underwater vehicle (AUV) with object detection, localization and tracking capabilities. In this paper, we describe the integration of a vision system in the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and return a reliable pose estimate even in case of partial pipe visibility. Experiments in an outdoor water pool under different light conditions show that the adopted algorithmic approach allows detection of target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations.
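
    As a rough illustration of the fixed-gain tracking the paper relies on, the sketch below implements a generic alpha-beta filter that coasts on its prediction when the pipe edge is occluded. The tracked quantity (a single edge parameter in image coordinates), the gains, and the timing are assumptions, not the paper's configuration.

```python
# Generic alpha-beta filter sketch: predict with a constant-rate model,
# correct with fixed gains, coast when no detection is available.
from typing import Optional

class AlphaBetaFilter:
    def __init__(self, x0: float, alpha: float = 0.5, beta: float = 0.1, dt: float = 0.1):
        self.x, self.v = x0, 0.0          # state estimate and its rate of change
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, measurement: Optional[float]) -> float:
        x_pred = self.x + self.v * self.dt
        if measurement is None:           # edge not detected: coast on prediction
            self.x = x_pred
            return self.x
        r = measurement - x_pred          # innovation
        self.x = x_pred + self.alpha * r
        self.v = self.v + (self.beta / self.dt) * r
        return self.x

if __name__ == "__main__":
    f = AlphaBetaFilter(x0=100.0)
    detections = [101.0, 103.0, None, None, 108.0, 110.0]   # None = edge occluded
    for z in detections:
        print(f"measurement={z}, estimate={f.update(z):.1f}")
```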

    Autonomous underwater vehicle navigation and mapping in dynamic, unstructured environments

    Thesis (Ph.D.), Joint Program in Applied Ocean Science and Engineering (Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science; and the Woods Hole Oceanographic Institution), 2012. This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 91-98). By Clayton Gregory Kunz.
    This thesis presents a system for automatically building 3-D optical and bathymetric maps of underwater terrain using autonomous robots. The maps that are built improve the state of the art in resolution by an order of magnitude, while fusing bathymetric information from acoustic ranging sensors with visual texture captured by cameras. As part of the mapping process, several internal relationships between sensors are automatically calibrated, including the roll and pitch offsets of the velocity sensor, the attitude offset of the multibeam acoustic ranging sensor, and the full six-degree-of-freedom offset of the camera. The system uses pose graph optimization to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame, and takes into account the case where the terrain being mapped is drifting and rotating by estimating the orientation of the terrain at each time step in the robot's trajectory. Relative pose constraints are introduced into the pose graph based on multibeam submap matching using depth image correlation, while landmark-based constraints are used in the graph where visual features are available. The two types of constraints work in concert in a single optimization, fusing information from both types of mapping sensors and yielding a texture-mapped 3-D mesh for visualization. The optimization framework also allows for the straightforward introduction of constraints provided by the particular suite of sensors available, so that the navigation and mapping system presented works under a variety of deployment scenarios, including the potential incorporation of external localization systems such as long-baseline acoustic networks. Results of using the system to map the draft of rotating Antarctic ice floes are presented, as are results fusing optical and range data of a coral reef.
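
    To make the constraint-fusion idea concrete, here is a toy pose-graph least-squares problem (positions only, no orientations) in which odometry-style constraints and one submap-match-style constraint are solved jointly. This is a didactic reduction, not the thesis' optimizer or state parameterization; all values are made up.

```python
# Toy pose graph: relative-displacement constraints fused in one least-squares solve.
import numpy as np
from scipy.optimize import least_squares

# dead-reckoned 2D positions of four poses along a trackline (assumed values)
initial = np.array([[0.0, 0.0], [1.0, 0.1], [2.1, 0.1], [3.0, 0.3]])

# (i, j, measured displacement of pose j relative to pose i)
constraints = [
    (0, 1, np.array([1.0, 0.0])),
    (1, 2, np.array([1.0, 0.0])),
    (2, 3, np.array([1.0, 0.0])),
    (0, 3, np.array([3.0, 0.0])),   # loop-style constraint, e.g. from submap matching
]

def residuals(flat_poses):
    poses = flat_poses.reshape(-1, 2)
    res = [poses[0] - initial[0]]                 # anchor the first pose
    for i, j, meas in constraints:
        res.append((poses[j] - poses[i]) - meas)  # relative-pose error
    return np.concatenate(res)

solution = least_squares(residuals, initial.ravel()).x.reshape(-1, 2)
print(np.round(solution, 3))   # drift in the initial guess is distributed away
```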

    Localization, Mapping and SLAM in Marine and Underwater Environments

    The use of robots in marine and underwater applications is growing rapidly. These applications share the common requirement of modeling the environment and estimating the robots' poses. Although there are several mapping, SLAM, target detection and localization methods, marine and underwater environments have several challenging characteristics, such as poor visibility, water currents, communication issues, sonar inaccuracies, and unstructured environments, that have to be considered. The purpose of this Special Issue is to present current research trends in underwater localization, mapping, SLAM, and target detection and localization. To this end, we have collected seven articles from leading researchers in the field, presenting the different approaches and methods currently being investigated to improve the performance of underwater robots.

    Map building fusing acoustic and visual information using autonomous underwater vehicles

    Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473.
    We present a system for automatically building 3-D maps of underwater terrain fusing visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the on-board velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are used to impose relative pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system presented works under a variety of deployment scenarios, on robots with diverse sensor suites. Results of using the system to map the structure and appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle.
    The work described herein was funded by the National Science Foundation Censsis ERC under grant number EEC-9986821, and by the National Oceanic and Atmospheric Administration under grant number NA090AR4320129.
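
    The bathymetry submap match that feeds the pose graph can be pictured as a 2-D correlation of depth images: the correlation peak gives the horizontal offset between two overlapping submaps. The sketch below illustrates that general idea on synthetic grids; it is not the paper's implementation, and the grid values and offsets are made up.

```python
# Minimal sketch: recover the horizontal offset between two overlapping
# bathymetry submaps by 2-D cross-correlation of their depth images.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)
seafloor = rng.normal(0.0, 1.0, size=(80, 80))            # synthetic depth map [m]

submap_a = seafloor[10:50, 10:50]
submap_b = seafloor[16:56, 13:53]                          # window shifted by (6, 3) cells

# zero-mean the submaps so overall depth does not dominate the correlation
a = submap_a - submap_a.mean()
b = submap_b - submap_b.mean()
corr = correlate2d(a, b, mode="full")

peak = np.unravel_index(np.argmax(corr), corr.shape)
offset = (peak[0] - (submap_b.shape[0] - 1), peak[1] - (submap_b.shape[1] - 1))
print("estimated grid offset (rows, cols):", offset)       # expect (6, 3)
```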

    Object Perception in Underwater Environments: A Survey on Sensors and Sensing Methodologies

    Underwater robots play a critical role in the marine industry. Object perception is the foundation for the automatic operations of submerged vehicles in dynamic aquatic environments. However, underwater perception encounters multiple environmental challenges, including rapid light attenuation, light refraction, and backscattering effects. These problems reduce the sensing devices' signal-to-noise ratio (SNR), making underwater perception a complicated research topic. This paper describes the state-of-the-art sensing technologies and object perception techniques for underwater robots in different environmental conditions. Due to the various constraints and characteristics of current sensing modalities, we divide perception into close-range, medium-range, and long-range regimes. We survey and describe recent advances for each perception range and suggest potential future research directions worth investigating in this field.

    Needs and gaps in optical underwater technologies and methods for the investigation of marine animal forest 3D-structural complexity

    Marine animal forests are benthic communities dominated by sessile suspension feeders (such as sponges, corals, and bivalves) able to generate three-dimensional (3D) frameworks with high structural complexity. The biodiversity and functioning of marine animal forests are strictly related to their 3D complexity. The present paper aims to provide new perspectives on underwater optical surveys. Starting from the current gaps in data collection and analysis that critically limit the study and conservation of marine animal forests, we discuss the main technological and methodological needs for the investigation of their 3D structural complexity at different spatial and temporal scales. Despite recent technological advances, several issues in data acquisition and processing still need to be solved to properly map the different benthic habitats in which marine animal forests are present, assess their health status, and measure their structural complexity. Appropriate precision and accuracy should be chosen and ensured in relation to the biological and ecological processes investigated. In addition, standardized methods and protocols are necessary to meet the FAIR (findability, accessibility, interoperability, and reusability) data principles for the stewardship of habitat mapping and biodiversity, biomass, and growth data.
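
    One concrete example of a structural-complexity measure that could be extracted from gridded 3D reconstructions of such habitats is surface rugosity (true 3D surface area divided by planar footprint area). The sketch below computes it from a synthetic elevation grid; the metric choice and all values are illustrative, not prescribed by the paper.

```python
# Minimal sketch: surface rugosity of a gridded elevation model.
import numpy as np

def rugosity(height: np.ndarray, cell: float) -> float:
    """height: (H, W) elevation grid [m]; cell: grid spacing [m]."""
    total_3d = 0.0
    for i in range(height.shape[0] - 1):
        for j in range(height.shape[1] - 1):
            # split each grid cell into two triangles and sum their 3D areas
            z00, z10 = height[i, j], height[i + 1, j]
            z01, z11 = height[i, j + 1], height[i + 1, j + 1]
            p00 = np.array([0.0, 0.0, z00]); p10 = np.array([cell, 0.0, z10])
            p01 = np.array([0.0, cell, z01]); p11 = np.array([cell, cell, z11])
            a1 = 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
            a2 = 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
            total_3d += a1 + a2
    planar = (height.shape[0] - 1) * (height.shape[1] - 1) * cell**2
    return total_3d / planar

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.zeros((50, 50))
    rough = rng.normal(0.0, 0.02, size=(50, 50))           # bumpy synthetic patch
    print("flat  rugosity:", rugosity(flat, cell=0.01))    # ~1.0
    print("rough rugosity:", rugosity(rough, cell=0.01))   # > 1.0
```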