203 research outputs found

    Real-time Model-based Image Color Correction for Underwater Robots

    Recently, a new underwater image formation model showed that the coefficients governing the direct and backscattered signal components depend on the type of water, the camera specifications, the water depth, and the imaging range. This paper proposes an underwater color correction method that integrates this new model into an underwater robot, using a pressure sensor for water depth and a visual odometry system for estimating scene distance. Experiments were performed with and without a color chart over coral reefs and a shipwreck in the Caribbean. We demonstrate the performance of the proposed method by comparing it with other statistics-, physics-, and learning-based color correction methods. Applications of the proposed method include improved 3D reconstruction and more robust underwater robot navigation. Comment: Accepted at the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
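
    As a rough illustration of the model-based correction described above, the sketch below inverts a simplified per-channel image formation model of the form I_c = J_c * exp(-beta_d,c * z) + B_inf,c * (1 - exp(-beta_b,c * z)) using a per-pixel range map. The function name, coefficient values, and NumPy interface are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def correct_color(image, scene_range, beta_d, beta_b, backscatter_inf):
    """Invert a simplified revised underwater image formation model.

    image          : HxWx3 float array in [0, 1], raw underwater image
    scene_range    : HxW array of camera-to-scene distances in metres
                     (e.g. from visual odometry or stereo)
    beta_d         : per-channel direct-signal attenuation coefficients (1/m)
    beta_b         : per-channel backscatter coefficients (1/m)
    backscatter_inf: per-channel veiling light (backscatter at infinity)

    Model (per channel c):
        I_c = J_c * exp(-beta_d_c * z) + B_inf_c * (1 - exp(-beta_b_c * z))
    so the restored radiance is
        J_c = (I_c - B_inf_c * (1 - exp(-beta_b_c * z))) * exp(beta_d_c * z)
    """
    z = scene_range[..., None]                      # HxWx1, broadcast over channels
    backscatter = backscatter_inf * (1.0 - np.exp(-beta_b * z))
    direct = np.clip(image - backscatter, 0.0, None)
    restored = direct * np.exp(beta_d * z)
    return np.clip(restored, 0.0, 1.0)

# Example with hypothetical coefficients for clear oceanic water (R, G, B):
# restored = correct_color(raw, ranges,
#                          beta_d=np.array([0.35, 0.12, 0.07]),
#                          beta_b=np.array([0.30, 0.15, 0.10]),
#                          backscatter_inf=np.array([0.05, 0.20, 0.30]))
```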

    A Robust Approach for Monocular Visual Odometry in Underwater Environments

    This work presents a visual odometry system for camera tracking in underwater seafloor scenarios that are strongly perturbed by sunlight caustics and cloudy water. In particular, we focus on the performance and robustness of the system, which structurally couples a deflickering filter with a visual tracker. Two state-of-the-art trackers are employed in our study, one pixel-oriented and the other feature-based. The internal mechanisms of the trackers are broken down and their suitability for underwater environments analyzed comparatively. To this end, real subaquatic footage from perturbed environments was employed. Sociedad Argentina de Informática e Investigación Operativa.
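
    The abstract does not specify how the deflickering filter is built; the following is a minimal sketch of one plausible approach, a temporal-median illumination correction using OpenCV and NumPy, that could precede either tracker. The class name and window parameters are hypothetical.

```python
from collections import deque
import cv2
import numpy as np

class TemporalDeflicker:
    """Suppress sunlight-caustic flicker by normalizing each frame with a
    running temporal median of its low-pass illumination component."""

    def __init__(self, window=5, blur_ksize=31):
        self.frames = deque(maxlen=window)
        self.blur_ksize = blur_ksize

    def __call__(self, gray_frame):
        # Estimate the slowly varying illumination of the current frame.
        illum = cv2.GaussianBlur(gray_frame.astype(np.float32),
                                 (self.blur_ksize, self.blur_ksize), 0)
        self.frames.append(illum)
        # The median illumination over the temporal window is stable
        # against short-lived caustic flashes.
        median_illum = np.median(np.stack(self.frames), axis=0)
        corrected = gray_frame.astype(np.float32) * (median_illum + 1e-6) / (illum + 1e-6)
        return np.clip(corrected, 0, 255).astype(np.uint8)

# deflicker = TemporalDeflicker()
# stabilized = deflicker(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
# ...feed `stabilized` to the pixel-oriented or feature-based tracker.
```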

    Internet of Underwater Things and Big Marine Data Analytics -- A Comprehensive Survey

    The Internet of Underwater Things (IoUT) is an emerging communication ecosystem developed for connecting underwater objects in maritime and underwater environments. The IoUT technology is intricately linked with intelligent boats and ships, smart shores and oceans, automatic marine transportation, positioning and navigation, underwater exploration, disaster prediction and prevention, as well as intelligent monitoring and security. The IoUT exerts influence at various scales, ranging from a small scientific observatory, to a mid-sized harbor, to global oceanic trade. The network architecture of the IoUT is intrinsically heterogeneous and should be sufficiently resilient to operate in harsh environments. This creates major challenges in terms of underwater communications, whilst relying on limited energy resources. Additionally, the volume, velocity, and variety of data produced by sensors, hydrophones, and cameras in the IoUT are enormous, giving rise to the concept of Big Marine Data (BMD), which has its own processing challenges. Hence, conventional data processing techniques will falter, and bespoke Machine Learning (ML) solutions have to be employed to automatically learn the specific BMD behavior and features, facilitating knowledge extraction and decision support. The motivation of this paper is to comprehensively survey the IoUT, BMD, and their synthesis. It also aims to explore the nexus of BMD with ML. We set out from underwater data collection and then discuss the family of IoUT data communication techniques with an emphasis on state-of-the-art research challenges. We then review the suite of ML solutions suitable for BMD handling and analytics. We treat the subject deductively from an educational perspective, critically appraising the material surveyed. Comment: 54 pages, 11 figures, 19 tables, IEEE Communications Surveys & Tutorials, peer-reviewed academic journal.

    Optical Imaging and Image Restoration Techniques for Deep Ocean Mapping: A Comprehensive Survey

    Visual systems are receiving increasing attention in underwater applications. While the photogrammetric and computer vision literature has so far largely targeted shallow-water applications, deep sea mapping research has recently also come into focus. The majority of the seafloor, and of Earth's surface, lies in the deep ocean below 200 m depth and is still largely uncharted. Here, on top of the general image quality degradation caused by water absorption and scattering, additional artificial illumination of the survey areas is mandatory, as they otherwise reside in permanent darkness that no sunlight reaches. This creates unintended non-uniform lighting patterns in the images and non-isotropic scattering effects close to the camera. If not compensated properly, such effects dominate seafloor mosaics and can obscure the actual seafloor structures. Moreover, cameras must be protected from the high water pressure, e.g. by housings with thick glass ports, which can lead to refractive distortions in images. Additionally, no satellite navigation is available to support localization. All these issues render deep sea visual mapping a challenging task, and most of the developed methods and strategies cannot be directly transferred to the seafloor at several kilometers' depth. In this survey, we provide a state-of-the-art review of deep ocean mapping, starting from existing systems and challenges, and discussing shallow- and deep-water models and corresponding solutions. Finally, we identify open issues for future lines of research.

    OBJECT PERCEPTION IN UNDERWATER ENVIRONMENTS: A SURVEY ON SENSORS AND SENSING METHODOLOGIES

    Underwater robots play a critical role in the marine industry. Object perception is the foundation for the automatic operation of submerged vehicles in dynamic aquatic environments. However, underwater perception encounters multiple environmental challenges, including rapid light attenuation, light refraction, and backscattering effects. These problems reduce the sensing devices’ signal-to-noise ratio (SNR), making underwater perception a complicated research topic. This paper describes the state-of-the-art sensing technologies and object perception techniques for underwater robots in different environmental conditions. Given the various constraints and characteristics of current sensing modalities, we divide the perception ranges into close-range, medium-range, and long-range. We survey and describe recent advances for each perception range and suggest some potential future research directions worth investigating in this field.

    Virtually throwing benchmarks into the ocean for deep sea photogrammetry and image processing evaluation

    Vision in the deep sea is attracting increasing interest from many fields, as the deep seafloor represents the largest surface portion on Earth. Unlike common shallow-water imaging, deep sea imaging requires artificial lighting to illuminate the scene in perpetual darkness. Deep sea images suffer from degradation caused by scattering, attenuation, and the effects of artificial light sources, and they have a very different appearance from images taken in shallow water or on land. This impairs transferring current vision methods to deep sea applications. Development of adequate algorithms requires data with ground truth in order to evaluate the methods. However, it is practically impossible to capture a deep sea scene without water or artificial lighting effects. This situation impedes progress in deep sea vision research, where synthesized images with ground truth could be a good solution. Most current methods either render a virtual 3D model or use atmospheric image formation models to convert real-world scenes to a shallow-water appearance illuminated by sunlight. Currently, there is a lack of image datasets dedicated to deep sea vision evaluation. This paper introduces a pipeline to synthesize deep sea images using existing real-world RGB-D benchmarks and, as an example, generates deep sea twin datasets for the well-known Middlebury stereo benchmarks. They can be used both for testing underwater stereo matching methods and for training and evaluating underwater image processing algorithms. This work aims towards establishing an image benchmark intended particularly for deep sea vision developments.
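
    A minimal sketch of how such an RGB-D-based synthesis might look, assuming a single lamp co-moving with the camera, per-channel attenuation, and a simple backscatter term; all coefficient values and the function interface are illustrative assumptions, and the paper's actual pipeline models additional effects.

```python
import numpy as np

def synthesize_deep_sea(rgb, depth,
                        beta=np.array([0.40, 0.10, 0.07]),
                        light_pos=np.array([0.3, 0.0, 0.0]),
                        light_intensity=15.0,
                        backscatter_inf=np.array([0.02, 0.04, 0.06])):
    """Render a crude deep-sea appearance for an RGB-D benchmark image.

    rgb       : HxWx3 float array in [0, 1] (in-air reference image)
    depth     : HxW array, camera-to-scene range in metres
    beta      : per-channel attenuation coefficients (1/m), assumed values
    light_pos : offset of an artificial lamp relative to the camera (m)

    Light travels lamp -> scene -> camera, so attenuation acts over both
    path segments; inverse-square falloff models the spot light, and a
    range-dependent backscatter term adds the veiling glow.
    """
    lamp_dist = np.sqrt(depth**2 + np.linalg.norm(light_pos)**2)
    path = (depth + lamp_dist)[..., None]                 # total water path, HxWx1
    falloff = (light_intensity / np.maximum(lamp_dist, 0.5)**2)[..., None]
    direct = rgb * falloff * np.exp(-beta * path)
    backscatter = backscatter_inf * (1.0 - np.exp(-beta * depth[..., None]))
    return np.clip(direct + backscatter, 0.0, 1.0)
```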

    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor light conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an Autonomous Underwater Vehicle (AUV) with object detection, localization, and tracking capabilities. In this paper, we describe the integration of a vision system in the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and return a reliable pose estimate even in case of partial pipe visibility. Experiments in an outdoor water pool under different light conditions show that the adopted algorithmic approach allows detection of target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations.
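
    For illustration, a generic alpha-beta filter over pipe-edge line parameters might look like the sketch below; the state layout ([rho, theta]), gains, and class interface are assumptions for this example, not the system's actual implementation.

```python
import numpy as np

class AlphaBetaTracker:
    """Alpha-beta filter for a vector of pipe-edge line parameters
    (e.g. [rho, theta] of a detected edge in the image)."""

    def __init__(self, x0, alpha=0.85, beta=0.005, dt=1.0 / 15):
        self.x = np.asarray(x0, dtype=float)   # state estimate
        self.v = np.zeros_like(self.x)         # state rate estimate
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def predict(self):
        # Constant-velocity prediction; used alone when the pipe is
        # partially occluded and no measurement is available.
        self.x = self.x + self.v * self.dt
        return self.x

    def update(self, measurement):
        # Blend the prediction with the new edge measurement.
        residual = np.asarray(measurement, dtype=float) - self.predict()
        self.x = self.x + self.alpha * residual
        self.v = self.v + self.beta * residual / self.dt
        return self.x

# tracker = AlphaBetaTracker(x0=[120.0, 0.15])   # rho (px), theta (rad)
# smoothed = tracker.update(detected_edge)        # when an edge is detected
# predicted = tracker.predict()                   # when the pipe is partially out of view
```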