    Underwater Computer Vision - Fish Recognition

    The Underwater Computer Vision – Fish Recognition project covers the design and implementation of a device that can withstand staying underwater for an extended period, take pictures of underwater creatures such as fish, and identify certain fish. The system is meant to be cheap to build, yet still able to process the images it takes and identify the objects in the pictures with some accuracy. The device can output its results to another device or to an end user.

    Generation and processing of simulated underwater images for infrastructure visual inspection with UUVs

    The development of computer vision algorithms for navigation or object detection is one of the key issues of underwater robotics. However, extracting features from underwater images is challenging due to lighting defects, which need to be counteracted. This requires good environmental knowledge, either as a dataset or as a physical model. The lack of available data and the high variability of conditions make the development of robust enhancement algorithms difficult. A framework for the development of underwater computer vision algorithms is presented, consisting of a method for underwater imaging simulation and an image enhancement algorithm, both integrated into the open-source robotics simulator UUV Simulator. The imaging simulation is based on a novel combination of a scattering model and style transfer techniques. The use of style transfer allows realistic simulation of different environments without any prior knowledge of them. Moreover, an enhancement algorithm has been developed that successfully corrects imaging defects in any given scenario, for both real and synthetic images. The proposed approach thus constitutes a novel framework for the development of underwater computer vision algorithms for SLAM, navigation, or object detection in UUVs.
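    The scattering models underlying such simulators typically combine per-channel attenuation with a backscatter (veiling-light) term. Below is a minimal sketch of that generic image formation model in Python, not the paper's actual implementation; the attenuation coefficients and veiling-light colour are illustrative assumptions.

```python
import numpy as np

def simulate_underwater(in_air_rgb, depth_m,
                        beta=(0.40, 0.12, 0.08),   # assumed per-channel attenuation (R, G, B), 1/m
                        veil=(0.05, 0.20, 0.30)):  # assumed backscatter (veiling-light) colour
    """in_air_rgb: HxWx3 float image in [0, 1]; depth_m: HxW range map in metres."""
    out = np.empty_like(in_air_rgb)
    for c in range(3):
        t = np.exp(-beta[c] * depth_m)             # direct-signal transmission per pixel
        # attenuated direct signal plus depth-dependent backscatter
        out[..., c] = in_air_rgb[..., c] * t + veil[c] * (1.0 - t)
    return np.clip(out, 0.0, 1.0)
```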

    SubmergeStyleGAN: Synthetic Underwater Data Generation with Style Transfer for Domain Adaptation

    Underwater computer vision applications are challenged by limited access to annotated underwater datasets. Additionally, convolutional neural networks (CNNs) trained on in-air datasets do not perform well underwater due to the high domain variance caused by the degradation impact of the water column. This paper proposes an air-to-water dataset generator that creates visually plausible underwater scenes out of existing in-air datasets. SubmergeStyleGAN, a generative adversarial network (GAN) designed to model attenuation, backscattering, and absorption, utilizes depth maps to apply range-dependent attenuation style transfer. In this work, the generated attenuated images and their corresponding original pairs are used to train an underwater image enhancement CNN. Real underwater datasets were used to validate the proposed approach by assessing various image quality metrics, including UCIQE, UIQM, and CCF, as well as disparity estimation accuracy before and after enhancement. SubmergeStyleGAN exhibits a faster and more robust training procedure than existing methods in the literature.
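    A hedged sketch of how such generated pairs could supervise an enhancement network: the synthetic underwater image is the input and its original in-air counterpart the target. The toy network, L1 loss, and dummy batch below are placeholder assumptions, not SubmergeStyleGAN's actual architecture or training setup.

```python
import torch
import torch.nn as nn

# Toy convolutional stand-in for a real enhancement CNN.
enhancer = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)
opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)

# Dummy (synthetic underwater, original in-air) pair standing in for real data.
paired_batches = [(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))]

for underwater, target in paired_batches:
    opt.zero_grad()
    loss = nn.functional.l1_loss(enhancer(underwater), target)  # pixel-wise L1
    loss.backward()
    opt.step()
```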

    Deep Sea Robotic Imaging Simulator

    Underwater vision systems are now widely applied in ocean research. However, the largest portion of the ocean, the deep sea, still remains mostly unexplored. Only relatively few image sets have been taken from the deep sea due to the physical limitations caused by technical challenges and enormous costs. Deep sea images are very different from images taken in shallow water, and this area has not received much attention from the community. The shortage of deep sea images and the corresponding ground truth data for evaluation and training is becoming a bottleneck for the development of underwater computer vision methods. Thus, this paper presents a physical model-based image simulation solution, which uses in-air texture and depth information as inputs, to generate underwater image sequences taken by robots in deep ocean scenarios. Unlike shallow water conditions, artificial illumination plays a vital role in deep sea image formation, as it strongly affects the scene appearance. Our radiometric image formation model considers both attenuation and scattering effects with co-moving spotlights in the dark. Based on a detailed analysis and evaluation of the underwater image formation model, we propose a 3D lookup table structure in combination with a novel rendering strategy to improve simulation performance. This enables us to integrate an interactive deep sea robotic vision simulation into the Unmanned Underwater Vehicle simulator. To inspire further deep sea vision research by the community, we release the source code of our deep sea image converter to the public (https://www.geomar.de/en/omv-research/robotic-imaging-simulator).
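    The lookup-table idea can be illustrated generically: evaluate an expensive per-(x, y, depth) quantity once on a coarse 3D grid, then interpolate it per pixel at render time. The grid resolution and the placeholder scattering term below are assumptions for illustration, not the paper's model.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

xs = np.linspace(0.0, 1.0, 32)       # normalised image x
ys = np.linspace(0.0, 1.0, 32)       # normalised image y
ds = np.linspace(0.1, 10.0, 16)      # scene depth in metres

def scatter_term(x, y, d):
    # Placeholder for a costly scattering computation (e.g. a spotlight
    # scattering integral); cheap analytic stand-in for illustration.
    return np.exp(-0.3 * d) * (1.0 - 0.5 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))

# Precompute the 3D table once.
grid = scatter_term(*np.meshgrid(xs, ys, ds, indexing="ij"))
lut = RegularGridInterpolator((xs, ys, ds), grid)

# At render time, each pixel costs one trilinear interpolation
# instead of a full model evaluation.
value = lut([[0.25, 0.75, 3.2]])
```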

    A realistic fish-habitat dataset to evaluate algorithms for underwater visual analysis

    Visual analysis of complex fish habitats is an important step towards sustainable fisheries for human consumption and environmental protection. Deep learning methods have shown great promise for scene analysis when trained on large-scale datasets. However, current datasets for fish analysis tend to focus on the classification task within constrained, plain environments, which do not capture the complexity of underwater fish habitats. To address this limitation, we present DeepFish, a benchmark suite with a large-scale dataset for training and testing methods for several computer vision tasks. The dataset consists of approximately 40 thousand images collected underwater from 20 habitats in the marine environments of tropical Australia. The dataset originally contained only classification labels, so we collected point-level and segmentation labels to provide a more comprehensive fish analysis benchmark. These labels enable models to learn to automatically monitor fish counts, identify fish locations, and estimate their sizes. Our experiments provide an in-depth analysis of the dataset characteristics and a performance evaluation of several state-of-the-art approaches on our benchmark. Although models pre-trained on ImageNet perform well on this benchmark, there is still room for improvement. This benchmark therefore serves as a testbed to motivate further development in this challenging domain of underwater computer vision.
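    As an illustration of the kind of ImageNet-pretrained baseline such a benchmark evaluates, the sketch below fine-tunes a ResNet-18 for binary fish classification. The dummy batch stands in for real DeepFish images; the benchmark's actual code and data loaders differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with a new 2-way head (fish present / absent).
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for DeepFish images and labels.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

opt.zero_grad()
criterion(model(images), labels).backward()  # one fine-tuning step
opt.step()
```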

    Underwater Gesture Recognition Using Classical Computer Vision and Deep Learning Techniques

    Underwater gesture recognition is a challenging task, since conditions that are normally not an issue in gesture recognition on land must be considered. Such issues include low visibility, low contrast, and unequal spectral propagation. In this work, we explore the underwater gesture recognition problem using the recently released Cognitive Autonomous Diving Buddy (CADDY) Underwater Gestures dataset. The contributions of this paper are as follows: (1) use traditional computer vision techniques along with classical machine learning to perform gesture recognition on the CADDY dataset; (2) apply deep learning using a convolutional neural network (CNN) to solve the same problem; (3) perform confusion matrix analysis to determine which types of gestures are relatively difficult to recognize, and understand why; (4) compare the performance of these methods in terms of accuracy and inference speed. We achieve up to 97.06% accuracy with our CNN. To the best of our knowledge, our work is one of the earliest attempts, if not the first, to apply computer vision and machine learning techniques for gesture recognition on this dataset. As such, we hope this work will serve as a benchmark for future work on the CADDY dataset.
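    A minimal sketch of the classical pipeline the first contribution describes: hand-crafted features feeding a classical classifier, here HOG descriptors and an SVM. The dummy crops, label count, and HOG parameters are assumptions, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

# Dummy grayscale gesture crops and labels standing in for CADDY data.
X_img = np.random.rand(40, 128, 128)
y = np.random.randint(0, 5, 40)

# One HOG descriptor per image (illustrative cell/block sizes).
X = np.array([hog(im, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
              for im in X_img])

clf = SVC(kernel="rbf").fit(X, y)   # classical classifier on the features
pred = clf.predict(X[:5])
```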

    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor lighting conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an Autonomous Underwater Vehicle (AUV) with object detection, localization, and tracking capabilities. In this paper, we describe the integration of a vision system into the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and to return a reliable pose estimate even in case of partial pipe visibility. Experiments in an outdoor water pool under different lighting conditions show that the adopted algorithmic approach allows detection of target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations.
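    An alpha-beta filter is a fixed-gain simplification of the Kalman filter: predict with a constant-velocity model, then correct both position and velocity by fixed fractions of the innovation. The scalar sketch below shows the recursion; the gains and the edge-angle example are illustrative, not the paper's tuning, which tracks pipe-edge parameters.

```python
def alpha_beta_track(measurements, dt=0.1, alpha=0.85, beta=0.005):
    """Track a scalar (e.g. a pipe-edge angle) through noisy measurements."""
    x, v = measurements[0], 0.0      # state: position and velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + dt * v          # predict (constant-velocity model)
        r = z - x_pred               # innovation (measurement residual)
        x = x_pred + alpha * r       # correct position with fixed gain alpha
        v = v + (beta / dt) * r      # correct velocity with fixed gain beta
        estimates.append(x)
    return estimates

# e.g. smoothing a noisy edge-angle sequence (degrees):
print(alpha_beta_track([10.0, 10.5, 9.8, 11.0, 10.7]))
```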