
    Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions

    Underwater image enhancement is an important low-level vision task with many applications, and numerous algorithms have been proposed for it in recent years. These algorithms, developed upon various assumptions, demonstrate success from various aspects using different data sets and different metrics. In this work, we set up an undersea image capturing system and construct a large-scale Real-world Underwater Image Enhancement (RUIE) data set divided into three subsets. The three subsets target three challenging aspects of enhancement: image visibility quality, color casts, and higher-level detection/classification, respectively. We conduct extensive and systematic experiments on RUIE to evaluate the effectiveness and limitations of various algorithms at enhancing visibility and correcting color casts on images with hierarchical categories of degradation. Moreover, underwater image enhancement in practice usually serves as a preprocessing step for mid-level and high-level vision tasks. We thus exploit object detection performance on enhanced images as a new task-specific evaluation criterion. The findings from these evaluations not only confirm what is commonly believed, but also suggest promising solutions and new directions for visibility enhancement, color correction, and object detection on real-world underwater images.
    Comment: arXiv admin note: text overlap with arXiv:1712.04143 by other authors
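
The task-specific criterion described above amounts to scoring detections on enhanced images against ground-truth annotations. A minimal sketch of such scoring (the IoU threshold, box format, and function names are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2); IoU = intersection area / union area.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def detection_hit_rate(predictions, ground_truth, thresh=0.5):
    # Fraction of ground-truth boxes matched by some prediction at IoU >= thresh.
    hits = sum(1 for gt in ground_truth
               if any(iou(p, gt) >= thresh for p in predictions))
    return hits / len(ground_truth) if ground_truth else 0.0
```

Comparing this hit rate on raw versus enhanced frames gives a task-driven score for each enhancement algorithm.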

    Visual-Quality-Driven Learning for Underwater Vision Enhancement

    The image processing community has witnessed remarkable advances in enhancing and restoring images. Nevertheless, restoring the visual quality of underwater images remains a great challenge. End-to-end frameworks may fail to enhance the visual quality of underwater images since, in several scenarios, it is not feasible to provide ground truth of the scene radiance. In this work, we propose a CNN-based approach that does not require ground-truth data, since it uses a set of image quality metrics to guide the restoration learning process. The experiments showed that our method improved the visual quality of underwater images while preserving their edges, and also performed well on the UCIQE metric.
    Comment: Accepted for publication and presented at the 2018 IEEE International Conference on Image Processing (ICIP)
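
The idea of replacing a ground-truth loss with quality metrics can be sketched as a weighted sum of simple no-reference measures. The specific metrics and weights below are illustrative stand-ins, not the paper's actual objective:

```python
import statistics

def contrast_score(pixels):
    # Global contrast proxy: standard deviation of intensities in [0, 1].
    return statistics.pstdev(pixels)

def exposure_penalty(pixels, lo=0.05, hi=0.95):
    # Fraction of pixels that are under- or over-exposed (clipped).
    bad = sum(1 for p in pixels if p < lo or p > hi)
    return bad / len(pixels)

def quality_loss(pixels, w_contrast=1.0, w_exposure=0.5):
    # Ground-truth-free objective: reward contrast, penalize clipping.
    # Lower is better, so contrast enters with a negative sign.
    return -w_contrast * contrast_score(pixels) + w_exposure * exposure_penalty(pixels)
```

A network trained against such a loss needs only degraded inputs, never clean references.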

    Underwater Single Image Color Restoration Using Haze-Lines and a New Quantitative Dataset

    Underwater images suffer from color distortion and low contrast, because light is attenuated while it propagates through water. Attenuation under water varies with wavelength, unlike terrestrial images, where attenuation is assumed to be spectrally uniform. The attenuation depends both on the water body and on the 3D structure of the scene, making color restoration difficult. Unlike existing single underwater image enhancement techniques, our method takes into account multiple spectral profiles of different water types. By estimating just two additional global parameters, the attenuation ratios of the blue-red and blue-green color channels, the problem is reduced to single image dehazing, where all color channels have the same attenuation coefficients. Since the water type is unknown, we evaluate different parameters out of an existing library of water types. Each type leads to a different restored image, and the best result is automatically chosen based on color distribution. We collected a dataset of images taken in different locations with varying water properties, showing color charts in the scenes. Moreover, to obtain ground truth, the 3D structure of the scene was calculated based on stereo imaging. This dataset enables a quantitative evaluation of restoration algorithms on natural images and shows the advantage of our method.
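
The reduction described above can be sketched directly: given the blue channel's transmission and the two attenuation ratios, the other channels' transmissions follow as powers, and each channel is then inverted with the standard image-formation model. A minimal sketch under those assumptions (function names and the final restoration step are illustrative):

```python
def channel_transmissions(t_blue, ratio_blue_red, ratio_blue_green):
    # Given the blue channel's transmission t_B and the attenuation ratios
    # beta_B/beta_R and beta_B/beta_G, the other transmissions are
    # t_c = t_B ** (beta_c / beta_B). Red attenuates fastest underwater,
    # so with ratio_blue_red < 1 we get t_R < t_B, as expected physically.
    t_red = t_blue ** (1.0 / ratio_blue_red)
    t_green = t_blue ** (1.0 / ratio_blue_green)
    return t_red, t_green

def restore_channel(intensity, veiling_light, transmission):
    # Invert the per-channel image-formation model I = J * t + A * (1 - t).
    return (intensity - veiling_light * (1.0 - transmission)) / transmission
```

With all channels reduced to known transmissions, any single-image dehazing pipeline can finish the restoration.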

    Single Image Dehazing through Improved Atmospheric Light Estimation

    Image contrast enhancement for outdoor vision is important for smart car auxiliary transport systems. Video frames captured in poor weather conditions are often characterized by poor visibility. Most image dehazing algorithms rely on hard-threshold assumptions or user input to estimate atmospheric light. However, the brightest pixels are sometimes objects such as car lights or streetlights, especially for smart car auxiliary transport systems, so simply applying a hard threshold may cause a wrong estimate. In this paper, we propose an optimized single-image dehazing method that estimates atmospheric light efficiently and removes haze through the estimation of a semi-globally adaptive filter. The enhanced images exhibit little noise and good exposure in dark regions. The textures and edges of the processed images are also enhanced significantly.
    Comment: Multimedia Tools and Applications (2015)
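
The failure mode above (a headlight masquerading as atmospheric light) motivates estimators that avoid the single brightest pixel. One common-style robust alternative, sketched here with illustrative defaults rather than the paper's actual filter:

```python
def estimate_atmospheric_light(intensities, top_frac=0.001, saturation=0.98):
    # Picking the single brightest pixel can lock onto headlights or
    # streetlights. Instead, drop saturated pixels (likely light sources),
    # then average the top fraction of what remains.
    candidates = [v for v in intensities if v < saturation] or list(intensities)
    candidates.sort(reverse=True)
    k = max(1, int(len(candidates) * top_frac))
    return sum(candidates[:k]) / k
```

The saturation cutoff rejects emissive outliers, while averaging the top fraction smooths over sensor noise.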

    Towards Real-Time Advancement of Underwater Visual Quality with GAN

    Low visual quality has prevented underwater robotic vision from being used in a wide range of applications. Although several algorithms have been developed, real-time and adaptive methods are lacking for real-world tasks. In this paper, we address this difficulty using generative adversarial networks (GAN) and propose a GAN-based restoration scheme (GAN-RS). In particular, we develop a multi-branch discriminator including an adversarial branch and a critic branch for the purpose of simultaneously preserving image content and removing underwater noise. In addition to adversarial learning, a novel dark channel prior loss also encourages the generator to produce realistic images. More specifically, an underwater index is investigated to describe underwater properties, and a loss function based on the underwater index is designed to train the critic branch for underwater noise suppression. Through extensive comparisons on visual quality and feature restoration, we confirm the superiority of the proposed approach. Consequently, GAN-RS can adaptively improve underwater visual quality in real time and induces an overall superior restoration performance. Finally, a real-world experiment is conducted on the seabed for grasping marine products, and the results are quite promising. The source code is publicly available at https://github.com/SeanChenxy/GAN_RS.
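
The generator objective described above combines several terms. A toy sketch of how such terms might be weighted together (the dark-channel helper, weights, and signs are illustrative assumptions, not the GAN-RS losses):

```python
def dark_channel(rgb_patch):
    # Dark channel prior over one local patch: the minimum intensity across
    # all channels and all pixels. rgb_patch is a list of (r, g, b) tuples.
    return min(min(px) for px in rgb_patch)

def generator_loss(adv_score, critic_score, dcp_value,
                   w_adv=1.0, w_critic=1.0, w_dcp=0.1):
    # Illustrative weighted sum: reward adversarial realism (higher score is
    # better, hence the negative sign), penalize the critic's underwater-index
    # output, and penalize a high dark channel (i.e. residual haze/veiling).
    return -w_adv * adv_score + w_critic * critic_score + w_dcp * dcp_value
```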

    Effects of Image Degradations to CNN-based Image Classification

    Just like many other topics in computer vision, image classification has achieved significant progress recently by using deep neural networks, especially Convolutional Neural Networks (CNN). Most existing works focus on classifying very clear natural images, as evidenced by widely used image databases such as Caltech-256, PASCAL VOC, and ImageNet. However, in many real applications, the acquired images may contain degradations that lead to various kinds of blurring, noise, and distortion. One important and interesting problem is the effect of such degradations on the performance of CNN-based image classification. More specifically, we investigate whether image-classification performance drops with each kind of degradation, whether this drop can be avoided by including degraded images in training, and whether existing computer vision algorithms that attempt to remove such degradations can help improve image-classification performance. In this paper, we empirically study this problem for four kinds of degraded images: hazy images, underwater images, motion-blurred images, and fish-eye images. For this study, we synthesize a large number of such degraded images by applying the respective physical models to clear natural images, and collect a new hazy image dataset from the Internet. We hope this work draws more interest from the community to the study of classifying degraded images.
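
Synthesizing degraded images from a physical model, as described above, is straightforward for the hazy case: the standard atmospheric scattering model blends the clear scene with airlight according to depth. A minimal per-pixel sketch (the parameter values are illustrative):

```python
import math

def synthesize_haze(clear, depth, beta=1.0, airlight=0.9):
    # Atmospheric scattering model: I = J * t + A * (1 - t),
    # with transmission t = exp(-beta * depth). Deeper pixels receive
    # less scene radiance and more airlight.
    t = math.exp(-beta * depth)
    return clear * t + airlight * (1.0 - t)
```

At zero depth the pixel is unchanged; as depth grows, it converges to the airlight value, which is exactly the washed-out look of haze.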

    Fractional Multiscale Fusion-based De-hazing

    This report presents the results of a proposed multi-scale fusion-based single image de-hazing algorithm, which can also be used for underwater image enhancement. Furthermore, the algorithm was designed for very fast operation and minimal run-time. The proposed scheme is faster than existing algorithms for both de-hazing and underwater image enhancement and is amenable to digital hardware implementation. Results are mostly consistent and good for both categories of images when compared with other algorithms from the literature.
    Comment: 23 pages, 13 figures, 2 tables
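
The core of fusion-based de-hazing is blending several derived versions of the input (e.g. contrast- and color-corrected) under per-pixel weight maps. A single-scale sketch of that blending step (the multi-scale pyramid and the actual weight definitions are omitted; names are illustrative):

```python
def fuse_pixels(versions, weight_maps):
    # versions: list of derived images (each a flat list of intensities);
    # weight_maps: matching per-pixel weights. Normalize the weights at
    # each pixel so they sum to one, then take the weighted average.
    fused = []
    for pix_vals, pix_ws in zip(zip(*versions), zip(*weight_maps)):
        total = sum(pix_ws) or 1.0
        fused.append(sum(v * w for v, w in zip(pix_vals, pix_ws)) / total)
    return fused
```

In the full algorithm this blend is applied per level of a Laplacian pyramid to avoid halo artifacts at weight-map boundaries.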

    Underwater Multi-Robot Convoying using Visual Tracking by Detection

    We present a robust multi-robot convoying approach that relies on visual detection of the leading agent, thus enabling target following in unstructured 3-D environments. Our method is based on the idea of tracking-by-detection, which interleaves efficient model-based object detection with temporal filtering of image-based bounding box estimation. This approach has the important advantage of mitigating tracking drift (i.e. drifting away from the target object), which is a common symptom of model-free trackers and is detrimental to sustained convoying in practice. To illustrate our solution, we collected extensive footage of an underwater robot in ocean settings, and hand-annotated its location in each frame. Based on this dataset, we present an empirical comparison of multiple tracker variants, including the use of several convolutional neural networks, both with and without recurrent connections, as well as frequency-based model-free trackers. We also demonstrate the practicality of this tracking-by-detection strategy in real-world scenarios by successfully controlling a legged underwater robot in five degrees of freedom to follow another robot's independent motion.
    Comment: Accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 201
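
The "temporal filtering of bounding box estimation" step can be illustrated with simple exponential smoothing over per-frame detections. This is a minimal sketch of the idea, not the filter used in the paper:

```python
def smooth_boxes(detections, alpha=0.6):
    # Temporal filtering of per-frame boxes (x1, y1, x2, y2): exponential
    # smoothing damps detector jitter, and a missed frame (None) simply
    # carries the previous estimate forward, bridging brief dropouts.
    state, out = None, []
    for box in detections:
        if box is None:
            out.append(state)          # no detection: coast on last estimate
        elif state is None:
            state = box                # first detection initializes the state
            out.append(state)
        else:
            state = tuple(alpha * b + (1 - alpha) * s
                          for b, s in zip(box, state))
            out.append(state)
    return out
```

Carrying the state through misses is what keeps the follower from losing the leader during momentary occlusions.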

    Tracking Live Fish from Low-Contrast and Low-Frame-Rate Stereo Videos

    Non-extractive fish abundance estimation with the aid of visual analysis has drawn increasing attention. Unstable illumination, ubiquitous noise, and low-frame-rate video capture in the underwater environment, however, make conventional tracking methods unreliable. In this paper, we present a multiple-fish tracking system for low-contrast and low-frame-rate stereo videos using a trawl-based underwater camera system. An automatic fish segmentation algorithm overcomes the low-contrast issues by adopting a histogram backprojection approach on double local-thresholded images to ensure accurate segmentation of the fish shape boundaries. Built upon a reliable feature-based object matching method, a multiple-target tracking algorithm via a modified Viterbi data association is proposed to overcome the poor motion continuity and frequent entrance/exit of fish targets under low-frame-rate scenarios. In addition, a computationally efficient block-matching approach performs successful stereo matching, which enables an automatic fish-body tail compensation that greatly reduces segmentation error and allows for an accurate fish length measurement. Experimental results show that effective and reliable tracking of multiple live fish with underwater stereo cameras is achieved.
    Comment: 14 pages, 14 figures, 6 tables
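
Histogram backprojection, mentioned above as the segmentation workhorse, replaces each pixel with the target histogram's value for that pixel's bin, so regions resembling the target (fish) light up. A one-channel sketch with illustrative bin count:

```python
def backproject(image, target_hist, bins=8):
    # image: flat list of intensities in [0, 1]; target_hist: bin counts
    # built from known target pixels. Each pixel's score is the target
    # histogram's normalized mass at that pixel's bin, so pixels whose
    # intensity is common in the target region score high.
    total = sum(target_hist) or 1.0
    return [target_hist[min(int(p * bins), bins - 1)] / total for p in image]
```

Thresholding the resulting map gives a candidate fish mask even when raw contrast against the background is poor.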

    UID2021: An Underwater Image Dataset for Evaluation of No-reference Quality Assessment Metrics

    Achieving subjective and objective quality assessment of underwater images is of high significance in underwater visual perception and image/video processing. However, the development of underwater image quality assessment (UIQA) is limited by the lack of a comprehensive human subjective user study with a publicly available dataset and a reliable objective UIQA metric. To address this issue, we establish a large-scale underwater image dataset, dubbed UID2021, for evaluating no-reference UIQA metrics. The constructed dataset contains 60 multiply degraded underwater images collected from various sources, covering six common underwater scenes (i.e., bluish, bluish-green, greenish, hazy, low-light, and turbid scenes), together with their corresponding 900 quality-improved versions generated by fifteen state-of-the-art underwater image enhancement and restoration algorithms. Mean opinion scores (MOS) for UID2021 are also obtained using the pair comparison sorting method with 52 observers. Both in-air NR-IQA and underwater-specific algorithms are tested on the constructed dataset to fairly compare their performance and analyze their strengths and weaknesses. Our proposed UID2021 dataset enables one to evaluate NR UIQA algorithms comprehensively and paves the way for further research on UIQA. UID2021 is freely available for research purposes at: https://github.com/Hou-Guojia/UID2021
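
Turning observers' pairwise judgments into per-image scores, as in the MOS collection above, can be sketched with a simple win-rate aggregation. This is an illustrative stand-in for pair-comparison sorting, not the paper's exact procedure:

```python
from collections import Counter

def scores_from_pairs(comparisons):
    # comparisons: list of (winner, loser) judgments from paired viewing.
    # Score each item by its win rate over all comparisons it appeared in;
    # a full pair-comparison sorting method would convert these into ranks.
    wins, seen = Counter(), Counter()
    for winner, loser in comparisons:
        wins[winner] += 1
        seen[winner] += 1
        seen[loser] += 1
    return {item: wins[item] / seen[item] for item in seen}
```

With many observers, averaging such scores (or fitting a Bradley-Terry model) yields a stable subjective ranking of the 900 enhanced versions.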