20,708 research outputs found

    Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions

    Full text link
    Underwater image enhancement is such an important low-level vision task, with many applications, that numerous algorithms have been proposed in recent years. These algorithms, developed upon various assumptions, demonstrate success in different respects using different data sets and metrics. In this work, we set up an undersea image capturing system and construct a large-scale Real-world Underwater Image Enhancement (RUIE) data set divided into three subsets. The three subsets target three challenging aspects of enhancement, i.e., image visibility quality, color casts, and higher-level detection/classification, respectively. We conduct extensive and systematic experiments on RUIE to evaluate the effectiveness and limitations of various algorithms for enhancing visibility and correcting color casts on images with hierarchical categories of degradation. Moreover, underwater image enhancement in practice usually serves as a preprocessing step for mid-level and high-level vision tasks. We thus exploit object detection performance on enhanced images as a new task-specific evaluation criterion. The findings from these evaluations not only confirm what is commonly believed, but also suggest promising solutions and new directions for visibility enhancement, color correction, and object detection on real-world underwater images. Comment: arXiv admin note: text overlap with arXiv:1712.04143 by other authors
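
    The task-driven criterion can be illustrated with a small, hedged sketch: run an off-the-shelf detector on a raw frame and on its enhanced counterpart and compare detection confidence. This is only an illustration of the principle, not the RUIE evaluation protocol; the detector choice, the mean-confidence proxy, and the enhancement step are assumptions made here.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained COCO detector used as a generic stand-in for the task model.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def mean_detection_confidence(pil_image, score_thresh=0.5):
    """Average confidence of detections above a threshold: a crude task-driven score."""
    with torch.no_grad():
        preds = detector([to_tensor(pil_image)])[0]
    scores = preds["scores"][preds["scores"] > score_thresh]
    return scores.mean().item() if scores.numel() else 0.0

# raw_score = mean_detection_confidence(raw_image)        # raw_image / enhanced_image are
# enh_score = mean_detection_confidence(enhanced_image)   # hypothetical inputs under test
```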

    Fractional Multiscale Fusion-based De-hazing

    Full text link
    This report presents the results of a proposed multi-scale fusion-based single image de-hazing algorithm, which can also be used for underwater image enhancement. The algorithm was designed for very fast operation and minimal run-time. The proposed scheme is faster than existing algorithms for both de-hazing and underwater image enhancement and is amenable to digital hardware implementation. Results are mostly consistent and good for both categories of images when compared with other algorithms from the literature. Comment: 23 pages, 13 figures, 2 tables
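
    As a rough illustration of multi-scale fusion (the general family this report builds on), the sketch below blends derived versions of a hazy image with Laplacian pyramids weighted by Gaussian pyramids of per-pixel weight maps. It does not reproduce the report's fractional-order weighting or its speed optimizations; `inputs` (e.g., a white-balanced and a contrast-enhanced copy) and `weights` (same-sized single-channel maps) are assumptions here.

```python
import cv2
import numpy as np

def pyramid_fuse(inputs, weights, levels=4):
    """Blend BGR inputs via Laplacian pyramids weighted by Gaussian pyramids of the weight maps."""
    total = sum(weights) + 1e-6
    weights = [w / total for w in weights]
    fused_pyr = None
    for img, w in zip(inputs, weights):
        # Gaussian pyramid of the (single-channel) weight map.
        gw = [w.astype(np.float32)]
        for _ in range(levels):
            gw.append(cv2.pyrDown(gw[-1]))
        # Laplacian pyramid of the input image.
        g = [img.astype(np.float32)]
        for _ in range(levels):
            g.append(cv2.pyrDown(g[-1]))
        lap = [g[i] - cv2.pyrUp(g[i + 1], dstsize=g[i].shape[1::-1]) for i in range(levels)]
        lap.append(g[-1])
        contrib = [l * wl[..., None] for l, wl in zip(lap, gw)]
        fused_pyr = contrib if fused_pyr is None else [f + c for f, c in zip(fused_pyr, contrib)]
    # Collapse the fused pyramid back to an image.
    out = fused_pyr[-1]
    for lvl in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused_pyr[lvl].shape[1::-1]) + fused_pyr[lvl]
    return np.clip(out, 0, 255).astype(np.uint8)
```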

    An Image Based Technique for Enhancement of Underwater Images

    Full text link
    Underwater images usually suffer from non-uniform lighting, low contrast, blur, and diminished colors. In this paper, we propose an image-based preprocessing technique to enhance the quality of underwater images. The proposed technique comprises a combination of four filters: homomorphic filtering, wavelet denoising, bilateral filtering, and contrast equalization. These filters are applied sequentially to the degraded underwater images. The literature survey reveals that image-based preprocessing algorithms use standard filtering techniques in various combinations and typically use the anisotropic filter for smoothing. The main drawback of the anisotropic filter is that it is iterative in nature and its computation time is high compared to the bilateral filter. In the proposed technique, in addition to the other three filters, we therefore employ a bilateral filter for smoothing. The experimentation is carried out in two stages. In the first stage, we conduct various experiments on captured images and estimate optimal parameters for the bilateral filter; similarly, an optimal filter bank and wavelet shrinkage function are estimated for wavelet denoising. In the second stage, we evaluate the proposed technique using these estimated parameters, the optimal filter bank, and the optimal wavelet shrinkage function. We evaluate the technique using quantitative criteria such as the gradient magnitude histogram and Peak Signal-to-Noise Ratio (PSNR), and the results are further evaluated qualitatively based on edge detection. The proposed technique enhances the quality of underwater images and can be employed prior to applying computer vision techniques.
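
    A minimal sketch of the four-stage pipeline described above is given below, using standard OpenCV and PyWavelets calls. It operates on the luminance channel only, and the cutoff, threshold, and filter parameters are illustrative defaults rather than the optimal values estimated in the paper.

```python
import cv2
import numpy as np
import pywt

def homomorphic(gray, cutoff=30.0, gamma_l=0.5, gamma_h=1.5):
    """Suppress non-uniform illumination via high-frequency emphasis in the log domain."""
    log_img = np.log1p(gray.astype(np.float32))
    spec = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = gray.shape
    y, x = np.ogrid[:rows, :cols]
    d2 = (y - rows / 2.0) ** 2 + (x - cols / 2.0) ** 2
    h = (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * cutoff ** 2))) + gamma_l
    return np.expm1(np.fft.ifft2(np.fft.ifftshift(spec * h)).real)

def wavelet_denoise(img, wavelet="db4", sigma=5.0):
    """Soft-threshold the detail coefficients (a simple shrinkage rule)."""
    coeffs = pywt.wavedec2(img, wavelet, level=2)
    thresh = sigma * np.sqrt(2.0 * np.log(img.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in level) for level in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)[: img.shape[0], : img.shape[1]]

def enhance(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    x = homomorphic(gray)                                          # 1. homomorphic filtering
    x = wavelet_denoise(x)                                         # 2. wavelet denoising
    x = cv2.normalize(x, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    x = cv2.bilateralFilter(x, d=9, sigmaColor=75, sigmaSpace=75)  # 3. bilateral smoothing
    return cv2.equalizeHist(x)                                     # 4. contrast equalization
```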

    Fast Underwater Image Enhancement for Improved Visual Perception

    Full text link
    In this paper, we present a conditional generative adversarial network-based model for real-time underwater image enhancement. To supervise the adversarial training, we formulate an objective function that evaluates perceptual image quality based on its global content, color, local texture, and style information. We also present EUVP, a large-scale dataset of paired and unpaired underwater images (of `poor' and `good' quality) captured using seven different cameras under various visibility conditions during oceanic explorations and human-robot collaborative experiments. In addition, we perform several qualitative and quantitative evaluations which suggest that the proposed model can learn to enhance underwater image quality from both paired and unpaired training. More importantly, the enhanced images yield improved performance of standard models for underwater object detection, human pose estimation, and saliency prediction. These results validate that the model is suitable for real-time preprocessing in the autonomy pipeline of visually-guided underwater robots. The model and associated training pipelines are available at https://github.com/xahidbuffon/funie-gan
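
    The multi-term objective can be sketched, in a hedged way, as a weighted sum of a color/content term, a perceptual content term, and a style (Gram matrix) term computed on VGG features; an adversarial term from the cGAN discriminator would be added during training. The layer choice, the weights, and the assumption of inputs scaled to [0, 1] are illustrative, not the exact FUnIE-GAN loss.

```python
import torch
import torch.nn.functional as F
import torchvision

# Frozen VGG-19 feature extractor for the perceptual and style terms.
vgg = torchvision.models.vgg19(weights="DEFAULT").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def enhancement_loss(generated, target, w_color=1.0, w_content=0.05, w_style=10.0):
    color = F.l1_loss(generated, target)          # global color/content agreement
    fg, ft = vgg(generated), vgg(target)
    content = F.mse_loss(fg, ft)                  # perceptual content term
    style = F.mse_loss(gram(fg), gram(ft))        # texture/style statistics
    return w_color * color + w_content * content + w_style * style
```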

    Dense Haze: A benchmark for image dehazing with dense-haze and haze-free images

    Full text link
    Single image dehazing is an ill-posed problem that has recently drawn considerable attention. Despite the significant increase in interest shown for dehazing over the past few years, the validation of dehazing methods remains largely unsatisfactory, due to the lack of pairs of real hazy and corresponding haze-free reference images. To address this limitation, we introduce Dense-Haze, a novel dehazing dataset. Characterized by dense and homogeneous hazy scenes, Dense-Haze contains 33 pairs of real hazy and corresponding haze-free images of various outdoor scenes. The hazy scenes were recorded by introducing real haze generated by professional haze machines. The hazy and haze-free corresponding scenes contain the same visual content captured under the same illumination parameters. The Dense-Haze dataset aims to significantly push the state of the art in single-image dehazing by promoting robust methods for real and varied hazy scenes. We also provide a comprehensive qualitative and quantitative evaluation of state-of-the-art single image dehazing techniques based on the Dense-Haze dataset. Not surprisingly, our study reveals that existing dehazing techniques perform poorly on dense homogeneous hazy scenes and that there is still much room for improvement. Comment: 5 pages, 2 figures
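
    Such a paired dataset enables straightforward full-reference evaluation; a minimal sketch using PSNR and SSIM from scikit-image is shown below. The file names are hypothetical and the official Dense-Haze layout may differ.

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(dehazed_path, reference_path):
    """Full-reference scores between a dehazed result and its haze-free ground truth."""
    dehazed = cv2.imread(dehazed_path)
    reference = cv2.imread(reference_path)
    psnr = peak_signal_noise_ratio(reference, dehazed)
    ssim = structural_similarity(reference, dehazed, channel_axis=2)
    return psnr, ssim

# psnr, ssim = evaluate_pair("01_dehazed.png", "01_GT.png")   # hypothetical file names
```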

    Underwater Single Image Color Restoration Using Haze-Lines and a New Quantitative Dataset

    Full text link
    Underwater images suffer from color distortion and low contrast, because light is attenuated while it propagates through water. Attenuation under water varies with wavelength, unlike terrestrial images where attenuation is assumed to be spectrally uniform. The attenuation depends both on the water body and the 3D structure of the scene, making color restoration difficult. Unlike existing single underwater image enhancement techniques, our method takes into account multiple spectral profiles of different water types. By estimating just two additional global parameters, the attenuation ratios of the blue-red and blue-green color channels, the problem is reduced to single-image dehazing, where all color channels have the same attenuation coefficients. Since the water type is unknown, we evaluate different parameters out of an existing library of water types. Each type leads to a different restored image, and the best result is automatically chosen based on color distribution. We collected a dataset of images taken in different locations with varying water properties, showing color charts in the scenes. Moreover, to obtain ground truth, the 3D structure of the scene was calculated based on stereo imaging. This dataset enables a quantitative evaluation of restoration algorithms on natural images and shows the advantage of our method.
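
    The selection step can be sketched as a loop over candidate water types: each candidate attenuation-ratio pair yields one restored image, and the result with the most balanced color distribution is kept. In the sketch below, the ratio values and the gray-world style score are illustrative stand-ins for the paper's water-type library and color-distribution criterion, and restore() is a placeholder for the haze-lines based restoration itself.

```python
import numpy as np

# Illustrative (blue-red, blue-green) attenuation-ratio candidates, not the actual library.
WATER_TYPES = [(1.2, 1.1), (1.8, 1.3), (2.5, 1.6)]

def gray_world_deviation(img):
    """Lower is better: spread of the per-channel means (a crude color-balance score)."""
    means = img.reshape(-1, 3).mean(axis=0)
    return float(np.std(means))

def select_best_restoration(img, restore):
    """restore(img, br, bg) is a placeholder for the per-water-type restoration."""
    candidates = [restore(img, br, bg) for br, bg in WATER_TYPES]
    scores = [gray_world_deviation(c) for c in candidates]
    return candidates[int(np.argmin(scores))]
```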

    Night Time Haze and Glow Removal using Deep Dilated Convolutional Network

    Full text link
    In this paper, we address the single image haze removal problem in a nighttime scene. Night haze removal is a severely ill-posed problem, especially due to the presence of various visible light sources with varying colors and non-uniform illumination. These light sources have different shapes and introduce noticeable glow in night scenes. To address these effects, we introduce a deep learning based DeGlow-DeHaze iterative architecture which accounts for varying color illumination and glow. First, our convolutional neural network (CNN) based DeGlow model removes the glow effect significantly; on top of it, a separate DeHaze network is included to remove the haze effect. To train our recurrent network, hazy images and the corresponding transmission maps are synthesized from the NYU depth dataset, from which high-quality haze-free images are subsequently restored. The experimental results demonstrate that our hybrid CNN model outperforms other state-of-the-art methods in terms of computation speed and image quality. We also show the effectiveness of our model on a number of real images and compare our results with existing night haze heuristic models. Comment: 13 pages, 10 figures, 2 tables
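
    Purely as a structural sketch of the two-stage idea (glow removal followed by haze removal), the PyTorch snippet below chains two small dilated-convolution networks; the layer sizes, the simple airlight estimate, and the residual glow subtraction are placeholders, not the architecture from the paper.

```python
import torch.nn as nn

def dilated_block(in_ch, out_ch, dilation):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True),
    )

class DeGlowDeHaze(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: estimate a glow layer.  Stage 2: estimate a transmission map.
        self.deglow = nn.Sequential(dilated_block(3, 32, 1), dilated_block(32, 32, 2),
                                    nn.Conv2d(32, 3, 3, padding=1))
        self.dehaze = nn.Sequential(dilated_block(3, 32, 1), dilated_block(32, 32, 4),
                                    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, hazy):
        glow_free = hazy - self.deglow(hazy)                  # subtract estimated glow
        transmission = self.dehaze(glow_free)                 # per-pixel transmission map
        airlight = glow_free.flatten(2).max(dim=2).values.view(-1, 3, 1, 1)  # crude airlight
        return (glow_free - airlight) / transmission.clamp(min=0.1) + airlight
```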

    Fast Single Image Dehazing via Multilevel Wavelet Transform based Optimization

    Full text link
    The quality of images captured in outdoor environments can be affected by poor weather conditions such as fog, dust, and atmospheric scattering of other particles. This problem can bring extra challenges to high-level computer vision tasks like image segmentation and object detection. However, previous studies on image dehazing suffer from a huge computational workload and from corruption of the original image, such as over-saturation and halos. In this paper, we present a novel image dehazing approach based on the optical model for haze images and regularized optimization. Specifically, we convert the non-convex, bilinear problem concerning the unknown haze-free image and light transmission distribution into a convex, linear optimization problem by estimating the atmospheric light constant. Our method is further accelerated by introducing a multilevel Haar wavelet transform: the optimization is applied only to the low-frequency sub-band of the decomposed image. This dimension reduction significantly improves the processing speed of our method and exhibits the potential for real-time applications. Experimental results show that our approach outperforms state-of-the-art dehazing algorithms in terms of both image reconstruction quality and computational efficiency. For implementation details, source code can be publicly accessed via http://github.com/JiaxiHe/Image-and-Video-Dehazing. Comment: 23 pages, 13 figures
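
    The acceleration idea can be sketched as follows: estimate the atmospheric light once, compute the transmission only on the low-frequency Haar sub-band, then invert the optical model I = J*t + A*(1 - t). In the sketch below, a simple dark-channel estimate stands in for the paper's regularized convex optimization, and all parameter values are illustrative.

```python
import cv2
import numpy as np
import pywt

def dehaze_lowfreq(bgr, levels=2, omega=0.95, t_min=0.1):
    img = bgr.astype(np.float32) / 255.0
    airlight = img.reshape(-1, 3).max(axis=0)                     # crude atmospheric light
    # Haar decomposition per channel; keep only the approximation (LL) coefficients.
    lows = [pywt.wavedec2(img[..., c], "haar", level=levels)[0] for c in range(3)]
    low = np.stack(lows, axis=-1) / float(2 ** levels)            # undo 2D Haar scaling
    dark = cv2.erode((low / airlight).min(axis=-1).astype(np.float32),
                     np.ones((7, 7), np.uint8))
    t_low = 1.0 - omega * dark                                    # transmission on the LL band
    t = cv2.resize(t_low, (img.shape[1], img.shape[0]))           # upsample to full resolution
    t = np.clip(t, t_min, 1.0)[..., None]
    return np.clip((img - airlight) / t + airlight, 0.0, 1.0)     # invert I = J*t + A*(1-t)
```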

    Single Image Restoration for Participating Media Based on Prior Fusion

    Full text link
    This paper describes a method to restore degraded images captured in participating media such as fog, turbid water, or sand storms. Unlike related work that deals with only a single medium, we obtain generality by using an image formation model and a fusion of new image priors. The model considers the image color variation produced by the medium. The proposed restoration method is based on the fusion of these priors and is supported by statistics collected on images acquired in both non-participating and participating media. The key of the method is to fuse two complementary measures: local contrast and color data. The results obtained on underwater and foggy images demonstrate the capabilities of the proposed method. Moreover, we evaluated our method using a special dataset for which a ground-truth image is available. Comment: This paper is under consideration at Pattern Recognition Letters
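
    The fusion of complementary measures can be illustrated with a short sketch that combines a local-contrast map and a color (saturation) map into a single transmission-like estimate. Both measures and the equal-weight fusion rule are simplified stand-ins for the priors and statistics developed in the paper.

```python
import cv2
import numpy as np

def fused_prior(bgr, win=15):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    # Local-contrast measure: standard deviation in a sliding window.
    mean = cv2.blur(gray, (win, win))
    contrast = np.sqrt(np.maximum(cv2.blur(gray * gray, (win, win)) - mean ** 2, 0))
    # Color measure: saturation, which drops as scattering by the medium increases.
    saturation = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)[..., 1].astype(np.float32) / 255.0
    # Fuse the normalized measures; equal weights are purely illustrative.
    return 0.5 * contrast / (contrast.max() + 1e-6) + 0.5 * saturation
```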

    Analysis of Probabilistic multi-scale fractional order fusion-based de-hazing algorithm

    Full text link
    In this report, a de-hazing algorithm based on probability and multi-scale fractional order-based fusion is proposed. The proposed scheme improves on a previously implemented multi-scale fractional order-based fusion by augmenting its local contrast and edge sharpening features. It also brightens de-hazed images while avoiding over-enhancement of sky regions. The results of the proposed algorithm are analyzed and compared with existing methods from the literature and indicate better performance in most cases. Comment: 22 pages, 8 figures, journal preprint