431 research outputs found

    Fractional Multiscale Fusion-based De-hazing

    Full text link
    This report presents the results of a proposed multi-scale fusion-based single image de-hazing algorithm, which can also be used for underwater image enhancement. Furthermore, the algorithm was designed for very fast operation and minimal run-time. The proposed scheme is faster than existing algorithms for both de-hazing and underwater image enhancement and is amenable to digital hardware implementation. Results are mostly consistent and good for both categories of images when compared with other algorithms from the literature. Comment: 23 pages, 13 figures, 2 tables

    Haze Visibility Enhancement: A Survey and Quantitative Benchmarking

    Full text link
    This paper provides a comprehensive survey of methods dealing with visibility enhancement of images taken in hazy or foggy scenes. The survey begins by discussing the optical models of atmospheric scattering media and image formation. This is followed by a survey of existing methods, which are grouped into multiple-image methods, polarizing-filter-based methods, methods with known depth, and single-image methods. We also provide a benchmark of a number of well-known single-image methods, based on a recent dataset provided by Fattal and our newly generated scattering media dataset that contains ground truth images for quantitative evaluation. To our knowledge, this is the first benchmark using numerical metrics to evaluate dehazing techniques. This benchmark allows us to objectively compare the results of existing methods and to better identify the strengths and limitations of each method.
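
    The optical model referred to above is the standard atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)), with transmission t(x) = exp(-beta d(x)). A minimal NumPy sketch of how such a model is typically used to synthesize hazy images for benchmarking (the function and parameter names are illustrative, not taken from the paper):

    import numpy as np

    def synthesize_haze(clear, depth, A=0.9, beta=1.0):
        # Transmission falls off exponentially with scene depth.
        t = np.exp(-beta * depth)[..., np.newaxis]   # shape (H, W, 1)
        # I = J*t + A*(1 - t): direct attenuation plus airlight.
        return clear * t + A * (1.0 - t)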

    Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions

    Full text link
    Underwater image enhancement is an important low-level vision task with many applications, and numerous algorithms have been proposed in recent years. These algorithms, developed upon various assumptions, demonstrate successes from various aspects using different data sets and different metrics. In this work, we set up an undersea image capturing system and construct a large-scale Real-world Underwater Image Enhancement (RUIE) data set divided into three subsets. The three subsets target three challenging aspects of enhancement, i.e., image visibility quality, color casts, and higher-level detection/classification, respectively. We conduct extensive and systematic experiments on RUIE to evaluate the effectiveness and limitations of various algorithms for enhancing visibility and correcting color casts on images with hierarchical categories of degradation. Moreover, underwater image enhancement in practice usually serves as a preprocessing step for mid-level and high-level vision tasks. We thus exploit the object detection performance on enhanced images as a new task-specific evaluation criterion. The findings from these evaluations not only confirm what is commonly believed, but also suggest promising solutions and new directions for visibility enhancement, color correction, and object detection on real-world underwater images. Comment: arXiv admin note: text overlap with arXiv:1712.04143 by other authors

    Single Image Dehazing through Improved Atmospheric Light Estimation

    Full text link
    Image contrast enhancement for outdoor vision is important for smart car auxiliary transport systems. Video frames captured in poor weather conditions are often characterized by poor visibility. Most image dehazing algorithms rely on a hard-threshold assumption or user input to estimate atmospheric light. However, the brightest pixels sometimes belong to objects such as car lights or streetlights, especially in smart car auxiliary transport systems, so simply using a hard threshold may cause a wrong estimation. In this paper, we propose a single optimized image dehazing method that estimates atmospheric light efficiently and removes haze through the estimation of a semi-globally adaptive filter. The enhanced images are characterized by little noise and good exposure in dark regions. The textures and edges of the processed images are also enhanced significantly. Comment: Multimedia Tools and Applications (2015)
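
    As a point of reference, the hard-threshold style of atmospheric light estimation criticized above is often implemented roughly as below (a generic DCP-style baseline, not the authors' semi-globally adaptive method; names are illustrative):

    import numpy as np

    def estimate_atmospheric_light(img, dark_channel, top_fraction=0.001):
        # Candidate pixels: top 0.1% of the dark channel (most haze-opaque).
        n = max(1, int(dark_channel.size * top_fraction))
        idx = np.argsort(dark_channel.ravel())[-n:]
        candidates = img.reshape(-1, 3)[idx]
        # Hard choice of the brightest candidate; car lights or streetlights
        # among the candidates will corrupt this estimate.
        return candidates[candidates.sum(axis=1).argmax()]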

    O-HAZE: a dehazing benchmark with real hazy and haze-free outdoor images

    Full text link
    Haze removal or dehazing is a challenging ill-posed problem that has drawn significant attention in the last few years. Despite this growing interest, the scientific community still lacks a reference dataset to evaluate objectively and quantitatively the performance of proposed dehazing methods. The few datasets currently considered, both for assessment and for training of learning-based dehazing techniques, rely exclusively on synthetic hazy images. To address this limitation, we introduce the first outdoor scenes database (named O-HAZE) composed of pairs of real hazy and corresponding haze-free images. In practice, hazy images have been captured in the presence of real haze, generated by professional haze machines, and O-HAZE contains 45 different outdoor scenes depicting the same visual content recorded in haze-free and hazy conditions, under the same illumination parameters. To illustrate its usefulness, O-HAZE is used to compare a representative set of state-of-the-art dehazing techniques, using traditional image quality metrics such as PSNR, SSIM and CIEDE2000. This reveals the limitations of current techniques and questions some of their underlying assumptions. Comment: arXiv admin note: text overlap with arXiv:1804.0509
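
    For reference, the full-reference evaluation described here (PSNR and SSIM of a dehazed result against the haze-free ground truth) can be sketched with scikit-image; the file paths below are placeholders, not the actual O-HAZE naming:

    import numpy as np
    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    dehazed = io.imread("dehazed_result.png").astype(np.float64) / 255.0
    ground_truth = io.imread("haze_free_reference.png").astype(np.float64) / 255.0

    psnr = peak_signal_noise_ratio(ground_truth, dehazed, data_range=1.0)
    # channel_axis requires scikit-image >= 0.19 (older versions use multichannel=True).
    ssim = structural_similarity(ground_truth, dehazed, channel_axis=-1, data_range=1.0)
    print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")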

    Image Dehazing using Bilinear Composition Loss Function

    Full text link
    In this paper, we introduce a bilinear composition loss function to address the problem of image dehazing. Previous methods in image dehazing use a two-stage approach which first estimates the transmission map and then estimates the clear image. The drawback of a two-stage method is that it tends to boost local image artifacts such as noise, aliasing and blocking. This is especially the case for heavy-haze images captured with a low-quality device. Our method is based on convolutional neural networks. Unique to our method is the bilinear composition loss function, which directly models the correlations between the transmission map, the clear image, and the atmospheric light. This allows errors to be back-propagated to each sub-network concurrently, while maintaining the composition constraint to avoid overfitting of each sub-network. We evaluate the effectiveness of our proposed method using both synthetic and real-world examples. Extensive experiments show that our method outperforms state-of-the-art methods, especially for haze images with severe noise levels and compression.
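
    The composition constraint behind such a loss can be illustrated as follows: the predicted clear image, transmission map and atmospheric light should re-compose the observed hazy input via I = J*t + A*(1 - t). A hedged PyTorch sketch, not the paper's exact formulation:

    import torch
    import torch.nn.functional as F

    def composition_loss(hazy, J_pred, t_pred, A_pred):
        # hazy, J_pred: (N, 3, H, W); t_pred: (N, 1, H, W); A_pred: (N, 3)
        t = t_pred.clamp(1e-3, 1.0)
        A = A_pred.view(-1, 3, 1, 1)
        recomposed = J_pred * t + A * (1.0 - t)   # I = J*t + A*(1 - t)
        # Penalize deviation from the observed hazy image; gradients flow back
        # into the J, t and A sub-networks simultaneously.
        return F.l1_loss(recomposed, hazy)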

    Analysis of Probabilistic multi-scale fractional order fusion-based de-hazing algorithm

    Full text link
    In this report, a de-hazing algorithm based on probability and multi-scale fractional order-based fusion is proposed. The proposed scheme improves on a previously implemented multi-scale fractional order-based fusion by augmenting its local contrast and edge sharpening features. It also brightens de-hazed images, while avoiding over-enhancement of sky regions. The results of the proposed algorithm are analyzed and compared with existing methods from the literature and indicate better performance in most cases. Comment: 22 pages, 8 figures, journal preprint

    Gated Fusion Network for Single Image Dehazing

    Full text link
    In this paper, we propose an efficient algorithm to directly restore a clear image from a hazy input. The proposed algorithm hinges on an end-to-end trainable neural network that consists of an encoder and a decoder. The encoder is exploited to capture the context of the derived input images, while the decoder is employed to estimate the contribution of each input to the final dehazed result using the representations learned by the encoder. The constructed network adopts a novel fusion-based strategy which derives three inputs from an original hazy image by applying White Balance (WB), Contrast Enhancing (CE), and Gamma Correction (GC). We compute pixel-wise confidence maps based on the appearance differences between these derived inputs to blend their information and preserve the regions with good visibility. The final dehazed image is produced by gating the important features of the derived inputs. To train the network, we introduce a multi-scale approach so that halo artifacts are avoided. Extensive experimental results on both synthetic and real-world images demonstrate that the proposed algorithm performs favorably against the state-of-the-art algorithms.
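
    The three derived inputs and the gating step can be approximated as below (a rough NumPy illustration of the fusion idea; in the paper the confidence maps come from the trained encoder-decoder, and the exact WB/CE/GC operators may differ):

    import numpy as np

    def derived_inputs(hazy):
        # Gray-world white balance, a simple contrast stretch, and gamma correction
        # as stand-ins for the WB / CE / GC inputs described in the abstract.
        wb = np.clip(hazy * (hazy.mean() / (hazy.mean(axis=(0, 1)) + 1e-6)), 0, 1)
        ce = np.clip((hazy - hazy.mean()) * 2.0 + hazy.mean(), 0, 1)
        gc = np.clip(hazy, 0, 1) ** 2.5
        return [wb, ce, gc]

    def gated_fusion(inputs, confidence_maps):
        # Normalize per-pixel confidences so they sum to one, then blend the inputs.
        conf = np.stack(confidence_maps)
        conf = conf / (conf.sum(axis=0, keepdims=True) + 1e-6)
        return sum(c[..., np.newaxis] * x for c, x in zip(conf, inputs))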

    Joint Transmission Map Estimation and Dehazing using Deep Networks

    Full text link
    Single image haze removal is an extremely challenging problem due to its inherent ill-posed nature. Several prior-based and learning-based methods have been proposed in the literature to solve this problem and they have achieved superior results. However, most of the existing methods assume a constant atmospheric light model and tend to follow a two-step procedure involving prior-based methods for estimating the transmission map followed by calculation of the dehazed image using the closed-form solution. In this paper, we relax the constant atmospheric light assumption and propose a novel unified single image dehazing network that jointly estimates the transmission map and performs dehazing. In other words, our new approach provides an end-to-end learning framework, where the inherent transmission map and dehazed result are learned directly from the loss function. Extensive experiments on synthetic and real datasets with challenging hazy images demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Comment: This paper has been accepted in IEEE-TCSVT
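
    The closed-form solution mentioned for the two-step pipelines is the standard inversion of the scattering model, J = (I - A)/t + A. A minimal sketch, clipping t from below to avoid amplifying noise (a common convention, not specific to this paper):

    import numpy as np

    def recover_clear_image(hazy, t, A, t_min=0.1):
        # J = (I - A) / max(t, t_min) + A, inverting I = J*t + A*(1 - t).
        t = np.clip(t, t_min, 1.0)[..., np.newaxis]
        return np.clip((hazy - A) / t + A, 0.0, 1.0)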

    Multiple Linear Regression Haze-removal Model Based on Dark Channel Prior

    Full text link
    Dark Channel Prior (DCP) is a widely recognized traditional dehazing algorithm. However, it may fail in bright regions, and the restored image is often darker than the hazy input. In this paper, we propose an effective method to optimize DCP. We build a multiple linear regression haze-removal model based on the DCP atmospheric scattering model and train this model with the RESIDE dataset, which aims to reduce the unexpected errors caused by the rough estimations of the transmission map t(x) and the atmospheric light A. The RESIDE dataset provides enough synthetic hazy images and their corresponding ground-truth images for training and testing. We compare the performance of different dehazing algorithms in terms of two important full-reference metrics, the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). The experimental results show that our model achieves the highest SSIM value, and its PSNR value is also higher than that of most state-of-the-art dehazing algorithms. Our results also overcome the weakness of DCP on real-world hazy images. Comment: IEEE CPS (CSCI 2018 Int'l Conference)
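
    The dark channel that this model builds on is the per-pixel minimum over color channels followed by a local minimum filter; a small sketch (a patch size of 15 is the value commonly used with DCP, not necessarily the paper's choice):

    import numpy as np
    from scipy.ndimage import minimum_filter

    def dark_channel(img, patch=15):
        # Minimum over color channels, then a patch-wise minimum filter.
        return minimum_filter(img.min(axis=2), size=patch)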