Multiple Linear Regression Haze-removal Model Based on Dark Channel Prior
Dark Channel Prior (DCP) is a widely recognized traditional dehazing
algorithm. However, it may fail in bright regions, and the restored image
tends to be darker than the hazy input. In this paper, we propose an
effective method to optimize DCP. We build a multiple linear regression
haze-removal model based on the DCP atmospheric scattering model and train it
on the RESIDE dataset, with the aim of reducing the unexpected errors caused
by the rough estimations of the transmission map t(x) and the atmospheric
light A. The RESIDE dataset provides enough synthetic hazy images and their
corresponding ground-truth images for training and testing. We compare the
performances of different
dehazing algorithms in terms of two important full-reference metrics, the
peak-signal-to-noise ratio (PSNR) as well as the structural similarity index
measure (SSIM). The experimental results show that our model achieves the
highest SSIM value, and its PSNR value is also higher than that of most
state-of-the-art dehazing algorithms. Our results also overcome the weakness
of DCP on real-world hazy images.
Comment: IEEE CPS (CSCI 2018 Int'l Conference)
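For readers unfamiliar with the prior itself, the dark channel at the core of DCP is the minimum intensity over the color channels and a local patch. A minimal pure-Python sketch (the nested-list image layout and default patch size are illustrative assumptions, not the paper's implementation):

```python
def dark_channel(img, patch_size=3):
    """Dark channel: min over a local patch of the per-pixel channel minimum.

    `img` is a nested list with img[y][x] = (r, g, b), values in [0, 1].
    """
    h, w = len(img), len(img[0])
    # Per-pixel minimum across the three color channels.
    min_rgb = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    r = patch_size // 2
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Minimum over the patch, clipped at the image borders.
            dark[y][x] = min(min_rgb[j][i]
                             for j in range(max(0, y - r), min(h, y + r + 1))
                             for i in range(max(0, x - r), min(w, x + r + 1)))
    return dark
```

The prior observes that on haze-free outdoor images this quantity is close to zero almost everywhere; haze lifts it toward the atmospheric light, which is what makes it usable for estimating t(x).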
Advanced Multiple Linear Regression Based Dark Channel Prior Applied on Dehazing Image and Generating Synthetic Haze
Haze removal is an extremely challenging task, and object detection in the
hazy environment has recently gained much attention due to the popularity of
autonomous driving and traffic surveillance. In this work, the authors propose
a multiple linear regression haze removal model based on a widely adopted
dehazing algorithm named Dark Channel Prior. Training this model with a
synthetic hazy dataset, the proposed model can reduce the unanticipated
deviations generated from the rough estimations of transmission map and
atmospheric light in Dark Channel Prior. To increase object detection accuracy
in the hazy environment, the authors further present an algorithm to build a
synthetic hazy COCO training dataset by adding artificial haze to the
MS COCO training dataset. The experimental results demonstrate that the
proposed model obtains higher image quality and shares more similarity with
ground truth images than most conventional pixel-based dehazing algorithms and
neural network based haze-removal models. The authors also evaluate the mean
average precision of Mask R-CNN when training the network with the synthetic
hazy COCO training dataset and when preprocessing the hazy test dataset by
removing the haze with the proposed dehazing model. It turns out that both
approaches can increase object detection accuracy significantly and outperform
most existing object detection models on hazy images.
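Synthetic haze of the kind described above is typically produced with the standard atmospheric scattering model, I(x) = J(x)t(x) + A(1 - t(x)) with t(x) = exp(-beta * d(x)). A minimal sketch under those assumptions; the depth map, scalar airlight, and beta are illustrative, and the authors' exact generation procedure may differ:

```python
import math

def add_haze(clear, depth, airlight=1.0, beta=1.0):
    """Render a hazy image from a clear one via the scattering model.

    `clear[y][x]` is an (r, g, b) tuple in [0, 1]; `depth[y][x]` is scene
    depth; `airlight` is a scalar A applied to all channels (an assumption).
    """
    h, w = len(clear), len(clear[0])
    hazy = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            t = math.exp(-beta * depth[y][x])  # transmission from depth
            hazy[y][x] = tuple(c * t + airlight * (1.0 - t)
                               for c in clear[y][x])
    return hazy
```

At zero depth a pixel is unchanged; as depth grows, every pixel fades toward the airlight, which is exactly the look of dense haze.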
Joint Transmission Map Estimation and Dehazing using Deep Networks
Single image haze removal is an extremely challenging problem due to its
inherent ill-posed nature. Several prior-based and learning-based methods have
been proposed in the literature to solve this problem and they have achieved
superior results. However, most of the existing methods assume a constant
atmospheric light model and tend to follow a two-step procedure: prior-based
methods estimate the transmission map, and the dehazed image is then computed
using the closed-form solution. In this paper, we relax the
constant atmospheric light assumption and propose a novel unified single image
dehazing network that jointly estimates the transmission map and performs
dehazing. In other words, our new approach provides an end-to-end learning
framework, where the inherent transmission map and dehazed result are learned
directly from the loss function. Extensive experiments on synthetic and real
datasets with challenging hazy images demonstrate that the proposed method
achieves significant improvements over the state-of-the-art methods.
Comment: This paper has been accepted in IEEE TCSVT.
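The closed-form second step referred to above inverts the atmospheric scattering model: given estimates of the transmission t(x) and airlight A, the scene radiance is J(x) = (I(x) - A) / max(t(x), t0) + A. A hedged sketch; the scalar airlight and the t0 floor (which prevents noise amplification where t is near zero) are conventional choices, not specifics of this paper:

```python
def recover_scene(hazy, t, airlight=1.0, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover J, pixel by pixel.

    `hazy[y][x]` is an (r, g, b) tuple; `t[y][x]` is the estimated
    transmission; `airlight` is a scalar A (an assumption for simplicity).
    """
    h, w = len(hazy), len(hazy[0])
    return [[tuple((c - airlight) / max(t[y][x], t_min) + airlight
                   for c in hazy[y][x])
             for x in range(w)]
            for y in range(h)]
```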
Does Haze Removal Help CNN-based Image Classification?
Hazy images are common in real scenarios and many dehazing methods have been
developed to automatically remove the haze from images. Typically, the goal of
image dehazing is to produce clearer images from which human vision can better
identify the object and structural details present in the images. When the
ground-truth haze-free image is available for a hazy image, quantitative
evaluation of image dehazing is usually based on objective metrics, such as
Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). However, in
many applications, large-scale images are collected not for visual examination
by humans. Instead, they are used for many high-level vision tasks, such as
automatic classification, recognition and categorization. One fundamental
problem here is whether various dehazing methods can produce clearer images
that can help improve the performance of the high-level tasks. In this paper,
we empirically study this problem in the important task of image classification
by using both synthetic and real hazy image datasets. From the experimental
results, we find that existing image-dehazing methods do not substantially
improve image-classification performance and sometimes even reduce it.
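Of the two objective metrics named above, PSNR reduces to a one-line formula, 10 * log10(MAX^2 / MSE). A minimal sketch over grayscale images stored as nested lists (an illustrative layout; real evaluations usually rely on library implementations):

```python
import math

def psnr(ref, img, max_val=1.0):
    """Peak Signal-to-Noise Ratio between two same-sized grayscale images."""
    n, se = 0, 0.0
    for row_ref, row_img in zip(ref, img):
        for a, b in zip(row_ref, row_img):
            se += (a - b) ** 2
            n += 1
    mse = se / n  # mean squared error
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher is better: halving the per-pixel error raises PSNR by about 6 dB, which is why small restoration gains show up clearly in this metric.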
O-HAZE: a dehazing benchmark with real hazy and haze-free outdoor images
Haze removal or dehazing is a challenging ill-posed problem that has drawn a
significant attention in the last few years. Despite this growing interest, the
scientific community is still lacking a reference dataset to evaluate
objectively and quantitatively the performance of proposed dehazing methods.
The few datasets that are currently considered, both for assessment and
training of learning-based dehazing techniques, exclusively rely on synthetic
hazy images. To address this limitation, we introduce the first outdoor scenes
database (named O-HAZE) composed of pairs of real hazy and corresponding
haze-free images. In practice, the hazy images have been captured in the
presence of real haze generated by professional haze machines, and O-HAZE
contains 45 different outdoor scenes depicting the same visual content recorded
in haze-free and hazy conditions, under the same illumination parameters. To
illustrate its usefulness, O-HAZE is used to compare a representative set of
state-of-the-art dehazing techniques, using traditional image quality metrics
such as PSNR, SSIM and CIEDE2000. This reveals the limitations of current
techniques, and questions some of their underlying assumptions.
Comment: arXiv admin note: text overlap with arXiv:1804.0509
Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing
In this paper, we present an end-to-end network, called Cycle-Dehaze, for the
single image dehazing problem, which does not require pairs of hazy and
corresponding ground truth images for training. That is, we train the network
by feeding clean and hazy images in an unpaired manner. Moreover, the proposed
approach does not rely on estimation of the atmospheric scattering model
parameters. Our method enhances CycleGAN formulation by combining
cycle-consistency and perceptual losses in order to improve the quality of
textural information recovery and generate visually better haze-free images.
Typically, deep learning models for dehazing take low resolution images as
input and produce low resolution outputs. However, in the NTIRE 2018 challenge
on single image dehazing, high resolution images were provided. Therefore, we
apply bicubic downscaling. After obtaining low-resolution outputs from the
network, we utilize the Laplacian pyramid to upscale the output images to the
original resolution. We conduct experiments on NYU-Depth, I-HAZE, and O-HAZE
datasets. Extensive experiments demonstrate that the proposed approach improves
the CycleGAN method both quantitatively and qualitatively.
Comment: Accepted at CVPRW: NTIRE 2018
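The Laplacian-pyramid step can be illustrated with a single pyramid level: keep the high-frequency residual of the full-resolution hazy input, dehaze at low resolution, then re-add the residual to the upscaled output. A sketch using nearest-neighbour 2x resampling in place of the bicubic filtering the paper uses (an assumption made for brevity; images are nested lists of grayscale floats with even dimensions):

```python
def downsample(img):
    # 2x2 block averaging (assumes even width and height).
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def upsample(img):
    # Nearest-neighbour 2x upscaling.
    return [[img[y // 2][x // 2]
             for x in range(2 * len(img[0]))]
            for y in range(2 * len(img))]

def laplacian_residual(img):
    # High-frequency band: the image minus its low-frequency reconstruction.
    up = upsample(downsample(img))
    return [[img[y][x] - up[y][x] for x in range(len(img[0]))]
            for y in range(len(img))]

def merge(low_res_result, residual):
    # Upscale the low-resolution result and re-inject the high frequencies.
    up = upsample(low_res_result)
    return [[up[y][x] + residual[y][x] for x in range(len(residual[0]))]
            for y in range(len(residual))]
```

With an identity "dehazer", merging the downsampled image with its residual reconstructs the input exactly, which is the property the upscaling step relies on.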
Night Time Haze and Glow Removal using Deep Dilated Convolutional Network
In this paper, we address the single image haze removal problem in a
nighttime scene. Nighttime haze removal is a severely ill-posed problem,
especially due to the presence of various visible light sources with varying
colors and non-uniform illumination. These light sources have different
shapes and introduce noticeable glow in night scenes. To address these effects,
we introduce a deep-learning-based DeGlow-DeHaze iterative architecture that
accounts for varying color illumination and glow. First, our convolutional
neural network (CNN) based DeGlow model removes the glow effect significantly,
and on top of it a separate DeHaze network is included to remove the haze
effect. For our recurrent network training, the hazy images and the
corresponding transmission maps are synthesized from the NYU Depth dataset,
from which a high-quality haze-free image is consequently restored. The experimental results
demonstrate that our hybrid CNN model outperforms other state-of-the-art
methods in terms of computation speed and image quality. We also show the
effectiveness of our model on a number of real images and compare our results
with existing nighttime haze heuristic models.
Comment: 13 pages, 10 figures, 2 Tables
Unsupervised Single Image Dehazing Using Dark Channel Prior Loss
Single image dehazing is a critical stage in many modern-day autonomous
vision applications. Early prior-based methods often involved a time-consuming
minimization of a hand-crafted energy function. Recent learning-based
approaches utilize the representational power of deep neural networks (DNNs) to
learn the underlying transformation between hazy and clear images. Due to
inherent limitations in collecting matching clear and hazy images, these
methods resort to training on synthetic data constructed from indoor images
and corresponding depth information. This may result in a possible domain shift
when treating outdoor scenes. We propose a completely unsupervised method of
training via minimization of the well-known Dark Channel Prior (DCP) energy
function. Instead of feeding the network with synthetic data, we solely use
real-world outdoor images and tune the network's parameters by directly
minimizing the DCP. Although our "Deep DCP" technique can be regarded as a fast
approximator of DCP, it actually improves its results significantly. This
suggests an additional regularization obtained via the network and learning
process. Experiments show that our method performs on par with large-scale
supervised methods.
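The unsupervised training idea can be illustrated by the data term alone: the mean dark channel of the network's output, which a genuinely haze-free image drives toward zero. A minimal sketch of that scalar (the patch size is illustrative, and the full DCP energy minimized in the paper includes further terms):

```python
def dark_channel_loss(img, patch_size=3):
    """Mean dark channel of `img` (nested list of (r, g, b) tuples in [0, 1]).

    Lower values indicate less residual haze, so this scalar can serve as
    an unsupervised training signal for a dehazing network's output.
    """
    h, w = len(img), len(img[0])
    min_rgb = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    r = patch_size // 2
    total = 0.0
    for y in range(h):
        for x in range(w):
            total += min(min_rgb[j][i]
                         for j in range(max(0, y - r), min(h, y + r + 1))
                         for i in range(max(0, x - r), min(w, x + r + 1)))
    return total / (h * w)
```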
Haze Visibility Enhancement: A Survey and Quantitative Benchmarking
This paper provides a comprehensive survey of methods dealing with visibility
enhancement of images taken in hazy or foggy scenes. The survey begins with
discussing the optical models of atmospheric scattering media and image
formation. This is followed by a survey of existing methods, which are grouped
into multiple-image methods, polarizing-filter-based methods, methods with
known depth, and single-image methods. We also provide a benchmark of a number
of well-known single-image methods, based on a recent dataset provided by
Fattal and our newly generated scattering media dataset that contains ground truth
images for quantitative evaluation. To our knowledge, this is the first
benchmark using numerical metrics to evaluate dehazing techniques. This
benchmark allows us to objectively compare the results of existing methods and
to better identify the strengths and limitations of each method.
A Cascaded Convolutional Neural Network for Single Image Dehazing
Images captured under outdoor scenes usually suffer from low contrast and
limited visibility due to suspended atmospheric particles, which directly
affects the quality of photos. Although numerous image dehazing methods have
been proposed, effective hazy image restoration remains a challenging problem.
Existing learning-based methods usually predict the medium transmission by
Convolutional Neural Networks (CNNs), but ignore the key global atmospheric
light. Different from previous learning-based methods, we propose a flexible
cascaded CNN for single hazy image restoration, which considers the medium
transmission and global atmospheric light jointly by two task-driven
subnetworks. Specifically, the medium transmission estimation subnetwork is
inspired by the densely connected CNN while the global atmospheric light
estimation subnetwork is a light-weight CNN. Besides, these two subnetworks are
cascaded by sharing the common features. Finally, with the estimated model
parameters, the haze-free image is obtained by the atmospheric scattering model
inversion, which achieves more accurate and effective restoration performance.
Qualitative and quantitative experimental results on synthetic and
real-world hazy images demonstrate that the proposed method effectively removes
haze from such images and outperforms several state-of-the-art dehazing
methods.
Comment: This manuscript is accepted by IEEE Access.