212 research outputs found
Does Haze Removal Help CNN-based Image Classification?
Hazy images are common in real scenarios and many dehazing methods have been
developed to automatically remove the haze from images. Typically, the goal of
image dehazing is to produce clearer images from which human vision can better
identify the objects and structural details present in the images. When the
ground-truth haze-free image is available for a hazy image, quantitative
evaluation of image dehazing is usually based on objective metrics, such as
Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). However, in
many applications, large-scale images are collected not for visual examination
by humans. Instead, they are used for many high-level vision tasks, such as
automatic classification, recognition and categorization. One fundamental
problem here is whether various dehazing methods can produce clearer images
that can help improve the performance of the high-level tasks. In this paper,
we empirically study this problem in the important task of image classification
by using both synthetic and real hazy image datasets. From the experimental
results, we find that existing image-dehazing methods cannot improve the
image-classification performance much and sometimes even reduce it.
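The abstract above evaluates dehazing with PSNR, among other metrics. As a minimal illustrative sketch (assuming 8-bit images; the `psnr` helper is written here for illustration and is not from the paper), PSNR is just a log-scaled mean-squared error:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

ref = np.full((4, 4), 128, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 138  # one pixel off by 10 -> MSE = 100/16 = 6.25
print(round(psnr(ref, noisy), 2))  # → 40.17
```

SSIM, the other full-reference metric mentioned, compares local luminance, contrast, and structure statistics rather than raw pixel error; library implementations (e.g. in scikit-image) are commonly used rather than hand-rolled code.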
Benchmarking Single Image Dehazing and Beyond
We present a comprehensive study and evaluation of existing single image
dehazing algorithms, using a new large-scale benchmark consisting of both
synthetic and real-world hazy images, called REalistic Single Image DEhazing
(RESIDE). RESIDE highlights diverse data sources and image contents, and is
divided into five subsets, each serving different training or evaluation
purposes. We further provide a rich variety of criteria for dehazing algorithm
evaluation, ranging from full-reference metrics, to no-reference metrics, to
subjective evaluation and the novel task-driven evaluation. Experiments on
RESIDE shed light on the comparisons and limitations of state-of-the-art
dehazing algorithms, and suggest promising future directions.
Comment: IEEE Transactions on Image Processing (TIP 2019
Joint Transmission Map Estimation and Dehazing using Deep Networks
Single image haze removal is an extremely challenging problem due to its
inherent ill-posed nature. Several prior-based and learning-based methods have
been proposed in the literature to solve this problem and they have achieved
superior results. However, most of the existing methods assume constant
atmospheric light model and tend to follow a two-step procedure involving
prior-based methods for estimating transmission map followed by calculation of
dehazed image using the closed form solution. In this paper, we relax the
constant atmospheric light assumption and propose a novel unified single image
dehazing network that jointly estimates the transmission map and performs
dehazing. In other words, our new approach provides an end-to-end learning
framework, where the inherent transmission map and dehazed result are learned
directly from the loss function. Extensive experiments on synthetic and real
datasets with challenging hazy images demonstrate that the proposed method
achieves significant improvements over the state-of-the-art methods.
Comment: This paper has been accepted in IEEE-TCSV
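The "closed form solution" this abstract refers to inverts the standard atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)) for the scene radiance J, given a transmission map t and atmospheric light A. A minimal sketch of that inversion (the `recover_scene` helper and the transmission floor `t0` are illustrative choices, not the paper's network):

```python
import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Closed-form inversion of the atmospheric scattering model
    I(x) = J(x) * t(x) + A * (1 - t(x)), solving for scene radiance J."""
    t = np.clip(t, t0, 1.0)          # floor the transmission to avoid noise blow-up
    return (I - A) / t[..., None] + A

# Round trip: synthesize a hazy image from known J, t, A, then invert it.
J = np.random.rand(8, 8, 3)          # latent clear image
t = np.full((8, 8), 0.6)             # constant transmission for the demo
A = np.array([0.9, 0.9, 0.9])        # global atmospheric light
I = J * t[..., None] + A * (1 - t[..., None])
print(np.allclose(recover_scene(I, t, A), J))  # True
```

The paper's point is precisely that this two-step recipe (estimate t, then apply this formula) is brittle; their network learns t and the dehazed output jointly, end to end.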
DR-Net: Transmission Steered Single Image Dehazing Network with Weakly Supervised Refinement
Despite the recent progress in image dehazing, several problems remain
largely unsolved such as robustness for varying scenes, the visual quality of
reconstructed images, and effectiveness and flexibility for applications. To
tackle these problems, we propose a new deep network architecture for single
image dehazing called DR-Net. Our model consists of three main subnetworks: a
transmission prediction network that predicts transmission map for the input
image, a haze removal network that reconstructs latent image steered by the
transmission map, and a refinement network that enhances the details and color
properties of the dehazed result via weakly supervised learning. Compared to
previous methods, our method advances in three aspects: (i) a purely data-driven
model; (ii) an end-to-end system; (iii) superior robustness, accuracy, and
applicability. Extensive experiments demonstrate that our DR-Net outperforms
the state-of-the-art methods on both synthetic and real images in qualitative
and quantitative metrics. Additionally, the utility of DR-Net has been
illustrated by its potential usage in several important computer vision tasks.
Comment: 8 pages, 8 figures, submitted to CVPR 201
Dense Haze: A benchmark for image dehazing with dense-haze and haze-free images
Single image dehazing is an ill-posed problem that has recently drawn
important attention. Despite the significant increase in interest shown for
dehazing over the past few years, the validation of the dehazing methods
remains largely unsatisfactory, due to the lack of pairs of real hazy and
corresponding haze-free reference images. To address this limitation, we
introduce Dense-Haze - a novel dehazing dataset. Characterized by dense and
homogeneous hazy scenes, Dense-Haze contains 33 pairs of real hazy and
corresponding haze-free images of various outdoor scenes. The hazy scenes have
been recorded by introducing real haze, generated by professional haze
machines. The hazy and haze-free corresponding scenes contain the same visual
content captured under the same illumination parameters. The Dense-Haze dataset
aims to significantly advance the state-of-the-art in single-image dehazing by
promoting robust methods for diverse real hazy scenes. We also provide a
comprehensive qualitative and quantitative evaluation of state-of-the-art
single image dehazing techniques based on the Dense-Haze dataset. Not
surprisingly, our study reveals that the existing dehazing techniques perform
poorly for dense homogeneous hazy scenes and that there is still much room for
improvement.
Comment: 5 pages, 2 figure
The Effectiveness of Instance Normalization: a Strong Baseline for Single Image Dehazing
We propose a novel deep neural network architecture for the challenging
problem of single image dehazing, which aims to recover the clear image from a
degraded hazy image. Instead of relying on hand-crafted image priors or
explicitly estimating the components of the widely used atmospheric scattering
model, our end-to-end system directly generates the clear image from an input
hazy image. The proposed network has an encoder-decoder architecture with skip
connections and instance normalization. We adopt the convolutional layers of
the pre-trained VGG network as encoder to exploit the representation power of
deep features, and demonstrate the effectiveness of instance normalization for
image dehazing. Our simple yet effective network outperforms the
state-of-the-art methods by a large margin on the benchmark datasets.
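Instance normalization, which this abstract reports as surprisingly effective for dehazing, standardizes each feature map per sample and per channel using spatial statistics only. A minimal NumPy sketch of the operation (without the learnable affine parameters that framework implementations usually add):

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: zero-mean, unit-variance per sample AND per
    channel, with statistics taken over the spatial dimensions only.
    x has shape (N, C, H, W)."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.rand(2, 3, 4, 4) * 10 + 5
y = instance_norm(x)
# Every (sample, channel) plane is now standardized independently.
print(np.allclose(y.mean(axis=(2, 3)), 0, atol=1e-6))  # True
```

Because the statistics are per-image, instance normalization can factor out global contrast and brightness shifts, which is a plausible reason it helps with haze, a largely global degradation.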
An All-in-One Network for Dehazing and Beyond
This paper proposes an image dehazing model built with a convolutional neural
network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed
based on a re-formulated atmospheric scattering model. Instead of estimating
the transmission matrix and the atmospheric light separately as most previous
models did, AOD-Net directly generates the clean image through a light-weight
CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other
deep models, e.g., Faster R-CNN, for improving high-level task performance on
hazy images. Experimental results on both synthesized and natural hazy image
datasets demonstrate that our method outperforms the state-of-the-art in
terms of PSNR, SSIM, and subjective visual quality. Furthermore, when
concatenating AOD-Net with Faster R-CNN and training the joint pipeline from
end to end, we witness a large improvement of the object detection performance
on hazy images.
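AOD-Net's "re-formulated atmospheric scattering model" folds the transmission t(x) and atmospheric light A into a single variable K(x), so that the clean image is J(x) = K(x)I(x) - K(x) + b. In the network, K(x) is learned by a lightweight CNN; the sketch below only computes K analytically to verify that the reformulation is algebraically equivalent to the classic inversion:

```python
import numpy as np

def K_module(I, t, A, b=1.0):
    """The unified K(x) variable from AOD-Net's re-formulated scattering
    model. In the actual network K(x) is learned; here it is computed
    analytically just to check the algebra."""
    return ((I - A) / t + (A - b)) / (I - 1.0)

I = np.random.rand(8, 8) * 0.5       # hazy intensities (kept below 1 so I - 1 != 0)
t = np.random.rand(8, 8) * 0.5 + 0.4 # transmission in [0.4, 0.9]
A = 0.95                             # scalar atmospheric light
K = K_module(I, t, A)
J_aod = K * I - K + 1.0              # AOD-Net output form: J = K(x)*I - K(x) + b
J_classic = (I - A) / t + A          # classic closed-form inversion
print(np.allclose(J_aod, J_classic))  # True
```

Estimating one quantity instead of two avoids compounding the separate errors of t and A, which is the design motivation the abstract describes.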
Multiple Linear Regression Haze-removal Model Based on Dark Channel Prior
Dark Channel Prior (DCP) is a widely recognized traditional dehazing
algorithm. However, it may fail in bright regions, and the restored image is
darker than the hazy input. In this paper, we propose an
effective method to optimize DCP. We build a multiple linear regression
haze-removal model based on DCP atmospheric scattering model and train this
model with the RESIDE dataset, which aims to reduce the unexpected errors caused
by the rough estimation of the transmission map t(x) and the atmospheric light A.
The RESIDE dataset provides enough synthetic hazy images and their corresponding
ground-truth images for training and testing. We compare the performance of different
dehazing algorithms in terms of two important full-reference metrics, the
peak-signal-to-noise ratio (PSNR) as well as the structural similarity index
measure (SSIM). The experimental results show that our model achieves the highest
SSIM value, and its PSNR value is also higher than that of most state-of-the-art
dehazing algorithms. Our results also overcome the weakness of DCP on real-world
hazy images.
Comment: IEEE CPS (CSCI 2018 Int'l Conference
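The Dark Channel Prior that this abstract builds on rests on a simple statistic: in haze-free outdoor images, most local patches contain some pixel that is dark in at least one color channel. A minimal (deliberately slow) sketch of the dark-channel computation, with the patch size as an illustrative default:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark Channel Prior statistic: for each pixel, the minimum intensity
    over the RGB channels and a local patch. Haze-free outdoor images tend
    to have a dark channel close to zero; haze lifts it toward A."""
    h, w, _ = img.shape
    min_rgb = img.min(axis=2)                      # per-pixel channel minimum
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    out = np.empty((h, w))
    for i in range(h):                             # simple sliding minimum
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# A constant grey image has a dark channel equal to that grey level.
img = np.full((20, 20, 3), 0.3)
print(np.allclose(dark_channel(img), 0.3))  # True
```

In DCP-based dehazing, the transmission is then roughly estimated as t(x) ≈ 1 - ω · dark_channel(I/A); the regression model in the abstract is aimed at correcting the errors of such rough estimates.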
Image Dehazing using Bilinear Composition Loss Function
In this paper, we introduce a bilinear composition loss function to address
the problem of image dehazing. Previous methods in image dehazing use a
two-stage approach which first estimates the transmission map and then
estimates the clear image. The drawback of a two-stage method is that it tends to boost
local image artifacts such as noise, aliasing and blocking. This is especially
the case for heavy haze images captured with a low quality device. Our method
is based on convolutional neural networks. Unique to our method is the bilinear
composition loss function, which directly models the correlations between the
transmission map, clear image, and atmospheric light. This allows errors to be
back-propagated to each sub-network concurrently, while maintaining the
composition constraint to avoid overfitting of each sub-network. We evaluate
the effectiveness of our proposed method using both synthetic and real world
examples. Extensive experiments show that our method outperforms
state-of-the-art methods, especially for hazy images with severe noise and
compression artifacts.
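The composition constraint described above can be sketched as a loss that re-composes a hazy image from the three predicted quantities via the scattering model and penalizes the mismatch with the input. The exact terms of the paper's loss may differ; this only shows the bilinear coupling of the transmission map, clear image, and atmospheric light:

```python
import numpy as np

def bilinear_composition_loss(I, J_pred, t_pred, A_pred):
    """Sketch of a composition-style loss: re-compose the hazy input from the
    predicted clear image, transmission map, and atmospheric light using
    I = J*t + A*(1 - t), and penalize the reconstruction error."""
    I_recon = J_pred * t_pred[..., None] + A_pred * (1.0 - t_pred[..., None])
    return np.mean((I - I_recon) ** 2)

J = np.random.rand(8, 8, 3)
t = np.full((8, 8), 0.7)
A = np.array([0.9, 0.9, 0.9])
I = J * t[..., None] + A * (1 - t[..., None])   # perfectly composed hazy image
print(bilinear_composition_loss(I, J, t, A) < 1e-12)  # True
```

Because the loss couples all three predictions through one physical equation, gradients reach each sub-network simultaneously, which is the concurrent back-propagation the abstract describes.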
Unsupervised Single Image Dehazing Using Dark Channel Prior Loss
Single image dehazing is a critical stage in many modern-day autonomous
vision applications. Early prior-based methods often involved a time-consuming
minimization of a hand-crafted energy function. Recent learning-based
approaches utilize the representational power of deep neural networks (DNNs) to
learn the underlying transformation between hazy and clear images. Due to
inherent limitations in collecting matching clear and hazy images, these
methods resort to training on synthetic data, constructed from indoor images
and corresponding depth information. This may result in a possible domain shift
when treating outdoor scenes. We propose a completely unsupervised method of
training via minimization of the well-known, Dark Channel Prior (DCP) energy
function. Instead of feeding the network with synthetic data, we solely use
real-world outdoor images and tune the network's parameters by directly
minimizing the DCP. Although our "Deep DCP" technique can be regarded as a fast
approximator of DCP, it actually improves its results significantly. This
suggests an additional regularization obtained via the network and learning
process. Experiments show that our method performs on par with large-scale
supervised methods.
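The key idea above is that the DCP energy can serve directly as an unsupervised training signal: haze raises the dark channel, so pushing a network's output toward low dark-channel values pushes it toward haze-free images. A deliberately simplified, patch-free sketch of that energy (the real DCP uses a local patch minimum; this 1x1 variant just keeps the demo short):

```python
import numpy as np

def dcp_energy(img):
    """Simplified, patch-free Dark Channel Prior energy: the mean per-pixel
    channel minimum. Lower energy means 'more haze-free' under the prior."""
    return img.min(axis=2).mean()

J = np.random.rand(16, 16, 3) * 0.6            # clear scene
A, t = 1.0, 0.5
I = J * t + A * (1 - t)                        # add synthetic haze
print(dcp_energy(I) > dcp_energy(J))           # haze raises the DCP energy -> True
```

Minimizing this kind of energy over real outdoor images, as the abstract describes, sidesteps the synthetic-data domain shift entirely since no paired ground truth is ever needed.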