Progressive Feature Fusion Network for Realistic Image Dehazing
Single image dehazing is a challenging ill-posed restoration problem. Various
prior-based and learning-based methods have been proposed. Most of them follow
a classic atmospheric scattering model which is an elegant simplified physical
model based on the assumption of single-scattering and homogeneous atmospheric
medium. The formulation of haze in realistic environment is more complicated.
In this paper, we propose to take its essential mechanism as "black box", and
focus on learning an input-adaptive trainable end-to-end dehazing model. An
U-Net like encoder-decoder deep network via progressive feature fusions has
been proposed to directly learn highly nonlinear transformation function from
observed hazy image to haze-free ground-truth. The proposed network is
evaluated on two public image dehazing benchmarks. The experiments demonstrate
that it can achieve superior performance when compared with popular
state-of-the-art methods. With efficient GPU memory usage, it can
satisfactorily recover ultra-high-definition hazy images up to 4K resolution,
which many deep-learning-based dehazing algorithms cannot afford.
Comment: 14 pages, 7 figures, 1 table, accepted by ACCV 2018
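The classic atmospheric scattering model this abstract refers to is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)), with transmission t(x) = e^(−β·d(x)) for scene depth d. A minimal sketch of haze synthesis under this single-scattering, homogeneous-medium assumption (all parameter values here are illustrative, not from the paper):

```python
import numpy as np

def synthesize_haze(clear, depth, A=0.8, beta=1.0):
    """Apply the single-scattering atmospheric model:
    I = J * t + A * (1 - t), with t = exp(-beta * depth)."""
    t = np.exp(-beta * depth)      # per-pixel transmission in (0, 1]
    t = t[..., None]               # broadcast over the color channels
    return clear * t + A * (1.0 - t)

# Toy example: a 2x2 RGB image with uniform unit depth.
clear = np.full((2, 2, 3), 0.5)
depth = np.ones((2, 2))
hazy = synthesize_haze(clear, depth, A=0.8, beta=1.0)
```

Learning-based dehazers differ mainly in whether they invert this model explicitly or, as in this paper, bypass it entirely.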
An All-in-One Network for Dehazing and Beyond
This paper proposes an image dehazing model built with a convolutional neural
network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed
based on a re-formulated atmospheric scattering model. Instead of estimating
the transmission matrix and the atmospheric light separately as most previous
models did, AOD-Net directly generates the clean image through a light-weight
CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other
deep models, e.g., Faster R-CNN, for improving high-level task performance on
hazy images. Experimental results on both synthesized and natural hazy image
datasets demonstrate the superior performance of our method over the
state-of-the-art in terms of PSNR, SSIM, and subjective visual quality.
Furthermore, when concatenating AOD-Net with Faster R-CNN and training the
joint pipeline end to end, we observe a large improvement in object detection
performance on hazy images.
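The re-formulation behind AOD-Net folds the transmission and atmospheric light into a single variable K(x), so the clear image is recovered as J(x) = K(x)·I(x) − K(x) + b. A minimal scalar sketch of this reconstruction (the value of K is derived here from the classic model J = (I − A)/t + A purely for illustration; in AOD-Net, K is predicted by the CNN):

```python
def aod_recover(hazy, K, b=1.0):
    """AOD-Net style reconstruction: J = K * I - K + b,
    where K jointly encodes transmission t and atmospheric light A."""
    return K * hazy - K + b

# Consistency check against the classic model J = (I - A) / t + A:
# choosing K = ((I - A)/t + (A - b)) / (I - 1) makes both agree.
I, A, t, b = 0.7, 0.9, 0.5, 1.0
K = ((I - A) / t + (A - b)) / (I - 1.0)
J = aod_recover(I, K, b)
```

Because the network outputs K directly, the estimation errors of t and A are no longer compounded in separate stages.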
A Cascaded Convolutional Neural Network for Single Image Dehazing
Images captured under outdoor scenes usually suffer from low contrast and
limited visibility due to suspended atmospheric particles, which directly
affects the quality of photos. Although numerous image dehazing methods have
been proposed, effective hazy image restoration remains a challenging problem.
Existing learning-based methods usually predict the medium transmission by
Convolutional Neural Networks (CNNs), but ignore the key global atmospheric
light. Different from previous learning-based methods, we propose a flexible
cascaded CNN for single hazy image restoration, which considers the medium
transmission and global atmospheric light jointly by two task-driven
subnetworks. Specifically, the medium transmission estimation subnetwork is
inspired by the densely connected CNN while the global atmospheric light
estimation subnetwork is a light-weight CNN. Besides, these two subnetworks are
cascaded by sharing the common features. Finally, with the estimated model
parameters, the haze-free image is obtained by the atmospheric scattering model
inversion, which achieves more accurate and effective restoration performance.
Qualitative and quantitative experimental results on synthetic and
real-world hazy images demonstrate that the proposed method effectively removes
haze from such images and outperforms several state-of-the-art dehazing
methods.
Comment: This manuscript is accepted by IEEE Access
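The final inversion step this abstract describes, recovering J from the estimated transmission and atmospheric light, can be sketched as follows (the clamp value t_min is a common practical convention, not a detail taken from the paper):

```python
import numpy as np

def invert_scattering(hazy, t, A, t_min=0.1):
    """Invert I = J*t + A*(1-t) to get J = (I - A) / t + A.
    Clamping t avoids amplifying noise where the transmission
    estimate is near zero."""
    t = np.maximum(t, t_min)[..., None]   # broadcast over channels
    return (hazy - A) / t + A

# Round trip: re-synthesize a hazy image, then invert it.
t_map = np.full((2, 2), np.exp(-1.0))
hazy_val = 0.5 * np.exp(-1.0) + 0.8 * (1.0 - np.exp(-1.0))
hazy_img = np.full((2, 2, 3), hazy_val)
clear = invert_scattering(hazy_img, t_map, A=0.8)
```

The accuracy of J is therefore bounded by both subnetworks, which motivates estimating t and A jointly rather than in isolation.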
NTIRE 2020 Challenge on NonHomogeneous Dehazing
This paper reviews the NTIRE 2020 Challenge on NonHomogeneous Dehazing of
images (restoration of rich details in hazy images). We focus on the proposed
solutions and their results evaluated on NH-Haze, a novel dataset consisting of
55 pairs of real haze-free and nonhomogeneous hazy images recorded outdoors.
NH-Haze is the first realistic nonhomogeneous haze dataset that provides ground
truth images. The nonhomogeneous haze has been produced using a professional
haze generator that imitates the real conditions of haze scenes. 168
participants registered in the challenge and 27 teams competed in the final
testing phase. The proposed solutions gauge the state-of-the-art in image
dehazing.
Comment: CVPR Workshops Proceedings 2020
Does Haze Removal Help CNN-based Image Classification?
Hazy images are common in real scenarios and many dehazing methods have been
developed to automatically remove the haze from images. Typically, the goal of
image dehazing is to produce clearer images from which human vision can better
identify the object and structural details present in the images. When the
ground-truth haze-free image is available for a hazy image, quantitative
evaluation of image dehazing is usually based on objective metrics, such as
Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). However, in
many applications, large-scale images are collected not for visual examination
by humans. Instead, they are used for many high-level vision tasks, such as
automatic classification, recognition and categorization. One fundamental
problem here is whether various dehazing methods can produce clearer images
that can help improve the performance of the high-level tasks. In this paper,
we empirically study this problem in the important task of image classification
by using both synthetic and real hazy image datasets. From the experimental
results, we find that the existing image-dehazing methods do little to improve
image-classification performance and sometimes even reduce it.
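PSNR, one of the objective metrics mentioned above, measures restoration fidelity as 10·log10(MAX²/MSE) against the haze-free ground truth. A minimal sketch:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak Signal-to-Noise Ratio: 10 * log10(MAX^2 / MSE).
    Higher is better; identical images give infinity."""
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE = 0.01, i.e. 20 dB.
score = psnr(np.zeros((4, 4)), np.full((4, 4), 0.1))
```

The paper's point is precisely that such pixel-fidelity metrics need not correlate with downstream classification accuracy.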
Night Time Haze and Glow Removal using Deep Dilated Convolutional Network
In this paper, we address the single image haze removal problem in a
nighttime scene. The night haze removal is a severely ill-posed problem
especially due to the presence of various visible light sources with varying
colors and non-uniform illumination. These light sources are of different
shapes and introduce noticeable glow in night scenes. To address these effects
we introduce a deep learning based DeGlow-DeHaze iterative architecture which
accounts for varying color illumination and glow. First, our convolutional
neural network (CNN)-based DeGlow model removes the glow effect
significantly; on top of it, a separate DeHaze network is included to remove
the haze effect. To train our recurrent network, hazy images and the
corresponding transmission maps are synthesized from the NYU Depth dataset,
from which high-quality haze-free images are subsequently restored. The
experimental results
demonstrate that our hybrid CNN model outperforms other state-of-the-art
methods in terms of computation speed and image quality. We also show the
effectiveness of our model on a number of real images and compare our results
with the existing night haze heuristic models.
Comment: 13 pages, 10 figures, 2 tables
The Effectiveness of Instance Normalization: a Strong Baseline for Single Image Dehazing
We propose a novel deep neural network architecture for the challenging
problem of single image dehazing, which aims to recover the clear image from a
degraded hazy image. Instead of relying on hand-crafted image priors or
explicitly estimating the components of the widely used atmospheric scattering
model, our end-to-end system directly generates the clear image from an input
hazy image. The proposed network has an encoder-decoder architecture with skip
connections and instance normalization. We adopt the convolutional layers of
the pre-trained VGG network as the encoder to exploit the representational
power of deep features, and demonstrate the effectiveness of instance
normalization for image dehazing. Our simple yet effective network outperforms
the state-of-the-art methods by a large margin on the benchmark datasets.
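Instance normalization, the key ingredient named in the title, normalizes each channel of each sample over its spatial dimensions only, so per-image statistics (such as a global haze veil) are removed from the features. A minimal sketch in NCHW layout:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: each channel of each sample is
    normalized over its spatial dimensions (N, C, H, W layout)."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.default_rng(0).normal(size=(2, 3, 8, 8))
y = instance_norm(x)
```

Unlike batch normalization, the statistics never mix across samples, which is the property the paper argues matters for dehazing.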
Physics-Based Generative Adversarial Models for Image Restoration and Beyond
We present an algorithm to directly solve numerous image restoration problems
(e.g., image deblurring, image dehazing, image deraining, etc.). These problems
are highly ill-posed, and the common assumptions for existing methods are
usually based on heuristic image priors. In this paper, we find that these
problems can be solved by generative models with adversarial learning. However,
the basic formulation of generative adversarial networks (GANs) does not
generate realistic images, and some structures of the estimated images are
usually not preserved well. Motivated by an interesting observation that the
estimated results should be consistent with the observed inputs under the
physics models, we propose a physics model constrained learning algorithm so
that it can guide the estimation of the specific task in the conventional GAN
framework. The proposed algorithm is trained in an end-to-end fashion and can
be applied to a variety of image restoration and related low-level vision
problems. Extensive experiments demonstrate that our method performs favorably
against the state-of-the-art algorithms.
Comment: IEEE TPAMI
Image Dehazing using Bilinear Composition Loss Function
In this paper, we introduce a bilinear composition loss function to address
the problem of image dehazing. Previous methods in image dehazing use a
two-stage approach which first estimates the transmission map and then
estimates the clear image. The drawback of a two-stage method is that it tends
to boost local image artifacts such as noise, aliasing, and blocking. This is
especially the case for heavily hazed images captured with a low-quality
device. Our method
is based on convolutional neural networks. Unique to our method is the bilinear
composition loss function, which directly models the correlations between the
transmission map, the clear image, and the atmospheric light. This allows
errors to be
back-propagated to each sub-network concurrently, while maintaining the
composition constraint to avoid overfitting of each sub-network. We evaluate
the effectiveness of our proposed method using both synthetic and real world
examples. Extensive experiments show that our method outperforms
state-of-the-art methods, especially for hazy images with severe noise and
compression artifacts.
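The composition constraint described above can be sketched as a loss that penalizes disagreement between the observed hazy input and the image recomposed from the three sub-network outputs; the residual is bilinear in (J, t) and in (A, t). This is a minimal mean-squared-error sketch, not the paper's exact loss:

```python
import numpy as np

def bilinear_composition_loss(hazy, clear, t, A):
    """Penalize violations of the scattering-model composition
    I = J*t + A*(1 - t).  Gradients flow to all three estimates
    (clear image J, transmission t, atmospheric light A) at once."""
    t = t[..., None]                        # broadcast over channels
    recomposed = clear * t + A * (1.0 - t)
    return np.mean((hazy - recomposed) ** 2)

# A perfectly consistent triple incurs zero loss.
t = np.full((2, 2), 0.6)
clear = np.full((2, 2, 3), 0.4)
A = 0.9
hazy = clear * t[..., None] + A * (1.0 - t[..., None])
loss = bilinear_composition_loss(hazy, clear, t, A)
```

Back-propagating through this single term is what couples the sub-networks and discourages each from overfitting in isolation.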
Gated Fusion Network for Single Image Dehazing
In this paper, we propose an efficient algorithm to directly restore a clear
image from a hazy input. The proposed algorithm hinges on an end-to-end
trainable neural network that consists of an encoder and a decoder. The encoder
is exploited to capture the context of the derived input images, while the
decoder is employed to estimate the contribution of each input to the final
dehazed result using the learned representations attributed to the encoder. The
constructed network adopts a novel fusion-based strategy which derives three
inputs from an original hazy image by applying White Balance (WB), Contrast
Enhancing (CE), and Gamma Correction (GC). We compute pixel-wise confidence
maps based on the appearance differences between these different inputs to
blend the information of the derived inputs and preserve the regions with
pleasant visibility. The final dehazed image is yielded by gating the important
features of the derived inputs. To train the network, we introduce a
multi-scale approach such that the halo artifacts can be avoided. Extensive
experimental results on both synthetic and real-world images demonstrate that
the proposed algorithm performs favorably against the state-of-the-art
algorithms.
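The fusion strategy described above, deriving enhanced inputs and blending them with learned pixel-wise confidence maps, can be sketched as follows. In the actual network the confidence maps come from the decoder; here they are arbitrary values, and the gamma-correction helper is one of the three derivations (WB, CE, GC) named in the abstract:

```python
import numpy as np

def gamma_correct(img, gamma=2.0):
    """Gamma correction (GC): brightens dark, haze-obscured regions."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

def gated_fusion(derived, confidence):
    """Blend derived inputs with pixel-wise confidence (gating) maps,
    softmax-normalized so the weights sum to one at every pixel."""
    w = np.exp(confidence - confidence.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    return np.sum(w * derived, axis=0)

# If all derived inputs agree, any gating returns the same image.
derived = np.full((3, 4, 4, 3), 0.25)   # stand-ins for WB, CE, GC inputs
conf = np.random.default_rng(1).normal(size=(3, 4, 4, 3))
fused = gated_fusion(derived, conf)
```

The softmax gating is what lets the network pick, per pixel, whichever derived input has the most pleasant visibility.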