Single Image Dehazing through Improved Atmospheric Light Estimation
Image contrast enhancement for outdoor vision is important for smart-car
auxiliary transport systems. Video frames captured in bad weather often
suffer from poor visibility. Most image dehazing algorithms rely on
hard-threshold assumptions or user input to estimate the atmospheric light.
However, the brightest pixels are sometimes objects such as car lights or
streetlights, especially in smart-car auxiliary transport systems, so simply
applying a hard threshold may yield a wrong estimate.
In this paper, we propose an optimized single-image dehazing method that
estimates the atmospheric light efficiently and removes haze via a
semi-globally adaptive filter. The enhanced images exhibit little noise and
good exposure in dark regions, and the textures and edges of the processed
images are also enhanced significantly.
Comment: Multimedia Tools and Applications (2015)
Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing
In this paper, we present an end-to-end network, called Cycle-Dehaze, for
single image dehazing problem, which does not require pairs of hazy and
corresponding ground truth images for training. That is, we train the network
by feeding clean and hazy images in an unpaired manner. Moreover, the proposed
approach does not rely on estimation of the atmospheric scattering model
parameters. Our method enhances CycleGAN formulation by combining
cycle-consistency and perceptual losses in order to improve the quality of
textural information recovery and generate visually better haze-free images.
Typically, deep learning models for dehazing take low resolution images as
input and produce low resolution outputs. However, in the NTIRE 2018 challenge
on single image dehazing, high-resolution images were provided. Therefore, we
apply bicubic downscaling to the inputs. After obtaining low-resolution outputs from the
network, we utilize the Laplacian pyramid to upscale the output images to the
original resolution. We conduct experiments on NYU-Depth, I-HAZE, and O-HAZE
datasets. Extensive experiments demonstrate that the proposed approach improves
on the CycleGAN method both quantitatively and qualitatively.
Comment: Accepted at CVPRW: NTIRE 2018
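The pyramid-based upscaling step can be sketched roughly as follows. This is a
simplified NumPy-only illustration, assuming grayscale images with
power-of-two sizes and using bilinear resampling in place of the paper's exact
pyramid operators: the low-resolution dehazed output is repeatedly upsampled
while the high-frequency (Laplacian) detail of the high-resolution hazy input
is added back at each level.

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_with_laplacian(low_res_out, high_res_in, levels=2):
    """Illustrative sketch: lift a low-resolution dehazed output back to the
    original resolution by re-injecting the Laplacian (fine-detail) bands of
    the high-resolution hazy input."""
    # Gaussian pyramid of the high-resolution hazy input.
    gauss = [high_res_in]
    for _ in range(levels):
        gauss.append(zoom(gauss[-1], 0.5, order=1))
    out = low_res_out
    for lvl in range(levels - 1, -1, -1):
        up = zoom(out, 2.0, order=1)  # upsample the current estimate
        up = up[:gauss[lvl].shape[0], :gauss[lvl].shape[1]]
        # Laplacian band = this pyramid level minus the upsampled coarser level.
        coarse_up = zoom(gauss[lvl + 1], 2.0, order=1)
        lap = gauss[lvl] - coarse_up[:gauss[lvl].shape[0], :gauss[lvl].shape[1]]
        out = up + lap  # add the fine detail back
    return out
```

The number of pyramid levels and the resampling kernel are assumptions here;
the paper pairs this upscaling with bicubic downscaling of the inputs.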
Image Dehazing using Bilinear Composition Loss Function
In this paper, we introduce a bilinear composition loss function to address
the problem of image dehazing. Previous image dehazing methods use a
two-stage approach that first estimates the transmission map and then
estimates the clear image. The drawback of a two-stage method is that it tends
to boost local image artifacts such as noise, aliasing, and blocking,
especially for heavily hazy images captured with a low-quality device. Our
method is based on convolutional neural networks. Unique to our method is the
bilinear composition loss function, which directly models the correlations
between the transmission map, the clear image, and the atmospheric light.
This allows errors to be
back-propagated to each sub-network concurrently, while maintaining the
composition constraint to avoid overfitting of each sub-network. We evaluate
the effectiveness of our proposed method using both synthetic and real world
examples. Extensive experiments show that our method outperforms
state-of-the-art methods, especially for hazy images with severe noise and
compression artifacts.
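The composition constraint behind such a loss can be illustrated with a
minimal NumPy sketch. The predicted quantities J (clear image), t
(transmission map), and A (atmospheric light) are assumed given; in the paper
they come from the sub-networks, and the loss couples their errors through the
atmospheric scattering model I = J*t + A*(1 - t).

```python
import numpy as np

def bilinear_composition_loss(I, J, t, A):
    """Illustrative composition loss: penalize the mismatch between the
    observed hazy image I and the image recomposed from the predicted clear
    image J, transmission t, and atmospheric light A via the atmospheric
    scattering model I = J*t + A*(1 - t)."""
    recomposed = J * t + A * (1.0 - t)
    return np.mean((recomposed - I) ** 2)
```

Because the loss is bilinear in (J, t) and in (A, t), gradients flow to each
predicted factor simultaneously, which is the concurrent back-propagation the
abstract describes.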
Gated Fusion Network for Single Image Dehazing
In this paper, we propose an efficient algorithm to directly restore a clear
image from a hazy input. The proposed algorithm hinges on an end-to-end
trainable neural network that consists of an encoder and a decoder. The encoder
is exploited to capture the context of the derived input images, while the
decoder is employed to estimate the contribution of each input to the final
dehazed result using the learned representations attributed to the encoder. The
constructed network adopts a novel fusion-based strategy which derives three
inputs from an original hazy image by applying White Balance (WB), Contrast
Enhancing (CE), and Gamma Correction (GC). We compute pixel-wise confidence
maps based on the appearance differences between these different inputs to
blend the information of the derived inputs and preserve the regions with
pleasant visibility. The final dehazed image is yielded by gating the important
features of the derived inputs. To train the network, we introduce a
multi-scale approach such that the halo artifacts can be avoided. Extensive
experimental results on both synthetic and real-world images demonstrate that
the proposed algorithm performs favorably against the state-of-the-art
algorithms.
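A rough sketch of the fusion idea follows, with simplified stand-ins for the
paper's WB/CE/GC operators (gray-world white balance, a global contrast
stretch, and a fixed gamma) and with the pixel-wise confidence maps given
rather than learned by the encoder-decoder:

```python
import numpy as np

def derive_inputs(hazy):
    """Illustrative versions of the three derived inputs: white balance,
    contrast enhancement, and gamma correction. Pixel values in [0, 1]."""
    # Gray-world white balance (a simple stand-in for the paper's WB).
    wb = hazy * (hazy.mean() / (hazy.mean(axis=(0, 1)) + 1e-6))
    # Global contrast enhancement: stretch intensities around the mean.
    ce = np.clip(hazy.mean() + 2.0 * (hazy - hazy.mean()), 0.0, 1.0)
    # Gamma correction to brighten dark regions.
    gc = hazy ** 0.7
    return wb, ce, gc

def gated_fusion(inputs, confidences):
    """Blend the derived inputs with normalized pixel-wise confidence maps.
    In the paper the confidences are learned; here they are supplied."""
    conf = np.stack(confidences)
    conf = conf / (conf.sum(axis=0, keepdims=True) + 1e-6)
    return sum(c * x for c, x in zip(conf, np.stack(inputs)))
```

The normalization makes the confidence maps act as gates that sum to one at
every pixel, so each region of the output is dominated by whichever derived
input has the most pleasant visibility there.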
DR-Net: Transmission Steered Single Image Dehazing Network with Weakly Supervised Refinement
Despite the recent progress in image dehazing, several problems remain
largely unsolved, such as robustness to varying scenes, the visual quality of
reconstructed images, and effectiveness and flexibility for applications. To
tackle these problems, we propose a new deep network architecture for single
image dehazing called DR-Net. Our model consists of three main subnetworks: a
transmission prediction network that predicts transmission map for the input
image, a haze removal network that reconstructs latent image steered by the
transmission map, and a refinement network that enhances the details and color
properties of the dehazed result via weakly supervised learning. Compared to
previous methods, our method advances in three aspects: (i) a purely
data-driven model; (ii) an end-to-end system; (iii) superior robustness,
accuracy, and applicability. Extensive experiments demonstrate that our DR-Net outperforms
the state-of-the-art methods on both synthetic and real images in qualitative
and quantitative metrics. Additionally, the utility of DR-Net has been
illustrated by its potential usage in several important computer vision tasks.
Comment: 8 pages, 8 figures, submitted to CVPR 201
Haze Visibility Enhancement: A Survey and Quantitative Benchmarking
This paper provides a comprehensive survey of methods dealing with visibility
enhancement of images taken in hazy or foggy scenes. The survey begins with
discussing the optical models of atmospheric scattering media and image
formation. This is followed by a survey of existing methods, which are grouped
into multiple-image methods, polarizing-filter-based methods, methods with
known depth, and single-image methods. We also provide a benchmark of a number of
well known single-image methods, based on a recent dataset provided by Fattal
and our newly generated scattering media dataset that contains ground truth
images for quantitative evaluation. To our knowledge, this is the first
benchmark using numerical metrics to evaluate dehazing techniques. This
benchmark allows us to objectively compare the results of existing methods and
to better identify the strengths and limitations of each method.
An All-in-One Network for Dehazing and Beyond
This paper proposes an image dehazing model built with a convolutional neural
network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed
based on a re-formulated atmospheric scattering model. Instead of estimating
the transmission matrix and the atmospheric light separately as most previous
models did, AOD-Net directly generates the clean image through a light-weight
CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other
deep models, e.g., Faster R-CNN, for improving high-level task performance on
hazy images. Experimental results on both synthesized and natural hazy image
datasets demonstrate that AOD-Net outperforms the state of the art in terms of
PSNR, SSIM, and subjective visual quality. Furthermore, when
concatenating AOD-Net with Faster R-CNN and training the joint pipeline from
end to end, we observe a large improvement in object detection performance
on hazy images.
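The re-formulation can be written down explicitly: substituting a single
unified variable K(x) into the scattering model gives J(x) = K(x)I(x) - K(x) + b,
so one network output replaces the separate transmission and atmospheric-light
estimates. A small NumPy sketch, with K computed from known t and A purely for
illustration (AOD-Net estimates K(x) directly with its CNN):

```python
import numpy as np

def k_module(I, t, A, b=1.0):
    """The unified variable K(x) of the re-formulated scattering model,
    computed here from known t and A for illustration only; in AOD-Net,
    K(x) itself is the quantity the light-weight CNN estimates."""
    return ((I - A) / t + (A - b)) / (I - 1.0)

def dehaze(I, K, b=1.0):
    # J(x) = K(x) * I(x) - K(x) + b
    return K * I - K + b
```

One can check algebraically that K*(I - 1) + b reduces to (I - A)/t + A, the
usual scattering-model inversion, so the two formulations agree; the constant
bias b is a model parameter (the paper's default is assumed here to be 1).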
A Cascaded Convolutional Neural Network for Single Image Dehazing
Images captured under outdoor scenes usually suffer from low contrast and
limited visibility due to suspended atmospheric particles, which directly
affects the quality of photos. Although numerous image dehazing methods have
been proposed, effective hazy image restoration remains a challenging problem.
Existing learning-based methods usually predict the medium transmission by
Convolutional Neural Networks (CNNs), but ignore the key global atmospheric
light. Different from previous learning-based methods, we propose a flexible
cascaded CNN for single hazy image restoration, which considers the medium
transmission and global atmospheric light jointly by two task-driven
subnetworks. Specifically, the medium transmission estimation subnetwork is
inspired by the densely connected CNN while the global atmospheric light
estimation subnetwork is a light-weight CNN. Besides, these two subnetworks are
cascaded by sharing the common features. Finally, with the estimated model
parameters, the haze-free image is obtained by the atmospheric scattering model
inversion, which achieves more accurate and effective restoration performance.
Qualitative and quantitative experimental results on synthetic and real-world
hazy images demonstrate that the proposed method effectively removes haze from
such images and outperforms several state-of-the-art dehazing methods.
Comment: Accepted by IEEE Access
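The final restoration step can be sketched in a few lines of NumPy, assuming
the transmission map t and global atmospheric light A have already been
estimated (by the two cascaded subnetworks in the paper). The lower bound on
the transmission is a common heuristic assumed here, not taken from the paper:

```python
import numpy as np

def invert_scattering_model(I, t, A, t_min=0.1):
    """Sketch of the restoration step: given a hazy image I, an estimated
    transmission map t, and estimated atmospheric light A, invert the
    atmospheric scattering model I = J*t + A*(1 - t) to recover the
    clear image J."""
    t = np.maximum(t, t_min)  # clamp t to avoid amplifying noise where haze is dense
    return (I - A) / t + A
```

Clamping t matters in practice because dividing by a near-zero transmission in
heavily hazed regions would blow up sensor noise.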
Benchmarking Single Image Dehazing and Beyond
We present a comprehensive study and evaluation of existing single image
dehazing algorithms, using a new large-scale benchmark consisting of both
synthetic and real-world hazy images, called REalistic Single Image DEhazing
(RESIDE). RESIDE highlights diverse data sources and image contents, and is
divided into five subsets, each serving different training or evaluation
purposes. We further provide a rich variety of criteria for dehazing algorithm
evaluation, ranging from full-reference metrics, to no-reference metrics, to
subjective evaluation and the novel task-driven evaluation. Experiments on
RESIDE shed light on the comparisons and limitations of state-of-the-art
dehazing algorithms, and suggest promising future directions.
Comment: IEEE Transactions on Image Processing (TIP), 2019
Joint Defogging and Demosaicking
Image defogging is a technique used extensively for enhancing the visual
quality of images captured in bad weather conditions. Even though defogging
algorithms have been well studied, defogging performance is degraded by
demosaicking artifacts and sensor noise amplification in distant scenes. In
order to improve the visual quality of restored images, we propose a novel
approach that performs defogging and demosaicking simultaneously. We show that
better defogging performance with fewer artifacts can be achieved when the two
operations are performed jointly. We also demonstrate that the proposed joint
algorithm suppresses noise amplification in distant scenes. In addition, we
validate our theoretical analysis and observations on both synthesized
datasets with ground-truth fog-free images and natural-scene datasets captured
in raw format.