DR-Net: Transmission Steered Single Image Dehazing Network with Weakly Supervised Refinement
Despite the recent progress in image dehazing, several problems remain
largely unsolved, such as robustness to varying scenes, the visual quality of
reconstructed images, and effectiveness and flexibility for applications. To
tackle these problems, we propose a new deep network architecture for single
image dehazing called DR-Net. Our model consists of three main subnetworks: a
transmission prediction network that predicts a transmission map for the input
image, a haze removal network that reconstructs the latent image steered by the
transmission map, and a refinement network that enhances the details and color
properties of the dehazed result via weakly supervised learning. Compared to
previous methods, our method advances in three aspects: (i) a purely data-driven
model; (ii) an end-to-end system; (iii) superior robustness, accuracy, and
applicability. Extensive experiments demonstrate that our DR-Net outperforms
the state-of-the-art methods on both synthetic and real images in qualitative
and quantitative metrics. Additionally, the utility of DR-Net has been
illustrated by its potential usage in several important computer vision tasks.
Comment: 8 pages, 8 figures, submitted to CVPR 201
An All-in-One Network for Dehazing and Beyond
This paper proposes an image dehazing model built with a convolutional neural
network (CNN), called All-in-One Dehazing Network (AOD-Net). It is designed
based on a re-formulated atmospheric scattering model. Instead of estimating
the transmission matrix and the atmospheric light separately as most previous
models did, AOD-Net directly generates the clean image through a light-weight
CNN. Such a novel end-to-end design makes it easy to embed AOD-Net into other
deep models, e.g., Faster R-CNN, for improving high-level task performance on
hazy images. Experimental results on both synthesized and natural hazy image
datasets demonstrate the superior performance of our method over the
state-of-the-art in terms of PSNR, SSIM, and subjective visual quality. Furthermore, when
concatenating AOD-Net with Faster R-CNN and training the joint pipeline from
end to end, we witness a large improvement in object detection performance
on hazy images.
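The re-formulated model behind AOD-Net can be sketched as follows: the classic atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)) is rewritten as J(x) = K(x)I(x) − K(x) + b, where K(x) absorbs both the transmission t(x) and the atmospheric light A, and b is a constant bias. A minimal numpy check of this algebraic identity, with toy values (illustrative only, not the authors' code):

```python
import numpy as np

# Toy clean image J, transmission t, atmospheric light A (illustrative values)
rng = np.random.default_rng(0)
J = rng.uniform(0.1, 0.9, size=(4, 4))   # clean image
t = rng.uniform(0.3, 0.9, size=(4, 4))   # transmission map
A, b = 0.8, 1.0                          # atmospheric light, constant bias

# Synthesize a hazy image with the classic scattering model
I = J * t + A * (1 - t)

# Fold t and A into a single variable: K(x) = [(I(x)-A)/t(x) + (A-b)] / (I(x)-1)
K = ((I - A) / t + (A - b)) / (I - 1)

# The unified form J(x) = K(x) I(x) - K(x) + b recovers the clean image exactly
J_rec = K * I - K + b
print(np.allclose(J_rec, J))  # True
```

This is why a network only needs to estimate the single map K(x): the inversion then reduces to one multiply-add, which is what makes the design light enough to bolt onto a detector such as Faster R-CNN.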
A Cascaded Convolutional Neural Network for Single Image Dehazing
Images captured under outdoor scenes usually suffer from low contrast and
limited visibility due to suspended atmospheric particles, which directly
affects the quality of photos. Although numerous image dehazing methods have
been proposed, effective hazy image restoration remains a challenging problem.
Existing learning-based methods usually predict the medium transmission by
Convolutional Neural Networks (CNNs), but ignore the key global atmospheric
light. Different from previous learning-based methods, we propose a flexible
cascaded CNN for single hazy image restoration, which considers the medium
transmission and global atmospheric light jointly by two task-driven
subnetworks. Specifically, the medium transmission estimation subnetwork is
inspired by the densely connected CNN while the global atmospheric light
estimation subnetwork is a light-weight CNN. Moreover, these two subnetworks are
cascaded by sharing the common features. Finally, with the estimated model
parameters, the haze-free image is obtained by the atmospheric scattering model
inversion, which achieves more accurate and effective restoration performance.
Qualitative and quantitative experimental results on the synthetic and
real-world hazy images demonstrate that the proposed method effectively removes
haze from such images, and outperforms several state-of-the-art dehazing
methods.
Comment: This manuscript is accepted by IEEE ACCES
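The final restoration step described above, inverting the atmospheric scattering model once t(x) and A are estimated, can be sketched in a few lines of numpy. The function name and the transmission floor t_min are illustrative conventions, not taken from the paper (clamping t is a common heuristic to avoid amplifying noise where transmission is near zero):

```python
import numpy as np

def invert_scattering_model(I, t, A, t_min=0.1):
    """Recover the haze-free image J(x) = (I(x) - A) / max(t(x), t_min) + A.

    I: hazy image, H x W x 3 in [0, 1]; t: H x W transmission map;
    A: length-3 atmospheric light vector.
    """
    t = np.clip(t, t_min, 1.0)[..., None]      # broadcast t over color channels
    return np.clip((I - A) / t + A, 0.0, 1.0)  # keep result in valid range

# Toy example: haze a constant image, then recover it
J = np.full((2, 2, 3), 0.4)                    # clean image
t = np.full((2, 2), 0.6)                       # transmission
A = np.array([0.9, 0.9, 0.9])                  # atmospheric light
I = J * t[..., None] + A * (1 - t[..., None])  # synthesized hazy image
print(np.allclose(invert_scattering_model(I, t, A), J))  # True
```

The accuracy of this closed-form inversion is exactly why the two subnetworks above target t(x) and A jointly: an error in either quantity propagates directly into the recovered image.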
Densely Connected Pyramid Dehazing Network
We propose a new end-to-end single image dehazing method, called Densely
Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the
transmission map, atmospheric light and dehazing all together. The end-to-end
learning is achieved by directly embedding the atmospheric scattering model
into the network, thereby ensuring that the proposed method strictly follows
the physics-driven scattering model for dehazing. Inspired by the dense network
that can maximize the information flow along features from different levels, we
propose a new edge-preserving densely connected encoder-decoder structure with a
multi-level pyramid pooling module for estimating the transmission map. This
network is optimized using a newly introduced edge-preserving loss function. To
further incorporate the mutual structural information between the estimated
transmission map and the dehazed result, we propose a joint discriminator based
on a generative adversarial network framework to decide whether the corresponding
dehazed image and the estimated transmission map are real or fake. An ablation
study is conducted to demonstrate the effectiveness of each module, evaluated on
both the estimated transmission map and the dehazed result. Extensive experiments
demonstrate that the proposed method achieves significant improvements over the
state-of-the-art methods. Code will be made available at:
https://github.com/hezhangsprinte
A Smoke Removal Method for Laparoscopic Images
In laparoscopic surgery, image quality can be severely degraded by surgical
smoke, which not only introduces error for the image processing (used in image
guided surgery), but also reduces the visibility of the surgeons. In this
paper, we propose to enhance the laparoscopic images by decomposing them into
unwanted smoke part and enhanced part using a variational approach. The
proposed method relies on the observation that smoke has low contrast and low
inter-channel differences. A cost function is defined based on this prior
knowledge and is solved using an augmented Lagrangian method. The obtained
unwanted smoke component is then subtracted from the original degraded image,
resulting in the enhanced image. The obtained quantitative scores in terms of
FADE, JNBM, and RE metrics show that our proposed method performs rather well.
Furthermore, qualitative visual inspection of the results shows that it
removes smoke effectively from the laparoscopic images.
Joint Transmission Map Estimation and Dehazing using Deep Networks
Single image haze removal is an extremely challenging problem due to its
inherent ill-posed nature. Several prior-based and learning-based methods have
been proposed in the literature to solve this problem and they have achieved
superior results. However, most of the existing methods assume a constant
atmospheric light model and tend to follow a two-step procedure: prior-based
estimation of the transmission map, followed by calculation of the dehazed
image using the closed-form solution. In this paper, we relax the
constant atmospheric light assumption and propose a novel unified single image
dehazing network that jointly estimates the transmission map and performs
dehazing. In other words, our new approach provides an end-to-end learning
framework, where the inherent transmission map and dehazed result are learned
directly from the loss function. Extensive experiments on synthetic and real
datasets with challenging hazy images demonstrate that the proposed method
achieves significant improvements over the state-of-the-art methods.
Comment: This paper has been accepted in IEEE-TCSV
Unsupervised Single Image Dehazing Using Dark Channel Prior Loss
Single image dehazing is a critical stage in many modern-day autonomous
vision applications. Early prior-based methods often involved a time-consuming
minimization of a hand-crafted energy function. Recent learning-based
approaches utilize the representational power of deep neural networks (DNNs) to
learn the underlying transformation between hazy and clear images. Due to
inherent limitations in collecting matching clear and hazy images, these
methods resort to training on synthetic data, constructed from indoor images
and corresponding depth information. This may result in a possible domain shift
when treating outdoor scenes. We propose a completely unsupervised method of
training via minimization of the well-known Dark Channel Prior (DCP) energy
function. Instead of feeding the network with synthetic data, we solely use
real-world outdoor images and tune the network's parameters by directly
minimizing the DCP. Although our "Deep DCP" technique can be regarded as a fast
approximator of DCP, it actually improves its results significantly. This
suggests an additional regularization obtained via the network and learning
process. Experiments show that our method performs on par with large-scale
supervised methods.
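The quantity the DCP energy is built around can be sketched directly: the dark channel of an image is the per-pixel minimum over color channels, eroded over a local patch, and He et al.'s prior says it is near zero for haze-free outdoor images while haze lifts it. An illustrative re-implementation (not the authors' code; the patch size and the naive loop are for clarity, not speed):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: min over RGB channels, then min over a patch x patch window.
    This is the quantity a DCP-based loss drives toward zero during training."""
    mins = img.min(axis=2)                  # per-pixel minimum across channels
    h, w = mins.shape
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')   # replicate borders for the erosion
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# Haze lifts all intensities, so the dark channel of a "hazy" image is larger
hazy  = np.random.default_rng(1).uniform(0.5, 1.0, (32, 32, 3))
clear = np.random.default_rng(2).uniform(0.0, 1.0, (32, 32, 3))
print(dark_channel(hazy).mean() > dark_channel(clear).mean())  # True
```

In the unsupervised setting described above, a differentiable version of this statistic (min replaced by a soft minimum or a min-pooling layer) serves as the training loss on real outdoor images, with no clear/hazy pairs required.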
Fast Single Image Dehazing via Multilevel Wavelet Transform based Optimization
The quality of images captured in outdoor environments can be affected by
poor weather conditions such as fog, dust, and atmospheric scattering of other
particles. This problem can bring extra challenges to high-level computer
vision tasks like image segmentation and object detection. However, previous
studies on image dehazing suffer from a huge computational workload and from
artifacts in the restored image, such as over-saturation and halos. In this
paper, we present a novel image dehazing approach based on the optical model
for haze images and regularized optimization. Specifically, we convert the
non-convex, bilinear problem concerning the unknown haze-free image and light
transmission distribution to a convex, linear optimization problem by
estimating the atmosphere light constant. Our method is further accelerated by
introducing a multilevel Haar wavelet transform: the optimization is applied
only to the low-frequency sub-band decomposition of the original image. This
dimension reduction significantly improves the processing speed of our method
and exhibits the potential for real-time applications. Experimental results
show that our approach outperforms state-of-the-art dehazing algorithms in
terms of both image reconstruction quality and computational efficiency. For
implementation details, source code can be publicly accessed via
http://github.com/JiaxiHe/Image-and-Video-Dehazing.
Comment: 23 pages, 13 figure
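The dimension-reduction idea above can be illustrated with one level of the 2-D Haar transform: keeping only the LL (low-low) sub-band averages each 2x2 block, so every level quarters the number of pixels the optimization must touch. A minimal sketch (the paper uses a multilevel transform; the averaging here gives the Haar approximation coefficients up to a constant scale):

```python
import numpy as np

def haar_lowpass(img):
    """One level of a 2-D Haar decomposition, keeping only the LL sub-band:
    each 2x2 block is averaged, halving both spatial dimensions."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[0::2, 1::2] +
                   img[1::2, 0::2] + img[1::2, 1::2])

x = np.arange(16.0).reshape(4, 4)
print(haar_lowpass(x))  # [[ 2.5  4.5]
                        #  [10.5 12.5]]
```

Applying the dehazing optimization to this sub-band and then reconstructing is what yields the reported speed-up: the expensive solver runs on a fraction of the original pixels while the high-frequency detail is reattached afterwards.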
"Double-DIP": Unsupervised Image Decomposition via Coupled Deep-Image-Priors
Many seemingly unrelated computer vision tasks can be viewed as a special
case of image decomposition into separate layers. For example, image
segmentation (separation into foreground and background layers); transparent
layer separation (into reflection and transmission layers); Image dehazing
(separation into a clear image and a haze map), and more. In this paper we
propose a unified framework for unsupervised layer decomposition of a single
image, based on coupled "Deep Image Prior" (DIP) networks. It was shown in
[Ulyanov et al.] that the structure of a single DIP generator network is
sufficient to capture the low-level statistics of a single image. We show that
coupling multiple such DIPs provides a powerful tool for decomposing images
into their basic components, for a wide variety of applications. This
capability stems from the fact that the internal statistics of a mixture of
layers are more complex than the statistics of each of its individual
components. We show the power of this approach for Image-Dehazing, Fg/Bg
Segmentation, Watermark-Removal, Transparency Separation in images and video,
and more. These capabilities are achieved in a totally unsupervised way, with
no training examples other than the input image/video itself.
Comment: Project page: http://www.wisdom.weizmann.ac.il/~vision/DoubleDIP
Progressive Feature Fusion Network for Realistic Image Dehazing
Single image dehazing is a challenging ill-posed restoration problem. Various
prior-based and learning-based methods have been proposed. Most of them follow
a classic atmospheric scattering model which is an elegant simplified physical
model based on the assumption of single-scattering and homogeneous atmospheric
medium. The formation of haze in realistic environments is more complicated.
In this paper, we propose to treat its essential mechanism as a "black box" and
focus on learning an input-adaptive trainable end-to-end dehazing model. An
A U-Net-like encoder-decoder deep network with progressive feature fusion is
proposed to directly learn the highly nonlinear transformation from the
observed hazy image to the haze-free ground truth. The proposed network is
evaluated on two public image dehazing benchmarks. The experiments demonstrate
that it can achieve superior performance when compared with popular
state-of-the-art methods. With efficient GPU memory usage, it can
satisfactorily recover ultra-high-definition hazy images up to 4K resolution,
which many deep-learning-based dehazing algorithms cannot afford.
Comment: 14 pages, 7 figures, 1 table, accepted by ACCV201