A Smoke Removal Method for Laparoscopic Images
In laparoscopic surgery, image quality can be severely degraded by surgical
smoke, which not only introduces errors into image processing (as used in
image-guided surgery), but also reduces visibility for the surgeon. In this
paper, we propose to enhance laparoscopic images by decomposing them into an
unwanted smoke component and an enhanced component using a variational
approach. The proposed method relies on the observation that smoke has low
contrast and low inter-channel differences. A cost function is defined based
on this prior knowledge and is solved using an augmented Lagrangian method.
The obtained smoke component is then subtracted from the original degraded
image, yielding the enhanced image. Quantitative scores in terms of the FADE,
JNBM and RE metrics show that the proposed method performs well, and
qualitative visual inspection confirms that it removes smoke effectively from
laparoscopic images.
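The decomposition above can be sketched with a crude, illustrative prior rather than the authors' variational solver: treat the smoke layer as a smooth, low-contrast component shared across color channels (their low inter-channel-difference observation) and subtract it. All function names and parameters here are our own illustrative choices.

```python
import numpy as np

def box_blur(img, k=15):
    """Simple box filter via integral images (a stand-in smoothness prior)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def remove_smoke(rgb, strength=0.8):
    """Estimate a smoke layer as the blurred per-pixel channel minimum
    (smoke has low inter-channel difference), then subtract it.
    Illustrative only -- the paper minimizes a variational cost instead."""
    smoke = box_blur(rgb.min(axis=2))         # low inter-channel-difference prior
    enhanced = rgb - strength * smoke[..., None]
    return np.clip(enhanced, 0.0, 1.0)
```

A uniform gray image is treated entirely as smoke under this crude prior, which is exactly why the paper constrains the decomposition with a cost function rather than a fixed filter.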
Progressive Feature Fusion Network for Realistic Image Dehazing
Single image dehazing is a challenging ill-posed restoration problem. Various
prior-based and learning-based methods have been proposed. Most of them follow
the classic atmospheric scattering model, an elegant simplified physical model
based on the assumptions of single scattering and a homogeneous atmospheric
medium. The formation of haze in realistic environments is more complicated.
In this paper, we propose to treat its essential mechanism as a "black box"
and focus on learning an input-adaptive, trainable, end-to-end dehazing model.
A U-Net-like encoder-decoder deep network with progressive feature fusion is
proposed to directly learn the highly nonlinear transformation from an
observed hazy image to its haze-free ground truth. The proposed network is
evaluated on two public image dehazing benchmarks. The experiments demonstrate
that it achieves superior performance compared with popular state-of-the-art
methods. With efficient GPU memory usage, it can satisfactorily recover
ultra-high-definition hazy images up to 4K resolution, which is infeasible for
many deep-learning-based dehazing algorithms.
Comment: 14 pages, 7 figures, 1 table, accepted by ACCV201
Does Haze Removal Help CNN-based Image Classification?
Hazy images are common in real scenarios, and many dehazing methods have been
developed to remove haze from images automatically. Typically, the goal of
image dehazing is to produce clearer images from which human vision can better
identify the objects and structural details present. When the ground-truth
haze-free image is available for a hazy image, quantitative evaluation of
image dehazing is usually based on objective metrics such as Peak
Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM). However, in
many applications, large-scale image collections are acquired not for visual
examination by humans but for high-level vision tasks, such as automatic
classification, recognition, and categorization. One fundamental question is
whether various dehazing methods can produce clearer images that help improve
the performance of these high-level tasks. In this paper, we empirically study
this question for the important task of image classification, using both
synthetic and real hazy image datasets. From the experimental results, we find
that existing image-dehazing methods do not improve image-classification
performance much and sometimes even reduce it.
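PSNR, mentioned above as a standard full-reference metric, can be computed in a few lines (SSIM is more involved, so this sketch covers PSNR only):

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB between a ground-truth haze-free
    image and a dehazed result (higher is better)."""
    mse = np.mean((reference.astype(np.float64) - restored) ** 2)
    if mse == 0:
        return float("inf")                   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

The paper's point is precisely that a higher PSNR against the haze-free ground truth does not guarantee a higher classification accuracy downstream.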
Effects of Image Degradations to CNN-based Image Classification
Like many other topics in computer vision, image classification has recently
achieved significant progress through deep neural networks, especially
Convolutional Neural Networks (CNNs). Most existing work focuses on
classifying very clear natural images, as evidenced by the widely used image
databases such as Caltech-256, PASCAL VOC, and ImageNet. However, in many real
applications, the acquired images may contain degradations that lead to
various kinds of blurring, noise, and distortion. One important and
interesting problem is the effect of such degradations on the performance of
CNN-based image classification. More specifically, we ask whether
image-classification performance drops under each kind of degradation, whether
this drop can be avoided by including degraded images in training, and whether
existing computer-vision algorithms that attempt to remove such degradations
help improve classification performance. In this paper, we empirically study
this problem for four kinds of degraded images: hazy images, underwater
images, motion-blurred images, and fish-eye images. For this study, we
synthesize a large number of such degraded images by applying the respective
physical models to clear natural images, and collect a new hazy-image dataset
from the Internet. We expect this work to draw more interest from the
community toward the classification of degraded images.
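One of the degradations above, motion blur, is commonly synthesized by convolving a clear image with a linear blur kernel. A minimal sketch of that physical model (the paper's exact synthesis procedures are not specified here, so this is only an assumed baseline):

```python
import numpy as np

def motion_blur_kernel(length=9):
    """Horizontal linear motion-blur kernel: energy spread evenly
    along a line of the given length."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0 / length
    return k

def convolve2d(img, kernel):
    """Naive 'same'-size 2D filtering with edge padding, per channel.
    (The kernel is symmetric, so correlation equals convolution here.)"""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(img, ((pad_h, pad_h), (pad_w, pad_w), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```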
The Effectiveness of Instance Normalization: a Strong Baseline for Single Image Dehazing
We propose a novel deep neural network architecture for the challenging
problem of single image dehazing, which aims to recover the clear image from a
degraded hazy image. Instead of relying on hand-crafted image priors or
explicitly estimating the components of the widely used atmospheric scattering
model, our end-to-end system directly generates the clear image from an input
hazy image. The proposed network has an encoder-decoder architecture with skip
connections and instance normalization. We adopt the convolutional layers of
the pre-trained VGG network as the encoder to exploit the representational
power of deep features, and demonstrate the effectiveness of instance
normalization for image dehazing. Our simple yet effective network outperforms
state-of-the-art methods by a large margin on the benchmark datasets.
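Instance normalization, the key ingredient above, normalizes each feature channel of each sample independently over its spatial dimensions. A minimal sketch (the learnable affine parameters are exposed here as plain arguments, an assumption for illustration):

```python
import numpy as np

def instance_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Instance normalization for feature maps shaped (N, C, H, W):
    zero-mean, unit-variance per sample and per channel, then an
    affine transform with gamma and beta."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```

Unlike batch normalization, the statistics never mix across samples, which is often credited with removing per-image contrast and illumination biases, a plausible fit for dehazing.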
An All-in-One Network for Dehazing and Beyond
This paper proposes an image dehazing model built with a convolutional neural
network (CNN), called the All-in-One Dehazing Network (AOD-Net). It is
designed based on a re-formulated atmospheric scattering model. Instead of
estimating the transmission matrix and the atmospheric light separately, as
most previous models do, AOD-Net directly generates the clean image through a
lightweight CNN. This novel end-to-end design makes it easy to embed AOD-Net
into other deep models, e.g., Faster R-CNN, to improve high-level task
performance on hazy images. Experimental results on both synthetic and natural
hazy image datasets demonstrate superior performance over the state-of-the-art
in terms of PSNR, SSIM, and subjective visual quality. Furthermore, when
AOD-Net is concatenated with Faster R-CNN and the joint pipeline is trained
end to end, we observe a large improvement in object detection performance on
hazy images.
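The re-formulation mentioned above folds the transmission and the atmospheric light into a single per-pixel variable, so the clean image is recovered as J(x) = K(x) I(x) - K(x) + b for a constant bias b. A sketch of that reconstruction step, with K taken as a given input rather than predicted by the lightweight CNN as in AOD-Net:

```python
import numpy as np

def aod_recover(hazy, K, b=1.0):
    """AOD-Net style reconstruction: J(x) = K(x) * I(x) - K(x) + b,
    where K(x) jointly encodes transmission and atmospheric light.
    In AOD-Net, K is the output of a lightweight CNN; here it is
    simply an argument for illustration."""
    return K * hazy - K + b
```

Because the whole recovery is one differentiable expression in K, gradients from a downstream detector such as Faster R-CNN can flow through it, which is what makes the joint end-to-end training described above straightforward.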
Real-world Underwater Enhancement: Challenges, Benchmarks, and Solutions
Underwater image enhancement is an important low-level vision task with many
applications, and numerous algorithms have been proposed in recent years.
These algorithms, developed upon various assumptions, demonstrate success from
various aspects using different datasets and different metrics. In this work,
we set up an undersea image capturing system and construct a large-scale
Real-world Underwater Image Enhancement (RUIE) dataset divided into three
subsets. The three subsets target three challenging aspects of enhancement:
image visibility quality, color casts, and higher-level
detection/classification, respectively. We conduct extensive and systematic
experiments on RUIE to evaluate the effectiveness and limitations of various
algorithms for enhancing visibility and correcting color casts on images with
hierarchical categories of degradation. Moreover, underwater image enhancement
in practice usually serves as a preprocessing step for mid-level and
high-level vision tasks. We therefore adopt object detection performance on
enhanced images as a new task-specific evaluation criterion. The findings from
these evaluations not only confirm what is commonly believed, but also suggest
promising solutions and new directions for visibility enhancement, color
correction, and object detection on real-world underwater images.
Comment: arXiv admin note: text overlap with arXiv:1712.04143 by other author
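Color cast correction, one of the three aspects targeted above, is often baselined with the gray-world assumption: in an unbiased scene the channel means should coincide. A minimal illustrative sketch (not an algorithm from the RUIE evaluation):

```python
import numpy as np

def gray_world(rgb, eps=1e-8):
    """Gray-world color cast correction: scale each channel so that
    all channel means match the overall mean intensity."""
    means = rgb.mean(axis=(0, 1))             # per-channel mean
    gain = means.mean() / (means + eps)       # per-channel correction gain
    return np.clip(rgb * gain, 0.0, 1.0)
```

The strong blue-green cast of underwater scenes violates the gray-world assumption, which is one reason benchmarks like RUIE are needed to test such assumptions on real data.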
Cycle-Dehaze: Enhanced CycleGAN for Single Image Dehazing
In this paper, we present an end-to-end network, called Cycle-Dehaze, for the
single image dehazing problem, which does not require pairs of hazy and
corresponding ground-truth images for training. That is, we train the network
by feeding clean and hazy images in an unpaired manner. Moreover, the proposed
approach does not rely on estimating the parameters of the atmospheric
scattering model. Our method enhances the CycleGAN formulation by combining
cycle-consistency and perceptual losses in order to improve the quality of
textural information recovery and generate visually better haze-free images.
Typically, deep learning models for dehazing take low-resolution images as
input and produce low-resolution outputs. However, in the NTIRE 2018 challenge
on single image dehazing, high-resolution images were provided, so we first
apply bicubic downscaling. After obtaining low-resolution outputs from the
network, we use the Laplacian pyramid to upscale them to the original
resolution. We conduct experiments on the NYU-Depth, I-HAZE, and O-HAZE
datasets. Extensive experiments demonstrate that the proposed approach
improves on the CycleGAN method both quantitatively and qualitatively.
Comment: Accepted at CVPRW: NTIRE 201
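The cycle-consistency idea above can be stated compactly: for generators G (hazy to clean) and F (clean to hazy), the loss penalizes ||F(G(x)) - x|| and ||G(F(y)) - y||, so unpaired training cannot collapse to arbitrary mappings. A minimal numpy sketch with stand-in generator functions (all names are illustrative, not the paper's code):

```python
import numpy as np

def cycle_consistency_loss(x_hazy, y_clean, G, F):
    """L1 cycle-consistency loss as in CycleGAN:
    hazy -> G -> F should reproduce the hazy input, and
    clean -> F -> G should reproduce the clean input."""
    forward = np.abs(F(G(x_hazy)) - x_hazy).mean()
    backward = np.abs(G(F(y_clean)) - y_clean).mean()
    return forward + backward
```

Cycle-Dehaze adds a perceptual loss on top, comparing deep features of the reconstructed and original images rather than raw pixels, which is what the abstract credits for better texture recovery.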
Benchmarking Single Image Dehazing and Beyond
We present a comprehensive study and evaluation of existing single image
dehazing algorithms, using a new large-scale benchmark consisting of both
synthetic and real-world hazy images, called REalistic Single Image DEhazing
(RESIDE). RESIDE highlights diverse data sources and image contents, and is
divided into five subsets, each serving different training or evaluation
purposes. We further provide a rich variety of criteria for dehazing algorithm
evaluation, ranging from full-reference metrics, to no-reference metrics, to
subjective evaluation and the novel task-driven evaluation. Experiments on
RESIDE shed light on the comparisons and limitations of state-of-the-art
dehazing algorithms, and suggest promising future directions.
Comment: IEEE Transactions on Image Processing (TIP 2019
Fast Single Image Dehazing via Multilevel Wavelet Transform based Optimization
The quality of images captured in outdoor environments can be affected by poor
weather conditions such as fog, dust, and atmospheric scattering from other
particles. This poses extra challenges for high-level computer vision tasks
such as image segmentation and object detection. However, previous studies on
image dehazing suffer from heavy computational workloads and from corrupting
the original image, for example through over-saturation and halos. In this
paper, we present a novel image dehazing approach based on the optical model
for hazy images and regularized optimization. Specifically, we convert the
non-convex, bilinear problem in the unknown haze-free image and the light
transmission distribution into a convex, linear optimization problem by
estimating the atmospheric light constant. Our method is further accelerated
by introducing a multilevel Haar wavelet transform: the optimization is
applied to the low-frequency sub-band of the decomposed image. This dimension
reduction significantly improves the processing speed of our method and shows
potential for real-time applications. Experimental results show that our
approach outperforms state-of-the-art dehazing algorithms in terms of both
image reconstruction quality and computational efficiency. Source code is
publicly available at http://github.com/JiaxiHe/Image-and-Video-Dehazing.
Comment: 23 pages, 13 figure
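One level of the Haar wavelet transform used above splits an image into a half-resolution low-frequency sub-band plus three detail sub-bands; running the optimization only on the low-frequency band is what yields the dimension reduction. A minimal sketch for a single level with orthonormal scaling (even image dimensions assumed; this is our illustration, not the paper's code):

```python
import numpy as np

def haar_level(img):
    """One level of the 2D Haar transform: returns the low-frequency
    sub-band LL plus detail sub-bands (LH, HL, HH), each half-size."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    LL = (a + b + c + d) / 2.0                # low-frequency approximation
    LH = (a - b + c - d) / 2.0                # horizontal detail
    HL = (a + b - c - d) / 2.0                # vertical detail
    HH = (a - b - c + d) / 2.0                # diagonal detail
    return LL, LH, HL, HH
```

Each level quarters the number of pixels in the band being optimized, so a multilevel decomposition quickly shrinks the problem the convex solver has to handle.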