Rain Streak Removal for Single Image via Kernel Guided CNN
Rain streak removal is an important problem and has recently been investigated extensively. Existing methods, especially the newly emerged deep learning methods, can remove rain streaks well in many cases. However, an essential factor in the generative process of rain streaks, i.e., the motion blur that produces their line-pattern appearance, has been neglected by deep learning deraining approaches, which results in over-derained or under-derained images. In this paper, we propose a novel rain streak removal approach using a kernel-guided convolutional neural network (KGCNN), achieving state-of-the-art performance with a simple network architecture. We first model the rain streak interference with its motion blur mechanism. Our framework then learns the motion blur kernel, which is determined by two factors, angle and length, using a plain neural network, denoted the parameter net, applied to a patch of the texture component. After a dimensionality stretching operation, the learned motion blur kernel is stretched into a degradation map with the same spatial size as the rainy patch. The stretched degradation map, together with the texture patch, is then fed into a derain convolutional network, a typical ResNet architecture trained to output the rain streaks under the guidance of the learned motion blur kernel. Experiments conducted on extensive synthetic and real data demonstrate the effectiveness of the proposed method, which preserves texture and contrast while removing the rain streaks.
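The two-stage pipeline above lends itself to a compact sketch. The following PyTorch code is a minimal illustration under stated assumptions: the module names (ParameterNet, DerainNet), channel widths, and block count are hypothetical, and the degradation map is produced by simply broadcasting the two kernel parameters, which stands in for the paper's dimensionality stretching.

```python
import torch
import torch.nn as nn

class ParameterNet(nn.Module):
    """Plain CNN regressing the motion blur kernel parameters
    (angle, length) from a texture patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 2)  # -> (angle, length)

    def forward(self, patch):
        return self.head(self.features(patch).flatten(1))

def stretch_kernel(params, size):
    """Broadcast the 2-vector of kernel parameters into a degradation
    map with the same spatial size as the rainy patch."""
    b = params.shape[0]
    h, w = size
    return params.view(b, 2, 1, 1).expand(b, 2, h, w)

class DerainNet(nn.Module):
    """ResNet-style net mapping (texture patch, degradation map)
    to the rain streak layer."""
    def __init__(self, n_blocks=4):
        super().__init__()
        self.entry = nn.Conv2d(1 + 2, 64, 3, padding=1)
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(64, 64, 3, padding=1))
            for _ in range(n_blocks)])
        self.exit = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, patch, degradation):
        x = self.entry(torch.cat([patch, degradation], dim=1))
        for block in self.blocks:
            x = x + block(x)          # residual connections
        return self.exit(x)           # predicted rain streaks

patch = torch.randn(8, 1, 64, 64)     # texture component of rainy patches
params = ParameterNet()(patch)        # learned (angle, length)
rain = DerainNet()(patch, stretch_kernel(params, (64, 64)))
derained = patch - rain               # background = input - rain streaks
```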
Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining
Rain streaks can severely degrade visibility, causing many current computer vision algorithms to fail, so it is necessary to remove rain from images. We propose a novel deep network architecture based on deep convolutional and recurrent neural networks for single image deraining. As contextual information is very important for rain removal, we first adopt a dilated convolutional neural network to acquire a large receptive field, and we further modify the network to better fit the rain removal task. In heavy rain, rain streaks have various directions and shapes, and can be regarded as the accumulation of multiple rain streak layers. By incorporating the squeeze-and-excitation block, we assign different alpha-values to the various rain streak layers according to their intensity and transparency. Since rain streak layers overlap with each other, it is not easy to remove the rain in one stage, so we further decompose rain removal into multiple stages; a recurrent neural network is incorporated to preserve the useful information from previous stages and benefit the rain removal in later stages. We conduct extensive experiments on both synthetic and real-world datasets. Our proposed method outperforms the state-of-the-art approaches under all evaluation metrics. Code and supplementary material are available at our project webpage: https://xialipku.github.io/RESCAN . Comment: Accepted by ECCV.
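A minimal sketch of the ingredients named in this abstract, assuming illustrative channel counts, dilation rates, and a three-stage recurrence rather than RESCAN's published configuration: dilated convolutions enlarge the receptive field, a squeeze-and-excitation block learns the per-layer alpha-values, and a stage-to-stage state carries information forward while rain is peeled off progressively.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: learn per-channel alpha-values that
    re-weight the feature maps (interpreted as rain streak layers)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        alpha = self.fc(x.mean(dim=(2, 3)))    # squeeze
        return x * alpha[:, :, None, None]     # excite

class Stage(nn.Module):
    """One deraining stage: dilated convs for context plus SE, with a
    state tensor carrying information between stages."""
    def __init__(self, ch=24):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3 + ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(),
            SEBlock(ch))
        self.to_rain = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, rainy, state):
        state = self.body(torch.cat([rainy, state], dim=1))
        return self.to_rain(state), state

stages = nn.ModuleList([Stage() for _ in range(3)])  # 3 recurrent stages
x = torch.randn(2, 3, 64, 64)                        # rainy input
state = torch.zeros(2, 24, 64, 64)
for stage in stages:
    rain, state = stage(x, state)
    x = x - rain                                     # peel off one rain layer
```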
Rain Removal By Image Quasi-Sparsity Priors
Rain streaks are inevitably captured by outdoor vision systems, lowering image visual quality and interfering with various computer vision applications. We present a novel rain removal method in this paper, which consists of two steps, i.e., detection of rain streaks and reconstruction of the rain-removed image. Accurate detection of rain streaks determines the quality of the overall performance. To this end, we first detect rain streaks according to pixel intensities, motivated by the observation that rain streaks often possess higher intensities than neighboring image structures. Mis-detected locations are then refined through morphological processing and principal component analysis (PCA), such that only locations corresponding to real rain streaks are retained. In the second step, we separate image gradients into a background layer and a rain streak layer, thanks to the image quasi-sparsity prior, so that a rain image can be decomposed into a background layer and a rain layer. We validate the effectiveness of our method through quantitative and qualitative evaluations, and show that it can remove rain (even relatively bright rain) from images robustly and outperforms some state-of-the-art rain removal algorithms. Comment: 12 pages, 12 figures.
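The detection step can be sketched directly from the stated intensity cue. The code below is a simplified illustration, assuming a hypothetical threshold margin and structuring element, and it omits the PCA refinement: pixels brighter than their local neighborhood are flagged, then the mask is cleaned morphologically.

```python
import numpy as np
from scipy import ndimage

def detect_rain_pixels(gray, local_size=15, margin=0.04):
    """Return a boolean mask of candidate rain pixels in a [0, 1] gray image."""
    local_mean = ndimage.uniform_filter(gray, size=local_size)
    candidates = gray > local_mean + margin  # brighter than the neighborhood
    # Morphological opening removes isolated false detections; thin
    # connected line segments (rain streaks) tend to survive.
    return ndimage.binary_opening(candidates, structure=np.ones((2, 2)))

rng = np.random.default_rng(0)
gray = rng.random((128, 128))
mask = detect_rain_pixels(gray)
print(mask.sum(), "candidate rain pixels")
```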
Fast Single Image Rain Removal via a Deep Decomposition-Composition Network
Rain effects in images are typically a nuisance for many multimedia and computer vision tasks, and deep learning techniques for removing rain from a single image have been attracting considerable attention. This paper designs a novel multi-task learning architecture, trained end-to-end, to reduce the mapping range from input to output and boost performance. Concretely, a decomposition net is built to split rain images into a clean background layer and a rain layer. Different from previous architectures, our model contains, besides a component representing the desired clean image, an extra component for the rain layer. During the training phase, we further employ a composition structure that reproduces the input from the separated clean image and rain information, improving the quality of the decomposition. Experiments on both synthetic and real images reveal the high-quality recovery achieved by our design and show its superiority over other state-of-the-art methods. Furthermore, our design is also applicable to other layer decomposition tasks such as dust removal. More importantly, our method requires only about 50 ms to process a test image at VGA resolution on a GTX 1080 GPU, significantly faster than the competitors, making it attractive for practical use.
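A minimal sketch of the decomposition-composition training structure, assuming a toy two-head network and a simple additive re-composition standing in for the paper's composition structure; the loss weighting is also an assumption.

```python
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Toy net splitting a rainy image into background and rain layers."""
    def __init__(self, ch=32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.to_background = nn.Conv2d(ch, 3, 3, padding=1)
        self.to_rain = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, rainy):
        f = self.shared(rainy)
        return self.to_background(f), self.to_rain(f)

net = DecompositionNet()
rainy = torch.randn(4, 3, 64, 64)
clean_gt = torch.randn(4, 3, 64, 64)          # paired ground truth
background, rain = net(rainy)
loss_decomp = nn.functional.l1_loss(background, clean_gt)
# Composition structure: the separated layers must re-compose the input.
loss_comp = nn.functional.l1_loss(background + rain, rainy)
loss = loss_decomp + 0.5 * loss_comp          # weighting is an assumption
loss.backward()
```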
Structural Residual Learning for Single Image Rain Removal
To alleviate the adverse effect of rain streaks in image processing tasks, CNN-based single image rain removal methods have recently been proposed. However, the performance of these deep learning methods largely relies on the coverage of rain shapes contained in the pre-collected training rainy-clean image pairs. This makes them easily trapped in overfitting to the training samples, so they do not generalize well to practical rainy images with complex and diverse rain streaks. Against this generalization issue, this study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures. Such a structural residual setting guarantees that the rain layer extracted by the network complies with the prior knowledge of general rain streaks, and thus regularizes the network toward sound rain shapes that can be well extracted from rainy images in both the training and prediction stages. This general regularization naturally leads to better training accuracy and better generalization at test time, even for unseen rain configurations. Such superiority is comprehensively substantiated, both visually and quantitatively, by experiments on synthetic and real datasets in comparison with current state-of-the-art methods.
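One plausible way to realize such a structural residual, shown here purely as an illustrative assumption rather than the paper's construction, is to compose the rain layer from a small bank of learnable rain kernels convolved with predicted coefficient maps, so the residual inherits streak-like structure by design.

```python
import torch
import torch.nn as nn

class StructuralResidual(nn.Module):
    """The rain layer is not predicted freely: it is synthesized as
    learned rain kernels convolved with coefficient maps."""
    def __init__(self, n_kernels=8, kernel_size=9, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, n_kernels, 3, padding=1), nn.ReLU())
        # A small bank of learnable rain kernels shared across the image.
        self.rain_kernels = nn.Conv2d(n_kernels, 3, kernel_size,
                                      padding=kernel_size // 2, bias=False)

    def forward(self, rainy):
        maps = self.encoder(rainy)        # where rain appears
        rain = self.rain_kernels(maps)    # how rain looks (structured)
        return rainy - rain, rain         # background, structured residual

net = StructuralResidual()
background, rain = net(torch.randn(1, 3, 64, 64))
```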
Robust Optical Flow Estimation in Rainy Scenes
Optical flow estimation in rainy scenes is challenging due to the background degradation introduced by rain streaks and rain accumulation effects in the scene. The rain accumulation effect refers to the poor visibility of remote objects under intense rainfall. Most existing optical flow methods are erroneous when applied to rain sequences because the conventional brightness constancy constraint (BCC) and gradient constancy constraint (GCC) generally break down in this situation. Based on the observation that the RGB color channels receive raindrop radiance equally, we introduce a residue channel as a new data constraint to reduce the effect of rain streaks. To handle rain accumulation, our method decomposes the image into a piecewise-smooth background layer and a high-frequency detail layer, and enforces the BCC on the background layer only. Results on both a synthetic dataset and real images show that our algorithm outperforms existing methods on different types of rain sequences. To our knowledge, this is the first optical flow method specifically dealing with rain. Comment: 9 pages, CVPR.
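The residue channel follows directly from the stated observation: if the three color channels share a common additive rain term, the per-pixel difference between the maximum and minimum channels cancels it. A short NumPy check:

```python
import numpy as np

def residue_channel(rgb):
    """rgb: H x W x 3 float image. If each channel is B_c + R with a
    common rain term R, then max_c - min_c removes R exactly."""
    return rgb.max(axis=2) - rgb.min(axis=2)

rng = np.random.default_rng(0)
background = rng.random((64, 64, 3)) * 0.7
rain = rng.random((64, 64, 1)) * 0.3   # common additive term, equal in R, G, B
rainy = background + rain
# The residue channels of the rainy and clean images agree (up to float error).
print(np.abs(residue_channel(rainy) - residue_channel(background)).max())
```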
An Effective Two-Branch Model-Based Deep Network for Single Image Deraining
Removing rain effects from an image is important for various applications such as autonomous driving, drone piloting, and photo editing. Conventional methods rely on heuristics to handcraft various priors to remove or separate the rain effects from an image. Recent deep learning models learn this task end-to-end; however, they often fail to obtain satisfactory results in many realistic scenarios, especially when the observed images suffer from heavy rain. Heavy rain brings not only rain streaks but also a haze-like effect caused by the accumulation of tiny raindrops. Different from existing deep learning deraining methods that mainly focus on handling rain streaks, we design a deep neural network that incorporates a physical rain image model. Specifically, in the proposed model, two branches are designed to handle the rain streaks and the haze-like effect, respectively. An additional submodule is jointly trained to refine the final results, which gives the model the flexibility to control the strength of mist removal. Extensive experiments on several datasets show that our method outperforms the state-of-the-art in both objective assessments and visual quality. Comment: 10 pages, 9 figures, 3 tables.
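A minimal sketch of a two-branch design consistent with this description, where the standard rain-plus-haze imaging model O = t(B + S) + (1 - t)A is an assumption standing in for the paper's exact physical model, and the refinement submodule is omitted:

```python
import torch
import torch.nn as nn

def branch(out_ch):
    """Tiny convolutional branch; depth and width are illustrative."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1))

streak_branch = branch(3)                              # rain streaks S
haze_branch = nn.Sequential(branch(1), nn.Sigmoid())   # transmission t in (0, 1)

observed = torch.rand(2, 3, 64, 64)
S = streak_branch(observed)
t = haze_branch(observed)
# Crude atmospheric light estimate: per-channel maximum intensity.
A = observed.flatten(2).max(dim=2).values.view(2, 3, 1, 1)
# Invert the assumed imaging model O = t * (B + S) + (1 - t) * A.
B = (observed - (1 - t) * A) / t.clamp(min=0.1) - S
```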
Spatial Attentive Single-Image Deraining with a High Quality Real Rain Dataset
Removing rain streaks from a single image has been drawing considerable
attention as rain streaks can severely degrade the image quality and affect the
performance of existing outdoor vision tasks. While recent CNN-based derainers
have reported promising performance, deraining remains an open problem for two
reasons. First, existing synthesized rain datasets have only limited realism,
in terms of modeling real rain characteristics such as rain shape, direction
and intensity. Second, there are no public benchmarks for quantitative
comparisons on real rain images, which makes the current evaluation less
objective. The core challenge is that real world rain/clean image pairs cannot
be captured at the same time. In this paper, we address the single image rain
removal problem in two ways. First, we propose a semi-automatic method that
incorporates temporal priors and human supervision to generate a high-quality
clean image from each input sequence of real rain images. Using this method, we
construct a large-scale dataset of rain/rain-free image pairs
that covers a wide range of natural rain scenes. Second, to better cover the
stochastic distribution of real rain streaks, we propose a novel SPatial
Attentive Network (SPANet) to remove rain streaks in a local-to-global manner.
Extensive experiments demonstrate that our network performs favorably against
the state-of-the-art deraining methods. Comment: Accepted by CVPR'19. Project page: https://stevewongv.github.io/derain-project.htm
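A minimal sketch of a spatial attentive block in this spirit, assuming a toy architecture: a predicted per-pixel attention map gates the features used to estimate the rain layer. The published SPANet builds its attention with recurrent four-directional propagation, which is omitted here.

```python
import torch
import torch.nn as nn

class SpatialAttentiveBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.attention = nn.Sequential(
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid())  # where the rain is
        self.to_rain = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, rainy):
        f = self.features(rainy)
        attn = self.attention(f)         # per-pixel rain likelihood
        rain = self.to_rain(f * attn)    # focus removal on rainy locations
        return rainy - rain, attn

block = SpatialAttentiveBlock()
clean, attention_map = block(torch.rand(1, 3, 64, 64))
# With rain/rain-free pairs, the attention map can also be supervised
# against a rain mask derived from the pair difference.
```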
Direction-aware Feature-level Frequency Decomposition for Single Image Deraining
We present a novel direction-aware feature-level frequency decomposition network for single image deraining. Compared with existing solutions, the proposed network has three compelling characteristics. First, unlike previous algorithms, we perform frequency decomposition at the feature level instead of the image level, allowing both low-frequency maps containing structures and high-frequency maps containing details to be continuously refined during training. Second, we establish communication channels between the low-frequency and high-frequency maps: structures are interactively captured from the high-frequency maps and added back to the low-frequency maps while, simultaneously, details are extracted from the low-frequency maps and sent back to the high-frequency maps. This removes rain streaks while preserving the more delicate features of the input image. Third, unlike existing algorithms that use convolutional filters consistent in all directions, we propose a direction-aware filter that captures the direction of rain streaks in order to purge the input images of rain streaks more effectively and thoroughly. We extensively evaluate the proposed approach on three representative datasets, and the experimental results corroborate that our approach consistently outperforms state-of-the-art deraining algorithms.
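A minimal sketch of feature-level frequency decomposition with two-way communication, where the blur-based split and the exchange convolutions are illustrative assumptions; the direction-aware filter is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrequencyExchange(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.high_to_low = nn.Conv2d(ch, ch, 3, padding=1)  # send structures back
        self.low_to_high = nn.Conv2d(ch, ch, 3, padding=1)  # send details back

    def forward(self, feat):
        # Feature-level decomposition: a blur keeps low frequencies,
        # the residual keeps high frequencies.
        low = F.avg_pool2d(feat, 5, stride=1, padding=2)
        high = feat - low
        # Communication channels between the two frequency bands.
        low = low + self.high_to_low(high)
        high = high + self.low_to_high(low)
        return low, high

ex = FrequencyExchange()
low, high = ex(torch.randn(1, 32, 64, 64))
```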
Removing rain streaks by a linear model
Removing rain streaks from a single image continues to draw attention in outdoor vision systems. In this paper, we present an efficient method to remove rain streaks. First, the location map of rain pixels needs to be known as precisely as possible, to which end we implement a relatively accurate detection of rain streaks by utilizing two characteristics of rain streaks. The key component of our method is to represent the intensity of each detected rain pixel using a linear model: $y = \alpha x + \beta$, where $y$ is the observed intensity of a rain pixel and $x$ represents the intensity of the background (i.e., before being affected by rain). To solve $\alpha$ and $\beta$ for each detected rain pixel, we concentrate on a window centered around it and form an $L_2$-norm cost function by considering all detected rain pixels within the window, where the corresponding rain-removed intensity of each detected rain pixel is estimated from some neighboring non-rain pixels. By minimizing this cost function, we determine $\alpha$ and $\beta$ so as to construct the final rain-removed pixel intensity. Compared with several state-of-the-art works, our proposed method removes rain streaks from a single color image much more efficiently: it offers not only better visual quality but also a speed-up ranging from several times to an order of magnitude. Comment: 12 pages, 12 figures.
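The per-pixel fit above is a small least-squares problem. The sketch below, assuming an L2 fit and hypothetical data in place of a real detection window, stacks a window's observed rain-pixel intensities y against their background estimates x and solves y = alpha * x + beta:

```python
import numpy as np

def fit_linear_model(observed, background_est):
    """Least-squares fit of (alpha, beta) over all detected rain pixels
    in a window; inputs are 1-D arrays of equal length."""
    A = np.stack([background_est, np.ones_like(background_est)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, observed, rcond=None)
    return alpha, beta

rng = np.random.default_rng(0)
x = rng.random(25)                            # estimated background intensities
y = 0.7 * x + 0.2 + rng.normal(0, 0.01, 25)   # observed rain-pixel intensities
alpha, beta = fit_linear_model(y, x)
restored = (y - beta) / alpha                 # invert the model to remove rain
```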