583 research outputs found
Structural Residual Learning for Single Image Rain Removal
To alleviate the adverse effect of rain streaks in image processing tasks,
CNN-based single image rain removal methods have been recently proposed.
However, the performance of these deep learning methods largely depends on the
range of rain shapes covered by the pre-collected training rainy-clean image
pairs. This makes them prone to overfitting the training samples and unable to
generalize well to practical rainy images with complex and diverse rain
streaks. To address this generalization issue, this study proposes a new
network architecture that enforces the output residual of the network to
possess intrinsic rain structures. This structural residual setting ensures
that the rain layer extracted by the network complies with prior knowledge of
general rain streaks, and thus constrains the network to extract plausible rain
shapes from rainy images in both the training and prediction stages. This
general regularization naturally leads to better training accuracy and better
generalization at test time, even for unseen rain configurations. This
superiority is comprehensively substantiated, both visually and quantitatively,
by experiments on synthetic and real datasets in comparison with current
state-of-the-art methods.
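The abstract leaves the exact architecture open, so the following is a minimal
sketch of one way to enforce a structural residual, assuming the rain layer is
modeled as a few learnable rain kernels convolved with predicted rain maps; the
class and parameter names are illustrative, not the paper's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuralResidualHead(nn.Module):
    """Toy head whose output residual (the rain layer) is, by construction,
    a sum of learnable rain kernels convolved with predicted rain maps, so
    the extracted rain inherits streak-like structure."""
    def __init__(self, feat_ch=32, n_kernels=6, ksize=9):
        super().__init__()
        # predict non-negative rain maps from backbone features
        self.map_head = nn.Sequential(
            nn.Conv2d(feat_ch, n_kernels, 3, padding=1), nn.ReLU())
        # learnable rain kernels, one per rain map
        self.rain_kernels = nn.Parameter(0.1 * torch.randn(n_kernels, 1, ksize, ksize))
        self.pad = ksize // 2

    def forward(self, feats):
        maps = self.map_head(feats)                       # (B, K, H, W)
        # depthwise convolution: each map is convolved with its own kernel
        rain = F.conv2d(maps, self.rain_kernels,
                        padding=self.pad, groups=maps.shape[1])
        return rain.sum(dim=1, keepdim=True)              # structured rain layer

# usage: rain = StructuralResidualHead()(torch.randn(1, 32, 64, 64))
```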
Rain Streak Removal for Single Image via Kernel Guided CNN
Rain streak removal is an important issue and has recently been investigated
extensively. Existing methods, especially the newly emerged deep learning
methods, could remove rain streaks well in many cases. However, an essential
factor in the generative procedure of rain streaks, i.e., the motion blur that
produces their line-pattern appearance, has been neglected by deep learning
deraining approaches, resulting in over-deraining or under-deraining. In this
paper, we propose a novel rain streak removal
approach using a kernel guided convolutional neural network (KGCNN), achieving
the state-of-the-art performance with simple network architectures. We first
model the rain streak interference via its motion blur mechanism. Our
framework then learns the motion blur kernel, which is determined by two
factors, angle and length, using a plain neural network (denoted the parameter
net) applied to a patch of the texture component. After a dimensionality
stretching operation, the learned motion blur kernel is stretched into a
degradation map with the same spatial size as the rainy patch. The stretched
degradation map, together with the texture patch, is then fed into a derain
convolutional network, which is a typical ResNet
architecture and trained to output the rain streaks with the guidance of the
learned motion blur kernel. Experiments conducted on extensive synthetic and
real data demonstrate the effectiveness of the proposed method, which preserves
the texture and the contrast while removing the rain streaks.
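To make the two-parameter kernel model concrete, here is a hedged sketch of
building a motion blur kernel from angle and length and stretching it into a
degradation map; the exact stretching layout and the kernel support `size=15`
are our assumptions:

```python
import numpy as np

def motion_blur_kernel(angle_deg, length, size=15):
    """Build a linear motion-blur kernel from its two parameters
    (angle and length), per the abstract's generative rain model."""
    k = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # rasterize a line segment of the given length through the center
    for t in np.linspace(-length / 2, length / 2, num=8 * size):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            k[y, x] = 1.0
    return k / k.sum()

def stretch_to_degradation_map(kernel, h, w):
    """'Dimensionality stretching': broadcast the flattened kernel so every
    spatial location of the patch sees the kernel as extra channels."""
    flat = kernel.reshape(-1)                        # (size*size,)
    return np.tile(flat[:, None, None], (1, h, w))   # (size*size, h, w)
```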
Rain Removal By Image Quasi-Sparsity Priors
Rain streaks will inevitably be captured by some outdoor vision systems,
which lowers the image's visual quality and also interferes with various computer
vision applications. We present a novel rain removal method in this paper,
which consists of two steps, i.e., detection of rain streaks and reconstruction
of the rain-removed image. An accurate detection of rain streaks determines the
quality of the overall performance. To this end, we first detect rain streaks
according to pixel intensities, motivated by the observation that rain streaks
often possess higher intensities compared to other neighboring image
structures. Some mis-detected locations are then refined through a
morphological processing and the principal component analysis (PCA) such that
only locations corresponding to real rain streaks are retained. In the second
step, we separate image gradients into a background layer and a rain streak
layer, thanks to the image quasi-sparsity prior, so that a rain image can be
decomposed into a background layer and a rain layer. We validate the
effectiveness of our method through quantitative and qualitative evaluations.
We show that our method can remove rain (even for some relatively bright rain)
from images robustly and outperforms some state-of-the-art rain removal
algorithms.
Comment: 12 pages, 12 figures
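A rough sketch of the first step, assuming intensities in [0, 1]; the paper
refines detections with morphological processing and PCA, while this sketch
substitutes a simple elongation test for the PCA stage:

```python
import numpy as np
from scipy import ndimage

def detect_rain_candidates(gray, thresh=0.08, min_len=5):
    """First-pass rain detection: pixels noticeably brighter than a local
    smooth estimate, followed by morphological filtering that keeps only
    thin, elongated (streak-like) components."""
    smooth = ndimage.uniform_filter(gray, size=15)
    candidates = (gray - smooth) > thresh     # rain is brighter than neighbors
    labels, n = ndimage.label(candidates)
    keep = np.zeros_like(candidates)
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        height = ys.max() - ys.min() + 1
        width = xs.max() - xs.min() + 1
        # streaks are elongated; discard blob-like false detections
        if max(height, width) >= min_len and max(height, width) >= 2 * min(height, width):
            keep[labels == i] = True
    return keep
```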
Rain O'er Me: Synthesizing real rain to derain with data distillation
We present a supervised technique for learning to remove rain from images
without using synthetic rain software. The method is based on a two-stage data
distillation approach: 1) A rainy image is first paired with a coarsely
derained version using a simple filtering technique ("rain-to-clean"). 2)
Then a clean image is randomly matched with the rainy soft-labeled pair.
Through a shared deep neural network, the rain that is removed from the first
image is then added to the clean image to generate a second pair
("clean-to-rain"). The neural network simultaneously learns to map both images
such that high resolution structure in the clean images can inform the
deraining of the rainy images. Demonstrations show that this approach can
address visual characteristics of rain that are not easily synthesized by
software in the usual way.
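A minimal sketch of the two-stage pairing, with a median filter standing in
for the unspecified "simple filtering technique"; the shared network trained
on the resulting pairs is omitted:

```python
import numpy as np
from scipy import ndimage

def make_training_pairs(rainy, clean):
    """Two-stage data distillation: rainy and clean are (H, W, 3) float
    images in [0, 1]; clean is a randomly matched, unrelated clean image."""
    # Stage 1 ("rain-to-clean"): coarse derain by smoothing the rainy image.
    coarse = ndimage.median_filter(rainy, size=(5, 5, 1))
    rain_estimate = np.clip(rainy - coarse, 0.0, 1.0)
    # Stage 2 ("clean-to-rain"): transplant the removed rain onto the clean
    # image to form a second, soft-labeled rainy/clean pair.
    pseudo_rainy = np.clip(clean + rain_estimate, 0.0, 1.0)
    return (rainy, coarse), (pseudo_rainy, clean)
```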
Removing rain streaks by a linear model
Removing rain streaks from a single image continues to draw attention today
in outdoor vision systems. In this paper, we present an efficient method to
remove rain streaks. First, the location map of rain pixels needs to be known
as precisely as possible; to this end, we implement a relatively accurate
detection of rain streaks by utilizing two characteristics of rain streaks. The
key component of our method is to represent the intensity of each detected rain
pixel using a linear model: $O = \alpha B + \beta$, where $O$ is the observed
intensity of a rain pixel and $B$ represents the intensity of the background
(i.e., before being affected by rain). To solve for $\alpha$ and $\beta$ at
each detected rain pixel, we concentrate on a window centered around it and
form an $\ell_2$-norm cost function over all detected rain pixels within the
window, where the corresponding rain-removed intensity of each detected rain
pixel is estimated from some neighboring non-rain pixels. By minimizing this
cost function, we determine $\alpha$ and $\beta$ so as to construct the final
rain-removed pixel intensity. Compared with several state-of-the-art works, our
proposed method can remove rain streaks from a single color image much more
efficiently: it offers not only better visual quality but also a speed-up
ranging from several times to an order of magnitude.
Comment: 12 pages, 12 figures
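To make the per-window fit concrete, here is a small sketch; since the cost
function's norm is garbled in this copy of the abstract, ordinary least squares
is used as an assumption:

```python
import numpy as np

def fit_linear_model(O_win, B_hat_win):
    """Least-squares fit of O ≈ alpha * B + beta over the detected rain
    pixels in one window. O_win: observed intensities; B_hat_win: their
    rain-removed estimates from neighboring non-rain pixels."""
    A = np.stack([B_hat_win, np.ones_like(B_hat_win)], axis=1)  # (n, 2)
    (alpha, beta), *_ = np.linalg.lstsq(A, O_win, rcond=None)
    return alpha, beta

# The background at the center rain pixel is then recovered as
# B = (O - beta) / alpha (assuming alpha != 0).
```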
Recurrent Squeeze-and-Excitation Context Aggregation Net for Single Image Deraining
Rain streaks can severely degrade visibility, causing many current computer
vision algorithms to fail. It is therefore necessary to remove the rain
from images. We propose a novel deep network architecture based on deep
convolutional and recurrent neural networks for single image deraining. As
contextual information is very important for rain removal, we first adopt a
dilated convolutional neural network to acquire a large receptive field. To
better fit the rain removal task, we also modify the network. In heavy rain,
rain streaks have various directions and shapes, which can be regarded as the
accumulation of multiple rain streak layers. We assign different alpha-values
to the various rain streak layers according to their intensity and
transparency by incorporating a squeeze-and-excitation block. Since rain streak
layers
overlap with each other, it is not easy to remove the rain in one stage. So we
further decompose the rain removal into multiple stages. A recurrent neural
network is incorporated to preserve the useful information from previous stages
and benefit the rain removal in later stages. We conduct extensive experiments
on both synthetic and real-world datasets. Our proposed method outperforms the
state-of-the-art approaches under all evaluation metrics. Codes and
supplementary material are available at our project webpage:
https://xialipku.github.io/RESCAN
Comment: Accepted by ECCV
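As a rough illustration of the two main ingredients, per-channel alpha values
from a squeeze-and-excitation block and dilated convolutions for a large
receptive field, here is a simplified single-stage sketch; the actual RESCAN
also passes recurrent state between stages, which is omitted here:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: per-channel alpha values that reweight the
    feature maps, loosely mirroring per-layer rain transparency."""
    def __init__(self, ch, r=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // r), nn.ReLU(),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        alpha = self.fc(x.mean(dim=(2, 3)))       # squeeze: global pooling
        return x * alpha[:, :, None, None]        # excite: channel reweighting

class DilatedSEStage(nn.Module):
    """One deraining stage: dilated convs for a large receptive field,
    followed by SE reweighting. In a multi-stage setting, the predicted
    rain residue is subtracted from the image before the next stage."""
    def __init__(self, ch=24):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=4, dilation=4), nn.ReLU())
        self.se = SEBlock(ch)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        return self.out(self.se(self.body(x)))    # predicted rain residue
```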
Deep joint rain and haze removal from single images
Rain removal from a single image is a challenge which has been studied for a
long time. In this paper, a novel convolutional neural network based on wavelet
and dark channel is proposed. On one hand, rain streaks correspond to the
high-frequency component of the image, so the Haar wavelet transform is a good
choice to separate the rain streaks from the background to some extent. More
specifically, the LL subband of a rain image is more inclined to express the
background information, while the LH, HL, and HH subbands tend to represent the
rain streaks and the edges. On the other hand, the accumulation of rain
streaks over long distances makes the rain image look as if veiled by haze. We
extract the dark channel of the rain image as a feature map in the network. By
introducing this mapping between the dark channels of the input and output
images, we achieve haze removal in an indirect way. All of the parameters are
optimized by
back-propagation. Experiments on both synthetic and real-world datasets reveal
that our method outperforms other state-of-the-art methods from a qualitative
and quantitative perspective.
Comment: 6 pages
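The two feature extractors described above can be sketched directly; the
network that consumes the subbands and the dark channel map is omitted:

```python
import numpy as np
from scipy import ndimage

def haar_subbands(img):
    """One-level 2D Haar transform (img: 2-D array with even sides).
    LL carries the smooth background; LH/HL/HH carry high-frequency
    content such as rain streaks and edges."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4
    lh = (a - b + c - d) / 4
    hl = (a + b - c - d) / 4
    hh = (a - b - c + d) / 4
    return ll, lh, hl, hh

def dark_channel(rgb, patch=15):
    """Dark channel: per-pixel channel minimum followed by a local minimum
    filter; high values indicate the haze-like veil of accumulated rain."""
    return ndimage.minimum_filter(rgb.min(axis=2), size=patch)
```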
Fast Single Image Rain Removal via a Deep Decomposition-Composition Network
Rain effects in images are typically detrimental to many multimedia and
computer vision tasks. For removing rain effects from a single image, deep
learning techniques have been attracting considerable attention. This paper
designs a novel multi-task learning architecture, trained in an end-to-end
manner, to reduce the mapping range from input to output and boost
performance. Concretely, a
decomposition net is built to split rain images into clean background and rain
layers. Different from previous architectures, our model consists of, besides a
component representing the desired clean image, an extra component for the rain
layer. During the training phase, we further employ a composition structure to
reproduce the input from the separated clean image and rain information,
improving the quality of the decomposition. Experiments on both synthetic and
real images reveal the high-quality recovery achieved by our design and show
its superiority over other state-of-the-art methods.
Furthermore, our design is also applicable to other layer decomposition tasks
like dust removal. More importantly, our method requires only about 50 ms to
process a test image at VGA resolution on a GTX 1080 GPU, significantly faster
than its competitors, making it attractive for practical use.
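A toy sketch of the decomposition net and the composition structure used
during training; layer sizes and names are our own simplification:

```python
import torch
import torch.nn as nn

class DecompositionNet(nn.Module):
    """Toy decomposition net: one shared encoder and two heads that split a
    rainy image into a clean background layer and a rain layer."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.bg_head = nn.Conv2d(ch, 3, 3, padding=1)
        self.rain_head = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        f = self.enc(x)
        return self.bg_head(f), self.rain_head(f)

def composition_loss(rainy, bg, rain):
    """Composition structure: the separated layers must reproduce the input."""
    return nn.functional.l1_loss(bg + rain, rainy)
```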
Robust Optical Flow Estimation in Rainy Scenes
Optical flow estimation in rainy scenes is challenging due to background
degradation introduced by rain streaks and rain accumulation effects in the
scene. The rain accumulation effect refers to the poor visibility of distant
objects due to intense rainfall. Most existing optical flow methods are
erroneous when
applied to rain sequences because the conventional brightness constancy
constraint (BCC) and gradient constancy constraint (GCC) generally break down
in this situation. Based on the observation that the RGB color channels receive
raindrop radiance equally, we introduce a residue channel as a new data
constraint to reduce the effect of rain streaks. To handle rain accumulation,
our method decomposes the image into a piecewise-smooth background layer and a
high-frequency detail layer. It also enforces the BCC on the background layer
only. Results on both synthetic datasets and real images show that our algorithm
outperforms existing methods on different types of rain sequences. To our
knowledge, this is the first optical flow method specifically dealing with
rain.
Comment: 9 pages, CVPR
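The stated observation that the RGB channels receive raindrop radiance equally
suggests a rain-invariant quantity built from channel differences; the
one-liner below is our illustrative construction and may differ from the
paper's exact residue channel definition:

```python
import numpy as np

def residue_channel(rgb):
    """If raindrop radiance r is added equally to all color channels
    (I_c = B_c + r), channel differences cancel r; the per-pixel spread
    max_c(I) - min_c(I) therefore depends only on the background."""
    return rgb.max(axis=2) - rgb.min(axis=2)
```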
A Model-driven Deep Neural Network for Single Image Rain Removal
Deep learning (DL) methods have achieved state-of-the-art performance in the
task of single image rain removal. Most current DL architectures, however,
still lack sufficient interpretability and are not fully integrated with the
physical structures of general rain streaks. To address this issue, in this
paper, we propose a model-driven deep neural network for the task with fully
interpretable network structures. Specifically, based on the convolutional
dictionary learning mechanism for representing rain, we propose a novel single
image deraining model and utilize the proximal gradient descent technique to
design an iterative algorithm containing only simple operators for solving the
model. Such a simple implementation scheme allows us to unfold it into a
new deep network architecture, called rain convolutional dictionary network
(RCDNet), with almost every network module one-to-one corresponding to each
operation involved in the algorithm. By training the proposed RCDNet
end-to-end, all the rain kernels and proximal operators can be automatically
extracted, faithfully characterizing the features of both the rain and clean
background layers and thus naturally leading to better deraining performance,
especially in real scenarios. Comprehensive experiments substantiate the
superiority of the proposed network, especially its strong generality to
diverse testing scenarios and the good interpretability of all its modules,
compared with state-of-the-art methods both visually and quantitatively. The
source code is available at https://github.com/hongwang01/RCDNet
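A schematic sketch of one unfolded proximal-gradient stage in the spirit of
the abstract: a gradient step on the rain maps through the convolutional rain
kernels, followed by a small learned network acting as the proximal operator.
Shapes and names are our simplification, not the released RCDNet code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnfoldedStage(nn.Module):
    """One unfolded iteration: rain layer R = K * M (convolutional
    dictionary), a gradient step on M for the data term, then a learned
    'proxNet'. The background update is omitted for brevity."""
    def __init__(self, n_maps=8, ksize=9):
        super().__init__()
        self.kernels = nn.Parameter(0.1 * torch.randn(3, n_maps, ksize, ksize))
        self.step = nn.Parameter(torch.tensor(0.1))     # learned step size
        self.prox = nn.Sequential(                      # learned proximal op
            nn.Conv2d(n_maps, n_maps, 3, padding=1), nn.ReLU(),
            nn.Conv2d(n_maps, n_maps, 3, padding=1))
        self.pad = ksize // 2

    def forward(self, rainy, bg, maps):
        # current rain layer from the convolutional dictionary
        rain = F.conv2d(maps, self.kernels, padding=self.pad)
        resid = rainy - bg - rain
        # gradient of ||rainy - bg - K*M||^2 w.r.t. M, via transposed conv
        grad = -F.conv_transpose2d(resid, self.kernels, padding=self.pad)
        return self.prox(maps - self.step * grad)       # updated rain maps
```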