4 research outputs found
Spatial Attentive Single-Image Deraining with a High Quality Real Rain Dataset
Removing rain streaks from a single image has been drawing considerable
attention as rain streaks can severely degrade the image quality and affect the
performance of existing outdoor vision tasks. While recent CNN-based derainers
have reported promising performances, deraining remains an open problem for two
reasons. First, existing synthesized rain datasets have only limited realism,
in terms of modeling real rain characteristics such as rain shape, direction
and intensity. Second, there are no public benchmarks for quantitative
comparisons on real rain images, which makes the current evaluation less
objective. The core challenge is that real-world rain/clean image pairs cannot
be captured at the same time. In this paper, we address the single-image rain
removal problem in two ways. First, we propose a semi-automatic method that
incorporates temporal priors and human supervision to generate a high-quality
clean image from each input sequence of real rain images. Using this method, we
construct a large-scale dataset of rain/rain-free image pairs
that covers a wide range of natural rain scenes. Second, to better cover the
stochastic distribution of real rain streaks, we propose a novel SPatial
Attentive Network (SPANet) to remove rain streaks in a local-to-global manner.
Extensive experiments demonstrate that our network performs favorably against
the state-of-the-art deraining methods.
Comment: Accepted by CVPR'19. Project page:
https://stevewongv.github.io/derain-project.htm
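The temporal prior behind the dataset construction above can be illustrated with a minimal sketch: because rain streaks rarely occupy the same pixel in every frame, a per-pixel temporal median over an aligned rain sequence approximates the clean background. The toy moving-streak sequence below is an illustrative assumption; the paper's actual pipeline is semi-automatic and adds human supervision on real videos.

```python
import numpy as np

def temporal_median_clean(frames):
    """Approximate a rain-free image as the per-pixel median over a
    sequence of aligned rainy frames: streaks rarely cover the same
    pixel in every frame, so the median recovers the background."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Toy sequence: a constant background with a vertical streak that
# occupies different columns in each frame (synthetic illustration).
background = np.full((32, 32), 0.5)
frames = []
for t in range(9):
    f = background.copy()
    f[:, 3 * t:3 * t + 2] = 1.0   # streak moves across the frame
    frames.append(f)

clean = temporal_median_clean(frames)
print(np.allclose(clean, background))  # True: streaks are rejected
```

Each pixel is covered by a streak in at most one of the nine frames, so the median discards it; real sequences need registration and outlier handling, which is where the human supervision comes in.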
Rain Streak Removal for Single Image via Kernel Guided CNN
Rain streak removal is an important problem and has recently been investigated
extensively. Existing methods, especially the newly emerged deep learning
methods, can remove rain streaks well in many cases. However, the essential
factor in the generative procedure of rain streaks, i.e., the motion blur that
gives them their line-pattern appearance, is neglected by deep-learning-based
deraining approaches, resulting in over-derained or under-derained outputs. In
this paper, we propose a novel rain streak removal
approach using a kernel guided convolutional neural network (KGCNN), achieving
the state-of-the-art performance with simple network architectures. We first
model the rain streak interference with its motion blur mechanism. Our
framework then learns the motion blur kernel, which is determined by two
factors, angle and length, with a plain neural network (denoted the parameter
net) from a patch of the texture component. After a dimensionality stretching
operation, the learned motion blur kernel is
stretched into a degradation map with the same spatial size as the rainy patch.
The stretched degradation map together with the texture patch is subsequently
input into a derain convolutional network, which is a typical ResNet
architecture and trained to output the rain streaks with the guidance of the
learned motion blur kernel. Experiments conducted on extensive synthetic and
real data demonstrate the effectiveness of the proposed method, which preserves
the texture and the contrast while removing the rain streaks.
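The two steps described above can be sketched in a few lines: build a linear motion-blur kernel from the predicted angle and length, then "stretch" its coefficients over the spatial extent of the patch to form the degradation map. The line-rasterization construction and the tiling rule below are simple assumptions sketched from the abstract, not the paper's exact code.

```python
import numpy as np

def motion_blur_kernel(length, angle_deg, size=15):
    """Build a normalized linear motion-blur kernel defined by its
    length and angle -- the two factors the parameter net predicts.
    (Simple line rasterization; an illustrative assumption.)"""
    kernel = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 10 * size):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()

def stretch_to_degradation_map(kernel, patch_shape):
    """'Dimensionality stretching': flatten the kernel and tile each
    coefficient over the patch, giving a (k*k, H, W) map with the
    same spatial size as the rainy patch."""
    h, w = patch_shape
    return np.tile(kernel.ravel()[:, None, None], (1, h, w))

k = motion_blur_kernel(length=9, angle_deg=45, size=15)
deg_map = stretch_to_degradation_map(k, (64, 64))
print(deg_map.shape)  # (225, 64, 64)
```

The stretched map can then be concatenated with the texture patch along the channel axis, so every spatial position of the derain network sees the full kernel as conditioning input.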
MBA-RainGAN: Multi-branch Attention Generative Adversarial Network for Mixture of Rain Removal from Single Images
Rain severely hampers the visibility of scene objects when images are
captured through glass on heavily rainy days. We observe three intriguing
phenomena: 1) rain is a mixture of raindrops, rain streaks and rainy
haze; 2) the depth from the camera determines the degrees of object visibility,
where objects nearby and faraway are visually blocked by rain streaks and rainy
haze, respectively; and 3) raindrops on the glass randomly affect the object
visibility across the whole image space. We are the first to consider that the
overall visibility of objects is determined by the mixture of rain (MOR).
However, existing solutions and established datasets lack full consideration of
the MOR. In this work, we first formulate a new rain imaging model; we then
enrich the popular RainCityscapes dataset by adding raindrops, yielding
RainCityscapes++. Furthermore, we propose a multi-branch attention generative
adversarial network (termed MBA-RainGAN) to fully remove the MOR. The
experiments show clear visual and numerical improvements of our approach over
state-of-the-art methods on RainCityscapes++. The code and dataset will be made
available.
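The three observations above suggest a depth-dependent composition: haze grows with depth via atmospheric scattering, streaks dominate nearby objects, and glass raindrops corrupt random regions regardless of depth. The sketch below composes these three degradations under that reading; the weighting scheme and raindrop model are assumptions drawn from the abstract, not the paper's exact imaging model.

```python
import numpy as np

def mixture_of_rain(clean, depth, streaks, haze_color=0.8,
                    drop_mask=None, beta=1.0):
    """Illustrative mixture-of-rain (MOR) imaging model: rainy haze
    grows with depth, rain streaks attenuate with depth (they block
    nearby objects), and glass raindrops affect random regions
    independently of depth. An assumption-level sketch."""
    # Haze: scattering model with transmission t = exp(-beta * depth)
    t = np.exp(-beta * depth)
    hazy = clean * t + haze_color * (1 - t)
    # Streaks: additive, stronger for nearby (small-depth) pixels
    rainy = hazy + streaks * (1 - depth)
    # Raindrops: blend masked glass regions toward a haze-like value
    if drop_mask is not None:
        rainy = np.where(drop_mask, 0.5 * (rainy + haze_color), rainy)
    return np.clip(rainy, 0.0, 1.0)

rng = np.random.default_rng(1)
clean = rng.random((16, 16))
depth = np.linspace(0, 1, 16)[None, :].repeat(16, axis=0)  # far -> right
streaks = (rng.random((16, 16)) < 0.1) * 0.6
drops = rng.random((16, 16)) < 0.05
rainy = mixture_of_rain(clean, depth, streaks, drop_mask=drops)
print(rainy.shape)  # (16, 16)
```

A model of this form is also what lets a synthetic benchmark such as RainCityscapes++ pair each clean image with a controllable MOR-degraded counterpart.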
Deep Dense Multi-scale Network for Snow Removal Using Semantic and Geometric Priors
Images captured in snowy days suffer from noticeable degradation of scene
visibility, which degenerates the performance of current vision-based
intelligent systems. Removing snow from images thus is an important topic in
computer vision. In this paper, we propose a Deep Dense Multi-Scale Network
(\textbf{DDMSNet}) for snow removal by exploiting semantic and geometric
priors. As images captured outdoors often share similar scenes and their
visibility varies with depth from the camera, such semantic and geometric
information provides a strong prior for snowy image restoration. We incorporate
the semantic and geometric maps as input and learn the semantic-aware and
geometry-aware representation to remove snow. In particular, we first create a
coarse network to remove snow from the input images. Then, the coarsely
desnowed images are fed into another network to obtain the semantic and
geometric labels. Finally, we design a DDMSNet to learn semantic-aware and
geometry-aware representation via a self-attention mechanism to produce the
final clean images. Experiments on public synthetic and real-world snowy
images verify the superiority of the proposed method, offering better results
both quantitatively and qualitatively.
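The self-attention step in the final stage above, which fuses semantic-aware and geometry-aware features across spatial positions, can be sketched as standard dot-product attention over a flattened feature map. The projection matrices and layout here are illustrative assumptions; the paper's exact attention block may differ.

```python
import numpy as np

def self_attention(features, wq, wk, wv):
    """Minimal spatial self-attention over a (C, H, W) feature map:
    every position attends to every other, so snow-free context can
    inform occluded regions. Projections are illustrative."""
    c, h, w = features.shape
    x = features.reshape(c, h * w)            # (C, N) with N = H*W
    q, k, v = wq @ x, wk @ x, wv @ x          # linear projections
    logits = q.T @ k / np.sqrt(q.shape[0])    # (N, N) affinities
    logits -= logits.max(axis=1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)   # softmax over positions
    out = v @ attn.T                          # aggregate values
    return out.reshape(c, h, w)

rng = np.random.default_rng(2)
c, h, w = 8, 4, 4
feats = rng.standard_normal((c, h, w))
wq = rng.standard_normal((c, c))
wk = rng.standard_normal((c, c))
wv = rng.standard_normal((c, c))
out = self_attention(feats, wq, wk, wv)
print(out.shape)  # (8, 4, 4)
```

Note the (N, N) attention matrix makes this quadratic in the number of pixels, which is why such blocks are typically applied on downsampled feature maps rather than full-resolution images.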