Non-locally Enhanced Encoder-Decoder Network for Single Image De-raining
Single image rain streaks removal has recently witnessed substantial progress
due to the development of deep convolutional neural networks. However, existing
deep learning based methods either focus on the entrance and exit of the
network, decomposing the input image into high- and low-frequency components
and employing residual learning to reduce the mapping range, or introduce a
cascaded learning scheme that decomposes rain streak removal into multiple
stages. These methods treat the convolutional neural network as an
encapsulated end-to-end mapping module without examining the rationale and
merits of the network design. In this paper, we delve
into an effective end-to-end neural network structure for stronger feature
expression and spatial correlation learning. Specifically, we propose a
non-locally enhanced encoder-decoder network framework, which consists of a
pooling-indices-embedded encoder-decoder network that efficiently learns
increasingly abstract feature representations for more accurate rain streak
modeling while preserving image detail. The proposed
encoder-decoder framework is composed of a series of non-locally enhanced dense
blocks that are designed to not only fully exploit hierarchical features from
all the convolutional layers but also well capture the long-distance
dependencies and structural information. Extensive experiments on synthetic and
real datasets demonstrate that the proposed method can effectively remove
rain streaks from rainy images of various densities while preserving image
details, achieving significant improvements over recent state-of-the-art
methods.
Comment: Accepted to ACM Multimedia 201
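The non-local enhancement described above can be understood as self-attention over all spatial positions: each position aggregates features from every other position, weighted by pairwise similarity. The following is a minimal NumPy sketch of that operation; the function name `nonlocal_block` and the flattened-feature layout are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nonlocal_block(x):
    """Minimal non-local operation: every spatial position aggregates
    features from all positions, weighted by dot-product similarity.
    x: array of shape (h*w, c), a feature map flattened over space."""
    affinity = x @ x.T                               # (hw, hw) pairwise similarities
    affinity -= affinity.max(axis=1, keepdims=True)  # numerical stability for softmax
    weights = np.exp(affinity)
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over all positions
    return x + weights @ x                           # residual: add globally aggregated features

# Toy feature map: a 4x4 spatial grid with 8 channels
feat = np.random.default_rng(0).standard_normal((16, 8))
out = nonlocal_block(feat)
print(out.shape)  # (16, 8)
```

Because the affinity matrix relates every position to every other, the block captures the long-distance dependencies that plain convolutions, with their limited receptive fields, cannot.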
Rain Removal in Traffic Surveillance: Does it Matter?
Varying weather conditions, including rainfall and snowfall, are generally
regarded as a challenge for computer vision algorithms. One proposed solution
to the challenges induced by rain and snowfall is to artificially remove the
rain from images or video using rain removal algorithms. It is the promise of
these algorithms that the rain-removed image frames will improve the
performance of subsequent segmentation and tracking algorithms. However, rain
removal algorithms are typically evaluated on their ability to remove synthetic
rain on a small subset of images. Currently, their behavior is unknown on
real-world videos when integrated with a typical computer vision pipeline. In
this paper, we review the existing rain removal algorithms and propose a new
dataset that consists of 22 traffic surveillance sequences under a broad
variety of weather conditions that all include either rain or snowfall. We
propose a new evaluation protocol that evaluates the rain removal algorithms on
their ability to improve the performance of subsequent segmentation, instance
segmentation, and feature tracking algorithms under rain and snow. If
successful, the de-rained frames of a rain removal algorithm should improve
segmentation performance and increase the number of accurately tracked
features. The results show that a recent single-frame-based rain removal
algorithm improves segmentation performance by 19.7% on our proposed
dataset, but decreases feature tracking performance and shows mixed results
with recent instance segmentation methods. However, the best video-based
rain removal algorithm improves feature tracking accuracy by 7.72%.
Comment: Published in IEEE Transactions on Intelligent Transportation Systems
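The evaluation protocol above scores a rain removal algorithm by the relative change it induces in a downstream metric. A minimal sketch of that comparison, with an illustrative helper name and made-up metric values (not figures from the paper's dataset):

```python
def relative_improvement(baseline, derained):
    """Percent change in a downstream metric (e.g. a segmentation score)
    when the pipeline runs on de-rained frames instead of raw frames.
    Positive means the rain removal step helped; negative means it hurt."""
    return 100.0 * (derained - baseline) / baseline

# Illustrative only: a baseline score of 0.305 rising to 0.365 after de-raining
print(round(relative_improvement(0.305, 0.365), 1))  # 19.7

# A drop in the metric yields a negative improvement
print(round(relative_improvement(0.50, 0.45), 1))  # -10.0
```

Scoring de-raining by its effect on the downstream task, rather than by pixel similarity to a rain-free reference, is what lets the protocol run on real-world videos where no such reference exists.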