Deraining and Desnowing Algorithm on Adaptive Tolerance and Dual-tree Complex Wavelet Fusion
Severe weather conditions such as rain and snow often degrade the visual quality of video imaging systems, and traditional deraining and desnowing methods rarely consider adaptive parameters. To improve video deraining and desnowing, this paper proposes an algorithm based on adaptive tolerance and dual-tree complex wavelet fusion, which can be applied in security surveillance, military defense, biological monitoring, remote sensing, and other fields. First, the paper introduces the adaptive tolerance method for videos of dynamic scenes. Second, the dual-tree complex wavelet fusion algorithm is analyzed: a principal component analysis fusion rule processes the low-frequency sub-bands, while a local energy matching rule processes the high-frequency sub-bands. Finally, the paper uses a variety of rain and snow videos to verify the validity and superiority of the image reconstruction. Experimental results show that the algorithm improves image clarity and restores image details obscured by raindrops and snowflakes.
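As an illustration of the two fusion rules sketched above, the following
Python snippet fuses a pair of already-computed dual-tree complex wavelet
sub-bands (e.g. obtained with the dtcwt package): a PCA-derived weighting for
the low-frequency sub-bands and local energy matching for the high-frequency
sub-bands. Function names and the window size are hypothetical and only
approximate the paper's rules.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def fuse_lowpass_pca(low_a, low_b):
        # PCA fusion rule: weight the two low-frequency sub-bands by the
        # dominant eigenvector of their 2x2 covariance matrix.
        data = np.stack([low_a.ravel(), low_b.ravel()])
        eigvals, eigvecs = np.linalg.eigh(np.cov(data))
        w = np.abs(eigvecs[:, np.argmax(eigvals)])
        w = w / w.sum()
        return w[0] * low_a + w[1] * low_b

    def fuse_highpass_local_energy(high_a, high_b, win=3):
        # Local energy matching rule for one (complex) high-frequency
        # sub-band: keep the coefficient whose neighbourhood energy is larger.
        e_a = uniform_filter(np.abs(high_a) ** 2, size=win)
        e_b = uniform_filter(np.abs(high_b) ** 2, size=win)
        return np.where(e_a >= e_b, high_a, high_b)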
Unlocking Low-Light-Rainy Image Restoration by Pairwise Degradation Feature Vector Guidance
Rain in the dark is a common natural phenomenon. Photos captured in such a
condition significantly impact the performance of various nighttime activities,
such as autonomous driving, surveillance systems, and night photography. While
existing methods designed for low-light enhancement or deraining show promising
performance, they have limitations in simultaneously addressing the task of
brightening low light and removing rain. Furthermore, using a cascade approach,
such as ``deraining followed by low-light enhancement'' or vice versa, may lead
to difficult-to-handle rain patterns or excessively blurred and overexposed
images. To overcome these limitations, we propose an end-to-end network
that can jointly handle low-light enhancement and deraining. Our
network mainly includes a Pairwise Degradation Feature Vector Extraction
Network (P-Net) and a Restoration Network (R-Net). P-Net can learn degradation
feature vectors on the dark and light areas separately, using contrastive
learning to guide the image restoration process. The R-Net is responsible for
restoring the image. We also introduce an effective Fast Fourier - ResNet
Detail Guidance Module (FFR-DG) that initially guides image restoration using
a detail image that does not contain degradation information but focuses on texture
detail information. Additionally, we contribute a dataset containing synthetic
and real-world low-light-rainy images. Extensive experiments demonstrate that
our network outperforms existing methods in both synthetic and complex
real-world scenarios.
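A minimal PyTorch sketch of the Fast Fourier residual idea behind the FFR-DG
module follows; the layer layout is an assumption rather than the paper's
exact design: a spatial convolution branch is combined with a frequency
branch that applies 1x1 convolutions to the real and imaginary parts of the
2-D FFT, and both are added back to the input.

    import torch
    import torch.nn as nn

    class FourierResBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.spatial = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.freq = nn.Sequential(
                nn.Conv2d(2 * channels, 2 * channels, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(2 * channels, 2 * channels, 1),
            )

        def forward(self, x):
            # frequency branch: convolve real/imaginary parts of the spectrum
            spec = torch.fft.rfft2(x, norm="ortho")
            f = self.freq(torch.cat([spec.real, spec.imag], dim=1))
            real, imag = torch.chunk(f, 2, dim=1)
            freq_out = torch.fft.irfft2(torch.complex(real, imag),
                                        s=x.shape[-2:], norm="ortho")
            return x + self.spatial(x) + freq_out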
Rain Removal in Traffic Surveillance: Does it Matter?
Varying weather conditions, including rainfall and snowfall, are generally
regarded as a challenge for computer vision algorithms. One proposed solution
to the challenges induced by rain and snowfall is to artificially remove the
rain from images or video using rain removal algorithms. It is the promise of
these algorithms that the rain-removed image frames will improve the
performance of subsequent segmentation and tracking algorithms. However, rain
removal algorithms are typically evaluated on their ability to remove synthetic
rain on a small subset of images. Currently, their behavior is unknown on
real-world videos when integrated with a typical computer vision pipeline. In
this paper, we review the existing rain removal algorithms and propose a new
dataset that consists of 22 traffic surveillance sequences under a broad
variety of weather conditions that all include either rain or snowfall. We
propose a new evaluation protocol that evaluates the rain removal algorithms on
their ability to improve the performance of subsequent segmentation, instance
segmentation, and feature tracking algorithms under rain and snow. If
successful, the de-rained frames of a rain removal algorithm should improve
segmentation performance and increase the number of accurately tracked
features. The results show that a recent single-frame-based rain removal
algorithm increases the segmentation performance by 19.7% on our proposed
dataset, but it decreases the feature tracking performance and shows mixed
results with recent instance segmentation methods. However, the
best video-based rain removal algorithm improves the feature tracking accuracy
by 7.72%.
Comment: Published in IEEE Transactions on Intelligent Transportation Systems.
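The evaluation idea can be summarized by the following sketch, where
`segment` and `derain` are hypothetical stand-ins for the segmentation model
and the rain removal algorithm under test: each frame is segmented with and
without rain removal, and the relative change in mean intersection-over-union
is reported.

    import numpy as np

    def iou(pred, gt):
        # intersection-over-union of two binary masks
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return inter / union if union else 1.0

    def relative_gain(frames, masks, segment, derain):
        # mean IoU on raw rainy frames vs. de-rained frames,
        # returned as a percentage improvement
        base = np.mean([iou(segment(f), m) for f, m in zip(frames, masks)])
        clean = np.mean([iou(segment(derain(f)), m)
                         for f, m in zip(frames, masks)])
        return 100.0 * (clean - base) / base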
Towards an Effective and Efficient Transformer for Rain-by-snow Weather Removal
Rain-by-snow weather removal is a specialized task in weather-degraded image
restoration aiming to eliminate coexisting rain streaks and snow particles. In
this paper, we propose RSFormer, an efficient and effective Transformer that
addresses this challenge. Initially, we explore the proximity of convolution
networks (ConvNets) and vision Transformers (ViTs) in hierarchical
architectures and experimentally find that they perform comparably at intra-stage
feature learning. On this basis, we utilize a Transformer-like convolution
block (TCB) that replaces the computationally expensive self-attention while
preserving attention characteristics for adapting to input content. We also
demonstrate that cross-stage progression is critical for performance
improvement, and propose a global-local self-attention sampling mechanism
(GLASM) that down-/up-samples features while capturing both global and local
dependencies. Finally, we synthesize two novel rain-by-snow datasets,
RSCityScape and RS100K, to evaluate our proposed RSFormer. Extensive
experiments verify that RSFormer achieves the best trade-off between
performance and time-consumption compared to other restoration methods. For
instance, it outperforms Restormer with a 1.53% reduction in the number of
parameters and a 15.6% reduction in inference time. Datasets, source code and
pre-trained models are available at \url{https://github.com/chdwyb/RSFormer}.
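The following PyTorch sketch illustrates the spirit of a Transformer-like
convolution block: the usual norm / token-mixer / feed-forward layout of a
Transformer block, with a gated depth-wise convolution standing in for
self-attention so the mixing remains content-adaptive. Kernel sizes, the
gating mechanism, and the normalization are assumptions, not the published
TCB.

    import torch
    import torch.nn as nn

    class TCBSketch(nn.Module):
        def __init__(self, channels, expansion=2):
            super().__init__()
            self.norm1 = nn.GroupNorm(1, channels)  # channel-wise LayerNorm stand-in
            self.mix_v = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
            self.mix_g = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
            self.proj = nn.Conv2d(channels, channels, 1)
            self.norm2 = nn.GroupNorm(1, channels)
            self.ffn = nn.Sequential(
                nn.Conv2d(channels, expansion * channels, 1),
                nn.GELU(),
                nn.Conv2d(expansion * channels, channels, 1),
            )

        def forward(self, x):
            h = self.norm1(x)
            # value branch modulated by a gating branch replaces self-attention
            x = x + self.proj(self.mix_v(h) * torch.sigmoid(self.mix_g(h)))
            return x + self.ffn(self.norm2(x))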
RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining
Rain is a common weather phenomenon whose streaks adversely degrade image
quality. Hence, removing rain from an image has become an important problem in
the field. To
handle such an ill-posed single image deraining task, in this paper, we
specifically build a novel deep architecture, called rain convolutional
dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks
and has clear interpretability. Specifically, we first establish an RCD model for
representing rain streaks and utilize the proximal gradient descent technique
to design an iterative algorithm only containing simple operators for solving
the model. By unfolding it, we then build the RCDNet in which every network
module has clear physical meanings and corresponds to each operation involved
in the algorithm. This interpretability greatly facilitates visualization and
analysis of what happens inside the network and why it works well at inference
time. Moreover, taking into account the domain gap issue
in real scenarios, we further design a novel dynamic RCDNet, where the rain
kernels are dynamically inferred from the input rainy image and then help
shrink the estimation space of the rain layer to a few rain maps, ensuring
good generalization when rain types differ between training and testing data.
By end-to-end training such an
interpretable network, all involved rain kernels and proximal operators can be
automatically extracted, faithfully characterizing the features of both rain
and clean background layers, thus naturally leading to better deraining
performance. Comprehensive experiments substantiate the superiority of our
method, especially its good generalization to diverse testing scenarios and
the interpretability of all its modules. Code is available at
\emph{\url{https://github.com/hongwang01/DRCDNet}}.
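For intuition, the RCD model represents the rain layer as a sum of
convolutions of small rain kernels with sparse rain maps, and the derived
algorithm alternates a gradient step on the data term with a proximal
(sparsifying) operator. The PyTorch sketch below shows one such step; in
RCDNet the fixed soft-threshold is replaced by a learned module, so this is
only an illustration of the unfolded iteration, not the authors' code.

    import torch
    import torch.nn.functional as F

    def rain_layer(maps, kernels):
        # R = sum_k C_k * M_k with maps (B, K, H, W) and kernels (1, K, k, k)
        return F.conv2d(maps, kernels, padding=kernels.shape[-1] // 2)

    def prox_grad_step(maps, kernels, observed, background,
                       step=0.1, thresh=0.01):
        # gradient step on the data term 0.5 * ||O - B - R(M)||^2 ...
        pad = kernels.shape[-1] // 2
        residual = observed - background - rain_layer(maps, kernels)
        grad = -F.conv_transpose2d(residual, kernels, padding=pad)  # adjoint of rain_layer
        maps = maps - step * grad
        # ... followed by soft-thresholding (proximal operator of an L1 prior)
        return torch.sign(maps) * torch.clamp(maps.abs() - thresh, min=0.0)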