Unlocking Low-Light-Rainy Image Restoration by Pairwise Degradation Feature Vector Guidance
Rain in the dark is a common natural phenomenon. Photos captured under such
conditions significantly degrade the performance of various nighttime
applications, such as autonomous driving, surveillance systems, and night
photography. While
existing methods designed for low-light enhancement or deraining show promising
performance, they struggle to simultaneously brighten low light and remove
rain. Furthermore, a cascade approach, such as ``deraining followed by
low-light enhancement'' or vice versa, may leave rain patterns that are
difficult to handle, or produce excessively blurred and overexposed images. To
overcome these limitations, we propose an end-to-end network that can jointly
handle low-light enhancement and deraining. Our
network mainly includes a Pairwise Degradation Feature Vector Extraction
Network (P-Net) and a Restoration Network (R-Net). P-Net learns degradation
feature vectors for the dark and light areas separately, using contrastive
learning to guide the image restoration process, while R-Net is responsible
for restoring the image. We also introduce an effective Fast Fourier-ResNet
Detail Guidance Module (FFR-DG) that initially guides image restoration using
detail images that contain no degradation information but focus on texture
details. Additionally, we contribute a dataset containing synthetic
and real-world low-light-rainy images. Extensive experiments demonstrate that
our network outperforms existing methods in both synthetic and complex
real-world scenarios.
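The abstract does not spell out how FFR-DG produces its detail images. As a rough, hand-crafted illustration of the underlying idea (low-light and rain degradation is concentrated in low spatial frequencies, while texture detail lives in high frequencies), a Fourier-domain high-pass sketch in NumPy might look like the following; `fourier_detail` and its `radius` parameter are hypothetical names, and the actual module is learned end to end:

```python
import numpy as np

def fourier_detail(img, radius=4):
    """Hypothetical detail extractor: suppress low frequencies in the
    Fourier domain so only texture detail remains. The real FFR-DG is
    a learned Fast Fourier / ResNet block; this hand-crafted high-pass
    only illustrates the idea of degradation-free detail guidance."""
    f = np.fft.fftshift(np.fft.fft2(img))   # spectrum with DC centred
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    keep = ((yy - cy) ** 2 + (xx - cx) ** 2) > radius ** 2  # drop low freqs
    return np.fft.ifft2(np.fft.ifftshift(f * keep)).real

# toy usage: the detail image of any input has (near-)zero mean,
# since the DC (global brightness) component is removed
rng = np.random.default_rng(0)
detail = fourier_detail(rng.random((32, 32)))
```

Because the low-frequency band, including the DC term, is discarded, the detail image carries texture but no global brightness information, matching the abstract's description of guidance that contains no degradation information.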
RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining
As a common weather phenomenon, rain streaks adversely degrade image quality.
Hence, removing rain from an image has become an important issue in the field.
To handle such an ill-posed single-image deraining task, in this paper we
specifically build a novel deep architecture, called rain convolutional
dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks
and has clear interpretability. Specifically, we first establish an RCD model for
representing rain streaks and utilize the proximal gradient descent technique
to design an iterative algorithm only containing simple operators for solving
the model. By unfolding it, we then build the RCDNet in which every network
module has clear physical meanings and corresponds to each operation involved
in the algorithm. This interpretability greatly facilitates visualization and
analysis of what happens inside the network and why it works well at inference
time. Moreover, taking into account the domain gap issue
in real scenarios, we further design a novel dynamic RCDNet, where the rain
kernels can be dynamically inferred corresponding to input rainy images and
then help shrink the space for rain-layer estimation with only a few rain
maps, ensuring good generalization when rain types differ between the training
and testing data. By end-to-end training such an
interpretable network, all involved rain kernels and proximal operators can be
automatically extracted, faithfully characterizing the features of both rain
and clean background layers, thus naturally leading to better deraining
performance. Comprehensive experiments substantiate the superiority of our
method, especially its strong generality to diverse testing scenarios and the
good interpretability of all its modules. Code is available at
\emph{\url{https://github.com/hongwang01/DRCDNet}}
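The RCD model described above writes the rain layer as a sum of rain kernels convolved with sparse rain maps, O = B + Σ_k C_k ⊛ M_k, and updates the maps by proximal gradient descent before unfolding the iterations into the network. A minimal NumPy sketch of one rain-map update under that model, assuming circular FFT convolution and a hand-crafted soft-threshold prox where RCDNet uses a learned proxNet (`rcd_step`, `conv2_same`, and `soft` are illustrative names, not the paper's code):

```python
import numpy as np

def conv2_same(x, k):
    """Circular 2-D 'same' convolution via FFT: an illustrative
    stand-in for the convolutional operator in the RCD model."""
    kh, kw = k.shape
    pad = np.zeros_like(x)
    pad[:kh, :kw] = k
    # centre the kernel at the origin so the output aligns with x
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(pad)).real

def soft(x, tau):
    """Soft-thresholding: the proximal operator of the l1 norm.
    RCDNet replaces this hand-crafted prox with a learned proxNet."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def rcd_step(O, B, M, kernels, eta=0.1, tau=0.01):
    """One proximal-gradient update of the rain maps M for the
    simplified model O = B + sum_k C_k * M_k."""
    R = sum(conv2_same(m, c) for m, c in zip(M, kernels))
    resid = O - B - R                       # data-fidelity residual
    M_new = []
    for m, c in zip(M, kernels):
        # gradient of the quadratic term w.r.t. m: correlate the
        # residual with the (flipped) kernel
        grad = -conv2_same(resid, c[::-1, ::-1])
        M_new.append(soft(m - eta * grad, tau))
    return M_new

# toy usage: one illustrative update from a zero initialisation
rng = np.random.default_rng(0)
O = rng.random((16, 16))                    # observed rainy image
kernels = [rng.random((3, 3))]              # one hypothetical rain kernel
M = rcd_step(O, np.zeros_like(O), [np.zeros_like(O)], kernels)
```

Unfolding stacks a fixed number of such steps into network layers and swaps the hand-crafted prox for a learned one, which is what gives every module of RCDNet its physical meaning.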