RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining
Rain streaks are a common weather artifact that adversely degrades image
quality, so removing rain from an image has become an important problem in the
field. To handle this ill-posed single image deraining task, in this paper we
specifically build a novel deep architecture, called rain convolutional
dictionary network (RCDNet), which embeds the intrinsic priors of rain streaks
and has clear interpretability. Specifically, we first establish an RCD model for
representing rain streaks and utilize the proximal gradient descent technique
to design an iterative algorithm only containing simple operators for solving
the model. By unfolding it, we then build the RCDNet in which every network
module has clear physical meanings and corresponds to each operation involved
in the algorithm. This interpretability greatly facilitates visualization and
analysis of what happens inside the network and why it works well during
inference. Moreover, to account for the domain gap issue
in real scenarios, we further design a novel dynamic RCDNet, where the rain
kernels can be dynamically inferred corresponding to input rainy images and
then help shrink the search space for rain layer estimation with only a few
rain maps, ensuring good generalization when rain types differ between
training and testing data. By end-to-end training of such an
interpretable network, all involved rain kernels and proximal operators can be
automatically extracted, faithfully characterizing the features of both rain
and clean background layers, thus naturally leading to better deraining
performance. Comprehensive experiments substantiate the superiority of our
method, especially its strong generality to diverse testing scenarios and the
clear interpretability of all its modules. Code is available at
\emph{\url{https://github.com/hongwang01/DRCDNet}}
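The proximal-gradient iteration that RCDNet unfolds can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the convolutional dictionary model (rain layer = sum of rain kernels convolved with sparse rain maps), the step size, and the soft-threshold sparsity prior are all assumptions made here for clarity.

```python
import numpy as np
from scipy.signal import convolve2d

def prox_grad_step(O, M, kernels, step=0.1, thresh=0.05):
    """One proximal gradient iteration for a toy convolutional rain model.

    O: rainy image (H, W); M: rain maps (K, H, W); kernels: rain kernels
    (K, k, k). The rain layer is R = sum_k kernels[k] * M[k] (convolution).
    Each rain map takes a gradient step on 0.5*||O - R||^2, then a
    soft-threshold, which is the proximal operator of an l1 sparsity prior.
    """
    K = len(kernels)
    R = sum(convolve2d(M[k], kernels[k], mode="same") for k in range(K))
    resid = O - R  # current estimate of the clean background layer
    M_new = np.empty_like(M)
    for k in range(K):
        # gradient w.r.t. M[k] is minus the correlation of resid with kernel k
        grad = -convolve2d(resid, kernels[k][::-1, ::-1], mode="same")
        Mk = M[k] - step * grad
        # soft-thresholding enforces sparsity of the rain maps
        M_new[k] = np.sign(Mk) * np.maximum(np.abs(Mk) - step * thresh, 0.0)
    return M_new
```

In the unfolded network, the hand-set step size, threshold, and kernels above become learnable parameters, and the soft-threshold is replaced by a learned proximal operator.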
Single Image Deraining via Rain-Steaks Aware Deep Convolutional Neural Network
It is challenging to remove rain streaks from a single rainy image because the
streaks vary spatially across the image. This problem is studied
in this paper by combining conventional image processing techniques and deep
learning based techniques. An improved weighted guided image filter (iWGIF) is
proposed to extract high-frequency information from a rainy image. This
information mainly comprises rain streaks and noise, and it guides the
rain-streak-aware deep convolutional neural network (RSADCNN) to pay more
attention to streak regions, improving both the efficiency and explainability
of RSADCNN. Experiments show that the proposed algorithm significantly
outperforms state-of-the-art methods on both synthetic and real-world images in
terms of both qualitative and quantitative measures. It is useful for
autonomous navigation in rainy conditions.
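The low-/high-frequency decomposition described above can be sketched as below. A plain box filter stands in for the paper's iWGIF (an assumption for brevity); the point is that the high-frequency residual, which concentrates rain streaks and noise, is what would steer the network's attention.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def high_frequency_layer(rainy, radius=7):
    """Split a rainy image into low- and high-frequency layers.

    A simple box filter approximates the smoothing role of the paper's
    iWGIF. The residual (rainy - low) mainly contains rain streaks and
    noise and could serve as a guidance map for a streak-aware network.
    """
    low = uniform_filter(rainy.astype(np.float64), size=2 * radius + 1)
    high = rainy - low
    return low, high
```

By construction the two layers sum back to the input, so no image content is lost by the decomposition; the guided filter in the paper would preserve edges better than this box-filter stand-in.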
Uni-Removal: A Semi-Supervised Framework for Simultaneously Addressing Multiple Degradations in Real-World Images
Removing multiple degradations, such as haze, rain, and blur, from real-world
images poses a challenging and ill-posed problem. Recently, unified models that
can handle different degradations have been proposed and yield promising
results. However, these approaches focus on synthetic images and experience a
significant performance drop when applied to real-world images. In this paper,
we introduce Uni-Removal, a two-stage semi-supervised framework for addressing
the removal of multiple degradations in real-world images using a unified model
and parameters. In the knowledge transfer stage, Uni-Removal leverages a
supervised multi-teacher-and-student architecture to facilitate learning from
pretrained teacher networks specialized in different degradation types. A
multi-grained contrastive loss is introduced to enhance learning in both the
feature and image spaces. In the domain adaptation stage,
unsupervised fine-tuning is performed by incorporating an adversarial
discriminator on real-world images. The integration of an extended
multi-grained contrastive loss and generative adversarial loss enables the
adaptation of the student network from synthetic to real-world domains.
Extensive experiments on real-world degraded datasets demonstrate the
effectiveness of our proposed method. We compare our Uni-Removal framework with
state-of-the-art supervised and unsupervised methods, showcasing its promising
results in real-world image dehazing, deraining, and deblurring simultaneously.
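The teacher-to-student contrastive distillation idea can be sketched with a toy single-grain loss. This is an illustrative stand-in, not the paper's multi-grained loss: the choice of L1 distances, the ratio form (positive distance over negative distances), and the use of degraded-input features as negatives are assumptions here; the paper applies such terms at several feature levels and in image space.

```python
import numpy as np

def contrastive_distill_loss(student_feat, teacher_feat, negative_feats, eps=1e-8):
    """Toy single-grain contrastive distillation loss.

    Pulls the student's feature toward the teacher's feature (the positive
    pair) and pushes it away from negatives, e.g. features of the degraded
    input. The ratio form means the loss shrinks either by moving closer
    to the teacher or farther from the negatives.
    """
    pos = np.abs(student_feat - teacher_feat).mean()
    neg = sum(np.abs(student_feat - n).mean() for n in negative_feats)
    return pos / (neg + eps)
```

A loss of zero means the student exactly matches the teacher; minimizing it jointly encourages teacher-like restorations that stay far from the degraded input in feature space.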
- …