65 research outputs found
Single-Image Deraining via Recurrent Residual Multiscale Networks.
Existing deraining approaches represent rain streaks with different rain layers and then separate the layers from the background image. However, because of the complexity of real-world rain, such as the various densities, shapes, and directions of rain streaks, it is very difficult to decompose a rain image into clean background and rain layers. In this article, we develop a novel single-image deraining method based on a residual multiscale pyramid to mitigate the difficulty of rain image decomposition. To be specific, we progressively remove rain streaks in a coarse-to-fine fashion, where heavy rain is first removed at coarse-resolution levels and then light rain is eliminated at fine-resolution levels. Furthermore, based on the observation that residuals between a restored image and its corresponding rain image provide critical clues about rain streaks, we regard the residuals as an attention map to remove rain in the consecutive finer-level image. To achieve a powerful yet compact deraining framework, we construct our network from recurrent layers and remove rain with the same network at different pyramid levels. In addition, we design a multiscale kernel selection network (MSKSN) to enable our single network to remove rain streaks at different levels. In this manner, we reduce the model parameters by 81% without decreasing deraining performance compared with our prior work. Extensive experimental results on widely used benchmarks show that our approach achieves superior deraining performance compared with the state of the art.
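The coarse-to-fine scheme in this abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's network: the per-level "deraining" step is stood in for by a simple box filter (the paper shares a recurrent network across levels), and the function names (`pyramid_derain`, `box_derain`) and the residual-to-attention normalization are my own assumptions.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsample back to a target (even-sized) shape."""
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def box_derain(img):
    """Stand-in 'deraining' step: a 3x3 box filter (placeholder for the
    paper's recurrent network shared across pyramid levels)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def pyramid_derain(rainy, levels=3):
    """Coarse-to-fine removal: restore the coarsest level first, then use
    the residual (restored - rainy) as an attention map when processing
    the next finer level, as the abstract describes."""
    pyramid = [rainy.astype(float)]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    restored = box_derain(pyramid[-1])  # heavy rain handled at coarse level
    for level in reversed(pyramid[:-1]):
        residual = restored - downsample(level)
        attention = np.abs(upsample(residual, level.shape))
        attention = attention / (attention.max() + 1e-8)
        # emphasize rain-dominated pixels before re-filtering (assumed form)
        restored = box_derain(level - 0.5 * attention * level)
    return restored
```

The key structural idea carried over from the abstract is that one and the same per-level operator is reused at every pyramid level, with the coarse-level residual steering the finer level.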
Single Image Deraining via Rain-Steaks Aware Deep Convolutional Neural Network
It is challenging to remove rain streaks from a single rainy image because the
rain streaks vary spatially across the image. This problem is studied in this
paper by combining conventional image processing techniques with deep
learning-based techniques. An improved weighted guided image filter (iWGIF) is
proposed to extract high-frequency information from a rainy image. The
high-frequency information mainly comprises rain streaks and noise, and it can
guide the rain-streak-aware deep convolutional neural network (RSADCNN) to pay
more attention to rain streaks. The efficiency and explainability of RSADCNN
are improved. Experiments show that the proposed algorithm significantly
outperforms state-of-the-art methods on both synthetic and real-world images in
terms of both qualitative and quantitative measures. It is useful for
autonomous navigation in rainy conditions.
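The frequency decomposition this abstract relies on can be illustrated with a plain mean filter standing in for the iWGIF. This is a sketch under that assumption: the real iWGIF adapts its weights to edges, whereas the `box_filter` below treats all pixels equally, and both function names are hypothetical.

```python
import numpy as np

def box_filter(img, radius=2):
    """Mean filter: a simple stand-in for the paper's improved weighted
    guided image filter (iWGIF), which weights pixels edge-adaptively."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def split_frequencies(rainy):
    """Decompose a rainy image into low- and high-frequency parts.
    Per the abstract, the high-frequency part carries most rain streaks
    and noise, and is what would guide the streak-aware network."""
    low = box_filter(rainy)
    high = rainy - low
    return low, high
```

The decomposition is exact by construction (`low + high` reconstructs the input), so nothing is lost by feeding only the high-frequency component to the network as guidance.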
Learning Image Deraining Transformer Network with Dynamic Dual Self-Attention
Recently, Transformer-based architecture has been introduced into single
image deraining task due to its advantage in modeling non-local information.
However, existing approaches tend to integrate global features with a dense
self-attention strategy, which uses all token similarities between the queries
and keys. In practice, this strategy can drown out the most relevant
information and induce blurring effects from irrelevant representations during
feature aggregation. To this end, this paper
proposes an effective image deraining Transformer with dynamic dual
self-attention (DDSA), which combines both dense and sparse attention
strategies to better facilitate clear image reconstruction. Specifically, we
only select the most useful similarity values based on top-k approximate
calculation to achieve sparse attention. In addition, we also develop a novel
spatial-enhanced feed-forward network (SEFN) to further obtain a more accurate
representation for achieving high-quality derained results. Extensive
experiments on benchmark datasets demonstrate the effectiveness of our proposed
method.
Comment: 6 pages, 5 figures
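The top-k sparse-attention idea in this abstract is straightforward to sketch. The snippet below is an assumed minimal form, not the paper's DDSA module: it shows only the sparse branch (keep the k largest similarity values per query, mask the rest to negative infinity before the softmax); the dense branch would simply skip the masking step.

```python
import numpy as np

def topk_sparse_attention(q, k, v, top_k=2):
    """Dot-product attention keeping only the top-k similarity values per
    query before the softmax, so irrelevant tokens get exactly zero weight.
    Ties at the k-th score may keep a few extra entries."""
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (num_q, num_k)
    # k-th largest score per row, used as the sparsity threshold
    kth = np.sort(scores, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Setting `top_k` to the full key count recovers ordinary dense attention, which is how the dense and sparse strategies can share one implementation.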
- …