Two-stage Progressive Residual Dense Attention Network for Image Denoising
Deep convolutional neural networks (CNNs) for image denoising can effectively
exploit rich hierarchical features and have achieved great success. However,
many deep CNN-based denoising models treat the hierarchical features of noisy
images equally, without emphasizing the more informative ones, which limits
their performance. To address this issue, we
design a new Two-stage Progressive Residual Dense Attention Network
(TSP-RDANet) for image denoising, which divides the whole process of denoising
into two sub-tasks to remove noise progressively. Two different attention
mechanism-based denoising networks are designed for the two sequential
sub-tasks: the residual dense attention module (RDAM) is designed for the first
stage, and the hybrid dilated residual dense attention module (HDRDAM) is
proposed for the second stage. The proposed attention modules learn
appropriate local features through dense connections between different
convolutional layers, while suppressing irrelevant features. The two
sub-networks are then linked by a long skip connection that retains shallow
features to enhance the denoising performance. Experiments on seven benchmark
datasets verify that the proposed TSP-RDANet obtains favorable results on both
synthetic and real noisy images compared with many state-of-the-art methods.
The code of our TSP-RDANet is available at
https://github.com/WenCongWu/TSP-RDANet
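The two-stage progressive idea can be illustrated with a toy numpy sketch: a plain box filter stands in for each learned sub-network (the RDAM and HDRDAM stages), and the long skip connection is modelled by mixing the shallow input features back in before the second stage. This is only a conceptual stand-in, not the actual TSP-RDANet architecture; the filter, skip weight, and image sizes below are illustrative assumptions.

```python
import numpy as np

def box_filter(img, k=3):
    """Simple k x k mean filter; a stand-in for a learned denoising sub-network."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def two_stage_denoise(noisy, skip_weight=0.3):
    """Denoise progressively in two sub-tasks, with a long skip connection
    reintroducing shallow (input) features before the second stage."""
    stage1 = box_filter(noisy)                                  # first sub-task: coarse denoising
    stage2_in = (1 - skip_weight) * stage1 + skip_weight * noisy  # long skip: retain shallow features
    return box_filter(stage2_in)                                # second sub-task: refinement

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)
noisy = clean + rng.normal(scale=0.1, size=clean.shape)
mse = lambda a, b: float(np.mean((a - b) ** 2))
print(mse(noisy, clean), mse(two_stage_denoise(noisy), clean))  # second value is much smaller
```

Each stage reduces the noise variance, so the two-stage output is closer to the clean image than the input; in the real network the two stages are learned attention modules rather than fixed filters.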
Neural Nearest Neighbors Networks
Non-local methods exploiting the self-similarity of natural signals have been
well studied, for example in image analysis and restoration. Existing
approaches, however, rely on k-nearest neighbors (KNN) matching in a fixed
feature space. The main hurdle in optimizing this feature space w.r.t.
application performance is the non-differentiability of the KNN selection rule.
To overcome this, we propose a continuous deterministic relaxation of KNN
selection that maintains differentiability w.r.t. pairwise distances, but
retains the original KNN as the limit of a temperature parameter approaching
zero. To exploit our relaxation, we propose the neural nearest neighbors block
(N3 block), a novel non-local processing layer that leverages the principle of
self-similarity and can be used as building block in modern neural network
architectures. We show its effectiveness for the set reasoning task of
correspondence classification as well as for image restoration, including image
denoising and single image super-resolution, where we outperform strong
convolutional neural network (CNN) baselines and recent non-local models that
rely on KNN selection in hand-chosen feature spaces.
Comment: to appear at NIPS*2018, code available at
https://github.com/visinf/n3net
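The relaxation described above can be sketched in a few lines of numpy: replacing hard nearest-neighbour selection with a softmax over negative pairwise distances keeps the weights differentiable with respect to the distances, and lowering the temperature recovers hard selection in the limit. This shows only the single-neighbour temperature limit; the N3 block's full construction for k > 1 neighbours is more involved.

```python
import numpy as np

def soft_nn_weights(dists, temperature):
    """Continuous relaxation of nearest-neighbour selection:
    a softmax over negative distances, differentiable w.r.t. dists."""
    logits = -dists / temperature
    logits = logits - logits.max()       # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

dists = np.array([0.3, 1.2, 0.9, 2.0])   # distances to 4 candidate neighbours
for t in (1.0, 0.1, 0.001):
    print(t, np.round(soft_nn_weights(dists, t), 4))
# as t -> 0 the weights concentrate on index 0, the true nearest neighbour
```

At high temperature the weights spread information over all candidates; as the temperature approaches zero they converge to the one-hot vector selecting the nearest neighbour, matching the claimed KNN limit.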
Behaviourally meaningful representations from normalisation and context-guided denoising
Many existing independent component analysis algorithms include a preprocessing stage in which the inputs are sphered. This amounts to normalising the data so that all correlations between the variables are removed. In this work, I show that sphering allows very weak contextual modulation to steer the development of meaningful features. Context-biased competition has been proposed as a model of covert attention, and I propose that sphering-like normalisation also allows a weaker top-down bias to guide attention.
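The sphering step described above can be made concrete with a short numpy sketch: subtract the mean, then apply the inverse matrix square root of the covariance so that the transformed variables are uncorrelated with unit variance. The ZCA variant shown here is one of several equivalent choices, and the mixing matrix is an illustrative assumption.

```python
import numpy as np

def sphere(X):
    """ZCA whitening: centre the data and map its covariance to the identity."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # cov^(-1/2)
    return Xc @ W

rng = np.random.default_rng(0)
mixing = np.array([[2.0, 0.5, 0.0],
                   [0.0, 1.0, 0.3],
                   [0.0, 0.0, 0.5]])
X = rng.normal(size=(500, 3)) @ mixing        # correlated input variables
Z = sphere(X)
print(np.round(np.cov(Z, rowvar=False), 6))   # identity: all correlations removed
```

After sphering, the empirical covariance of the data is the identity matrix, which is exactly the "all correlations removed" condition the abstract refers to.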