Deep Graph Laplacian Regularization for Robust Denoising of Real Images
Recent developments in deep learning have revolutionized the paradigm of
image restoration. However, its application to real image denoising remains
limited, owing to its sensitivity to training data and the complex nature of
real image noise. In this work, we combine the robustness of model-based
approaches with the learning power of data-driven approaches for real image
denoising. Specifically, by integrating graph Laplacian regularization as a
trainable module into a deep learning framework, we are less susceptible to
overfitting than pure CNN-based approaches, achieving higher robustness on
small datasets and in cross-domain denoising. First, a sparse neighborhood
graph is built from the output of a convolutional neural network (CNN). The
image is then restored by solving an unconstrained quadratic programming
problem, with the corresponding graph Laplacian regularizer as a prior term.
The proposed restoration pipeline is fully differentiable and can therefore
be trained end-to-end. Experimental results demonstrate that our method is
less prone to overfitting given small training data. It also exhibits strong
cross-domain generalization, outperforming state-of-the-art approaches by a
remarkable margin.
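The restoration step described above has a closed form: minimizing the quadratic objective with a graph Laplacian prior reduces to solving a linear system. The sketch below illustrates this on a 1-D signal with a simple chain graph standing in for the learned sparse neighborhood graph; the weight `mu` and graph construction are illustrative assumptions, not the paper's.

```python
import numpy as np

# Graph Laplacian regularized denoising:
#   x* = argmin_x ||x - y||^2 + mu * x^T L x,
# whose minimizer satisfies (I + mu*L) x* = y, with L = D - W.

def laplacian_denoise(y, W, mu=2.0):
    """Solve (I + mu*L) x = y for the graph with adjacency matrix W."""
    D = np.diag(W.sum(axis=1))       # degree matrix
    L = D - W                        # combinatorial graph Laplacian
    return np.linalg.solve(np.eye(len(y)) + mu * L, y)

# Noisy samples of a smooth ramp, connected as a chain graph.
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 50)
noisy = clean + 0.1 * rng.standard_normal(50)
W = np.zeros((50, 50))
for i in range(49):                  # unit-weight edges between neighbors
    W[i, i + 1] = W[i + 1, i] = 1.0

denoised = laplacian_denoise(noisy, W)
```

Because every operation is a differentiable linear-algebra step, gradients can flow through the solve back to whatever network produced the graph weights, which is what makes end-to-end training possible.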
LIDIA: Lightweight Learned Image Denoising with Instance Adaptation
Image denoising is a well-studied problem that has attracted extensive
activity over several decades. Despite the many available denoising
algorithms, the quest for simple, powerful and fast denoisers is still an
active and vibrant topic of research. Leading classical denoising methods are
typically designed to exploit the inner structure in images by modeling local
overlapping patches, while operating in an unsupervised fashion. In contrast,
recent newcomers to this arena are supervised and universal
neural-network-based methods that bypass this modeling altogether, targeting
the inference goal directly and globally, while tending to be very deep and
parameter-heavy. This work proposes a novel lightweight learnable
architecture for image denoising, and presents a combination of supervised
and unsupervised training for it, the first aiming for a universal denoiser
and the second adapting it to the incoming image. Our architecture embeds
several of the main concepts taken from classical methods: patch processing,
non-local self-similarity, representation sparsity and multiscale treatment.
Our proposed universal denoiser achieves near state-of-the-art results, while
using a small fraction of the typical number of parameters. In addition, we
introduce and demonstrate two highly effective ways to further boost
denoising performance by adapting this universal network to the input image.
AutoEncoder by Forest
Auto-encoding is an important task which is typically realized by deep neural
networks (DNNs) such as convolutional neural networks (CNNs). In this paper,
we propose EncoderForest (abbrv. eForest), the first tree-ensemble-based
auto-encoder. We present a procedure for enabling forests to perform backward
reconstruction by utilizing the equivalence classes defined by the decision
paths of the trees, and demonstrate its usage in both supervised and
unsupervised settings. Experiments show that, compared with DNN autoencoders,
eForest is able to obtain lower reconstruction error with faster training,
while the model itself is reusable and damage-tolerant.
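The backward-reconstruction idea can be sketched concretely: the decision path an input takes through each tree defines an axis-aligned region (an equivalence class), and intersecting these regions across all trees yields a box from which a representative point can be returned. The rule representation and the choice of the box centre below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def reconstruct(paths, n_features, lo=0.0, hi=1.0):
    """Reconstruct a point from decision paths.

    paths: one rule list per tree, each rule a tuple
           (feature_index, threshold, went_left).
    """
    lower = np.full(n_features, lo)
    upper = np.full(n_features, hi)
    for path in paths:                      # intersect rules across all trees
        for feat, thr, went_left in path:
            if went_left:                   # path implies x[feat] <= thr
                upper[feat] = min(upper[feat], thr)
            else:                           # path implies x[feat] > thr
                lower[feat] = max(lower[feat], thr)
    return (lower + upper) / 2.0            # a representative of the region

# Two toy "trees" routing the same 2-D point.
paths = [
    [(0, 0.5, True), (1, 0.3, False)],      # x0 <= 0.5 and x1 > 0.3
    [(0, 0.2, False), (1, 0.8, True)],      # x0 > 0.2 and x1 <= 0.8
]
point = reconstruct(paths, 2)               # lies inside the intersection
```

Because more trees contribute more constraints, larger forests shrink the equivalence class and tighten the reconstruction, which is the intuition behind the low reconstruction error reported above.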
Medical image denoising using convolutional denoising autoencoders
Image denoising is an important pre-processing step in medical image
analysis. Different algorithms with varying denoising performance have been
proposed over the past three decades. More recently, having outperformed all
conventional methods, deep-learning-based models have shown great promise.
These methods are, however, limited by their requirement of a large training
sample size and high computational cost. In this paper we show that, using a
small sample size, denoising autoencoders constructed from convolutional
layers can be used for efficient denoising of medical images. Heterogeneous
images can be combined to boost the sample size for increased denoising
performance. The simplest of networks can reconstruct images with corruption
levels so high that noise and signal are not distinguishable to the human
eye.
Comment: To appear: 6 pages, paper to be published at the Fourth Workshop on
Data Mining in Biomedical Informatics and Healthcare at ICDM, 201
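The denoising-autoencoder training scheme described above (feed a corrupted input, penalize deviation from the clean target) can be sketched end-to-end in NumPy. The network below is a fully connected stand-in for the paper's convolutional version, with all sizes, the learning rate, and the noise level chosen for illustration only.

```python
import numpy as np

# Minimal denoising autoencoder: the model receives noisy inputs and is
# trained to reproduce the corresponding clean signals.
rng = np.random.default_rng(1)
n, d, h = 200, 16, 8                         # samples, input dim, hidden dim
clean = rng.random((n, d))
noisy = clean + 0.3 * rng.standard_normal((n, d))

W1 = 0.1 * rng.standard_normal((d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.standard_normal((h, d)); b2 = np.zeros(d)

def forward(x):
    z = np.tanh(x @ W1 + b1)                 # encoder
    return z, z @ W2 + b2                    # decoder (linear output)

losses, lr = [], 0.05
for step in range(300):
    z, out = forward(noisy)
    err = out - clean                        # target is the *clean* image
    losses.append(np.mean(err ** 2))
    # Backpropagation through the two-layer network.
    gW2 = z.T @ err / n; gb2 = err.mean(axis=0)
    dz = (err @ W2.T) * (1 - z ** 2)         # tanh derivative
    gW1 = noisy.T @ dz / n; gb1 = dz.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```

The convolutional version in the paper replaces the dense layers with convolutions, which share weights across spatial positions and so need far fewer parameters, one reason small medical datasets suffice.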