Image Restoration using Total Variation Regularized Deep Image Prior
In the past decade, sparsity-driven regularization has led to significant
improvements in image reconstruction. Traditional regularizers, such as total
variation (TV), rely on analytical models of sparsity. However, the field is
increasingly moving towards trainable models inspired by deep learning. Deep
image prior (DIP) is a recent regularization framework that uses a
convolutional neural network (CNN) architecture without data-driven training.
This paper extends the DIP framework by combining it with the traditional TV
regularization. We show that the inclusion of TV leads to considerable
performance gains when tested on several traditional restoration tasks such as
image denoising and deblurring.
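The combined objective the abstract describes can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the function names, the anisotropic TV discretisation, and the squared-error fidelity term are choices of this example, and the CNN itself is left abstract.

```python
import numpy as np

def tv(x):
    """Anisotropic total variation: sum of absolute finite differences
    along both image axes."""
    dv = np.abs(np.diff(x, axis=0)).sum()  # vertical differences
    dh = np.abs(np.diff(x, axis=1)).sum()  # horizontal differences
    return dv + dh

def dip_tv_objective(output, noisy, lam):
    """TV-regularized DIP loss: data fidelity plus lam * TV(output).
    In the actual framework `output` would be the CNN prediction
    f_theta(z); here it is just an array."""
    fidelity = np.sum((output - noisy) ** 2)
    return fidelity + lam * tv(output)
```

Minimising this objective over the network parameters balances fitting the noisy observation against the TV penalty, which discourages the network from reproducing the noise.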
Learning Deep CNN Denoiser Prior for Image Restoration
Model-based optimization methods and discriminative learning methods have
been the two dominant strategies for solving various inverse problems in
low-level vision. Typically, those two kinds of methods have their respective
merits and drawbacks, e.g., model-based optimization methods are flexible for
handling different inverse problems but are usually time-consuming with
sophisticated priors in order to achieve good performance; meanwhile,
discriminative learning methods have fast testing speed, but their application
range is greatly restricted by the specialized task. Recent works have revealed
that, with the aid of variable splitting techniques, a denoiser prior can be
plugged in as a modular part of model-based optimization methods to solve other
inverse problems (e.g., deblurring). Such an integration offers a considerable
advantage when the denoiser is obtained via discriminative learning. However,
the study of integrating a fast discriminative denoiser prior is still
lacking. To this end, this paper aims to train a set of fast and effective CNN
(convolutional neural network) denoisers and integrate them into model-based
optimization methods to solve other inverse problems. Experimental results
demonstrate that the learned set of denoisers not only achieves promising
Gaussian denoising results but can also be used as a prior to deliver good
performance for various low-level vision applications.
Comment: Accepted to CVPR 2017. Code: https://github.com/cszn/ircn
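The variable-splitting idea can be sketched in NumPy for the simplest case. This is a hedged toy, not the paper's method: a 3-tap moving average stands in for the learned CNN denoiser, the half-quadratic-splitting x-step is specialised to pure denoising (degradation operator H = I, where it has a closed form), and the function names are this example's own.

```python
import numpy as np

def box_denoiser(x):
    """3-tap moving-average smoother on a 1-D signal; a trivial
    stand-in for the learned CNN denoiser plugged into the scheme."""
    pad = np.pad(x, 1, mode="edge")
    return (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0

def pnp_hqs_denoise(y, mu=1.0, iters=10):
    """Plug-and-play half-quadratic splitting for the denoising case
    (H = I): the z-step applies the plugged-in denoiser, and the
    x-step has the closed form x = (y + mu*z) / (1 + mu)."""
    x = y.copy()
    for _ in range(iters):
        z = box_denoiser(x)           # denoiser-prior step
        x = (y + mu * z) / (1 + mu)   # data-fidelity step
    return x
```

For other inverse problems such as deblurring, only the data-fidelity step changes (it then involves H), while the denoiser step remains the same modular component.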
Deep Networks for Image Super-Resolution with Sparse Prior
Deep learning techniques have been successfully applied in many areas of
computer vision, including low-level image restoration problems. For image
super-resolution, several models based on deep neural networks have been
recently proposed and attained superior performance that overshadows all
previous handcrafted models. The question then arises whether large-capacity
and data-driven models have become the dominant solution to the ill-posed
super-resolution problem. In this paper, we argue that domain expertise
represented by the conventional sparse coding model is still valuable, and it
can be combined with the key ingredients of deep learning to achieve further
improved results. We show that a sparse coding model particularly designed for
super-resolution can be incarnated as a neural network, and trained in a
cascaded structure from end to end. The interpretation of the network based on
sparse coding leads to much more efficient and effective training, as well as a
reduced model size. Our model is evaluated on a wide range of images, and shows
clear advantage over existing state-of-the-art methods in terms of both
restoration accuracy and human subjective quality.
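The "sparse coding model incarnated as a neural network" idea can be illustrated by unrolling ISTA iterations into a fixed number of feed-forward layers (the LISTA construction). The NumPy sketch below is an assumption: it uses plain, untrained ISTA updates, whereas in the paper's cascaded model the per-layer weights and thresholds would be learned end to end.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the elementwise nonlinearity of each
    unrolled layer."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(x, W, lam=0.1, layers=5):
    """A fixed number of ISTA iterations for the lasso problem
    min_z 0.5*||W z - x||^2 + lam*||z||_1, viewed as a feed-forward
    network; W is the dictionary. Each loop body is one 'layer'."""
    L = np.linalg.norm(W, 2) ** 2      # Lipschitz constant of W^T W
    z = np.zeros(W.shape[1])
    for _ in range(layers):
        z = soft(z - (W.T @ (W @ z - x)) / L, lam / L)
    return z
```

Because every layer is a linear map followed by a fixed nonlinearity, replacing the analytic matrices with learnable ones yields a compact network whose structure mirrors the sparse coding model.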
Image Reconstruction via Deep Image Prior Subspaces
Deep learning has been widely used for solving image reconstruction tasks but
its deployability has been held back due to the shortage of high-quality
training data. Unsupervised learning methods, such as the deep image prior
(DIP), naturally fill this gap, but bring a host of new issues: susceptibility
to overfitting due to the lack of robust early-stopping strategies, and
unstable convergence. We present a novel approach to tackle these issues by
restricting DIP optimisation to a sparse linear subspace of its parameters,
employing a synergy of dimensionality reduction techniques and second order
optimisation methods. The low-dimensionality of the subspace reduces DIP's
tendency to fit noise and allows the use of stable second order optimisation
methods, e.g., natural gradient descent or L-BFGS. Experiments across both
image restoration and tomographic tasks of different geometry and ill-posedness
show that second order optimisation within a low-dimensional subspace is
favourable in terms of the trade-off between optimisation stability and
reconstruction fidelity.
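The subspace restriction can be sketched as follows. This is a hedged toy under stated assumptions: plain gradient descent stands in for the second-order optimisers (natural gradient descent, L-BFGS) the abstract names, the objective is a generic quadratic rather than a DIP loss, and all function and variable names are this example's own.

```python
import numpy as np

def subspace_gd(loss_grad, theta0, P, steps=100, lr=0.1):
    """Optimise only the subspace coefficients c, with full parameters
    theta = theta0 + P @ c. P (D x d, with d << D) plays the role of
    the sparse linear parameter subspace; the full-space gradient is
    projected onto the subspace at every step."""
    c = np.zeros(P.shape[1])
    for _ in range(steps):
        g_full = loss_grad(theta0 + P @ c)  # gradient in full space
        c -= lr * (P.T @ g_full)            # projected update
    return theta0 + P @ c
```

Because only d coefficients are free, the optimiser cannot wander through the full D-dimensional parameter space, which is the mechanism the abstract credits for reducing noise-fitting and enabling stable second-order updates.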
Deep Image Prior Amplitude SAR Image Anonymization
This paper presents an extensive evaluation of the Deep Image Prior (DIP) technique for image inpainting on Synthetic Aperture Radar (SAR) images. SAR images are gaining popularity in various applications, but certain regions of them may need to be concealed. Image inpainting provides a solution for this. However, not all inpainting techniques are designed to work on SAR images: some are intended for use on photographs, while others must first be trained on a large set of images. In this work, we evaluate the performance of the DIP technique, which is capable of addressing both challenges: it can adapt to the image under analysis, including SAR imagery, and it does not require any training. Our results demonstrate that the DIP method achieves strong performance in terms of objective and semantic metrics. This indicates that DIP is a promising approach for inpainting SAR images and can provide high-quality results that meet the requirements of various applications.
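The data term that DIP-style inpainting typically uses can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's code: the function name is hypothetical and the network producing `output` is omitted. The key point is that the loss is evaluated only on the known pixels, so the concealed region is filled in purely by the network prior.

```python
import numpy as np

def masked_fidelity(output, image, mask):
    """DIP inpainting data term: squared error on known pixels only
    (mask == 1); the concealed region (mask == 0) contributes nothing
    and is reconstructed by the prior alone."""
    return np.sum(mask * (output - image) ** 2)
```

During optimisation, whatever the network paints inside the hole is unconstrained by the data, which is what makes DIP suitable for anonymising regions of a SAR image.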