Designing A Composite Dictionary Adaptively From Joint Examples
We study the complementary behaviors of external and internal examples in
image restoration, and are motivated to formulate a composite dictionary design
framework. The composite dictionary consists of the global part learned from
external examples, and the sample-specific part learned from internal examples.
The dictionary atoms in both parts are further adaptively weighted to emphasize
their model statistics. Experiments demonstrate that the joint utilization of
external and internal examples leads to substantial improvements, with
successful applications in image denoising and super-resolution.
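As a rough illustration (not the authors' exact formulation), the composite dictionary can be sketched as the concatenation of an externally learned dictionary and a sample-specific one, with each atom scaled by an adaptive weight; the weights below are random placeholders for the paper's statistics-based weighting:

```python
import numpy as np

def composite_dictionary(D_global, D_internal, w_global, w_internal):
    """Stack the external (global) and internal (sample-specific)
    dictionaries and scale each atom by an adaptive weight, then
    re-normalize atoms to unit l2 norm as is standard in sparse models."""
    D = np.hstack([D_global * w_global, D_internal * w_internal])
    return D / np.linalg.norm(D, axis=0, keepdims=True)

rng = np.random.default_rng(0)
D_ext = rng.standard_normal((64, 128))   # learned from external examples
D_int = rng.standard_normal((64, 32))    # learned from the input image itself
w_ext = rng.uniform(0.5, 1.0, size=128)  # hypothetical adaptive atom weights
w_int = rng.uniform(0.5, 1.0, size=32)

D = composite_dictionary(D_ext, D_int, w_ext, w_int)
print(D.shape)  # (64, 160)
```

Any sparse-coding solver can then select atoms from either part, which is how the external and internal examples cooperate.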
Learning a Mixture of Deep Networks for Single Image Super-Resolution
Single image super-resolution (SR) is an ill-posed problem which aims to
recover high-resolution (HR) images from their low-resolution (LR)
observations. The crux of this problem lies in learning the complex mapping
between low-resolution patches and the corresponding high-resolution patches.
Prior arts have used either a mixture of simple regression models or a single
non-linear neural network for this propose. This paper proposes the method of
learning a mixture of SR inference modules in a unified framework to tackle
this problem. Specifically, a number of SR inference modules specialized in
different image local patterns are first independently applied on the LR image
to obtain various HR estimates, and the resultant HR estimates are adaptively
aggregated to form the final HR image. By selecting neural networks as the SR
inference module, the whole procedure can be incorporated into a unified
network and be optimized jointly. Extensive experiments are conducted to
investigate the relation between restoration performance and different network
architectures. Compared with other current image SR approaches, our proposed
method consistently achieves state-of-the-art restoration results on a wide
range of images while allowing more flexible design choices. The source code is
available at http://www.ifp.illinois.edu/~dingliu2/accv2016
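The adaptive aggregation step can be sketched with placeholder arrays standing in for the network outputs; pixel-wise softmax weights are one plausible choice (the paper learns the aggregation jointly with the inference modules):

```python
import numpy as np

def aggregate_estimates(hr_estimates, logits):
    """Adaptively aggregate per-module HR estimates with pixel-wise
    softmax weights. In the paper both the SR inference modules and the
    weighting branch are neural networks; here they are placeholders."""
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)       # weights sum to 1 per pixel
    return (w * hr_estimates).sum(axis=0)   # weighted sum over modules

rng = np.random.default_rng(1)
K, H, W = 3, 8, 8                 # 3 SR inference modules, 8x8 HR output
estimates = rng.standard_normal((K, H, W))
logits = rng.standard_normal((K, H, W))
hr = aggregate_estimates(estimates, logits)
print(hr.shape)  # (8, 8)
```

Because the weights form a convex combination at each pixel, the aggregated image always lies within the range spanned by the individual module estimates.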
Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections
In this paper, we propose a very deep fully convolutional encoding-decoding
framework for image restoration such as denoising and super-resolution. The
network is composed of multiple layers of convolution and de-convolution
operators, learning end-to-end mappings from corrupted images to the original
ones. The convolutional layers act as the feature extractor, which capture the
abstraction of image contents while eliminating noises/corruptions.
De-convolutional layers are then used to recover the image details. We propose
to symmetrically link convolutional and de-convolutional layers with skip-layer
connections, with which the training converges much faster and attains a
higher-quality local optimum. First, the skip connections allow the signal to
be back-propagated to bottom layers directly, thus tackling the problem of
vanishing gradients, making deep networks easier to train and consequently
improving restoration performance. Second, these skip connections pass
image details from convolutional layers to de-convolutional layers, which is
beneficial in recovering the original image. Notably, thanks to the large
model capacity, we can handle different levels of noise with a single model.
Experimental results show that our network achieves better performance than all
previously reported state-of-the-art methods.
Comment: Accepted to Proc. Advances in Neural Information Processing Systems
(NIPS'16). Content of the final version may be slightly different. Extended
version is available at http://arxiv.org/abs/1606.0892
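A toy sketch of the symmetric skip idea, with small matrices standing in for the convolution/de-convolution layers (not the paper's actual architecture): each encoder output is cached and added to the input of the mirrored decoder layer.

```python
import numpy as np

def encode_decode(x, enc_ops, dec_ops, use_skips=True):
    """Toy symmetric encoder-decoder: encoder outputs are cached and,
    when use_skips is True, added back at the mirrored decoder layer,
    so image details skip past the bottleneck."""
    feats, h = [], x
    for f in enc_ops:            # encoder: extract features, ReLU nonlinearity
        h = np.maximum(f(h), 0)
        feats.append(h)
    for g, skip in zip(dec_ops, reversed(feats)):
        if use_skips:
            h = h + skip         # skip connection carries details forward
        h = np.maximum(g(h), 0)  # decoder: recover the signal
    return h

rng = np.random.default_rng(2)
x = rng.standard_normal(16)
W1, W2 = rng.standard_normal((16, 16)), rng.standard_normal((16, 16))
enc = [lambda v: W1 @ v, lambda v: W2 @ v]
dec = [lambda v: W2.T @ v, lambda v: W1.T @ v]
y = encode_decode(x, enc, dec)
print(y.shape)  # (16,)
```

During backpropagation the additive skips give the gradient a direct path to the early layers, which is the mechanism behind the faster convergence the abstract describes.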
Self-Tuned Deep Super Resolution
Deep learning has been successfully applied to image super resolution (SR).
In this paper, we propose a deep joint super resolution (DJSR) model to exploit
both external and self similarities for SR. A Stacked Denoising Convolutional
Auto Encoder (SDCAE) is first pre-trained on external examples with proper data
augmentations. It is then fine-tuned with multi-scale self examples from each
input, where the reliability of self examples is explicitly taken into account.
We also enhance the model performance by sub-model training and selection. The
DJSR model is extensively evaluated and compared with state-of-the-art
methods, and shows noticeable performance improvements, both quantitative and
perceptual, on a wide range of images.
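The multi-scale self-example construction can be sketched as below; box averaging is a crude stand-in for the resampling a real pipeline would use, and DJSR's actual scales and reliability weighting differ:

```python
import numpy as np

def box_down(x, f):
    """Downsample by integer factor f with box averaging (a simple
    stand-in for bicubic resampling)."""
    h, w = (x.shape[0] // f) * f, (x.shape[1] // f) * f
    x = x[:h, :w]
    return x.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def self_example_pairs(img, levels=2, factor=2):
    """Multi-scale self examples: each level pairs the current image (HR)
    with its downscaled version (LR), then recurses on the LR image."""
    pairs, hr = [], img
    for _ in range(levels):
        lr = box_down(hr, factor)
        pairs.append((lr, hr))
        hr = lr
    return pairs

rng = np.random.default_rng(3)
img = rng.random((16, 16))
pairs = self_example_pairs(img)
print([(lr.shape, hr.shape) for lr, hr in pairs])
# [((8, 8), (16, 16)), ((4, 4), (8, 8))]
```

Fine-tuning the externally pre-trained model on such (LR, HR) pairs is what lets it adapt to the statistics of the specific input image.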
Enhanced Deep Residual Networks for Single Image Super-Resolution
Recent research on super-resolution has progressed with the development of
deep convolutional neural networks (DCNN). In particular, residual learning
techniques exhibit improved performance. In this paper, we develop an enhanced
deep super-resolution network (EDSR) with performance exceeding those of
current state-of-the-art SR methods. The significant performance improvement of
our model is due to optimization by removing unnecessary modules in
conventional residual networks. The performance is further improved by
expanding the model size while we stabilize the training procedure. We also
propose a new multi-scale deep super-resolution system (MDSR) and training
method, which can reconstruct high-resolution images of different upscaling
factors in a single model. The proposed methods show superior performance over
state-of-the-art methods on benchmark datasets, as demonstrated by winning the
NTIRE2017 Super-Resolution Challenge.
Comment: To appear in CVPR 2017 workshop. Best paper award of the NTIRE2017
workshop, and winner of the NTIRE2017 Challenge on Single Image
Super-Resolution
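Concretely, the "unnecessary modules" removed in EDSR are the batch-normalization layers of the ResNet-style residual block; together with residual scaling (a small constant around 0.1, used to stabilize training of very wide models), a block can be sketched as follows, with 1-D matrices standing in for convolutions:

```python
import numpy as np

def edsr_res_block(x, W1, W2, scale=0.1):
    """EDSR-style residual block: conv -> ReLU -> conv, with the batch-norm
    layers of the original ResNet block removed, and the residual branch
    scaled before being added back to the input."""
    h = np.maximum(W1 @ x, 0)  # conv + ReLU
    h = W2 @ h                 # conv (no batch norm, no final ReLU)
    return x + scale * h       # scaled residual connection

rng = np.random.default_rng(4)
x = rng.standard_normal(32)
W1, W2 = rng.standard_normal((32, 32)), rng.standard_normal((32, 32))
y = edsr_res_block(x, W1, W2)
print(y.shape)  # (32,)
```

With `scale=0` the block reduces to the identity, which is why stacking many such blocks remains trainable even at large depth and width.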
Deep Networks for Image Super-Resolution with Sparse Prior
Deep learning techniques have been successfully applied in many areas of
computer vision, including low-level image restoration problems. For image
super-resolution, several models based on deep neural networks have been
recently proposed and attained superior performance that overshadows all
previous handcrafted models. The question then arises whether large-capacity
and data-driven models have become the dominant solution to the ill-posed
super-resolution problem. In this paper, we argue that domain expertise
represented by the conventional sparse coding model is still valuable, and it
can be combined with the key ingredients of deep learning to achieve further
improved results. We show that a sparse coding model particularly designed for
super-resolution can be incarnated as a neural network, and trained in a
cascaded structure from end to end. The interpretation of the network based on
sparse coding leads to much more efficient and effective training, as well as a
reduced model size. Our model is evaluated on a wide range of images and shows
a clear advantage over existing state-of-the-art methods in terms of both
restoration accuracy and human subjective quality.
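The sparse-coding-as-network idea follows the spirit of unrolled sparse coding (e.g., LISTA): each iteration of ISTA becomes one network layer. A sketch with the layer matrices fixed from the dictionary (in the learned version they would instead be trained end to end):

```python
import numpy as np

def soft_threshold(z, theta):
    """Proximal operator of the l1 norm (the layer's nonlinearity)."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0)

def unrolled_ista(y, D, n_layers=16, lam=0.1):
    """Unrolled ISTA for min_a 0.5*||y - D a||^2 + lam*||a||_1.
    Each iteration maps to one 'layer' (W, S, soft threshold)."""
    L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
    W = D.T / L                           # 'encoder' weight
    S = np.eye(D.shape[1]) - D.T @ D / L  # 'recurrent' weight
    a = soft_threshold(W @ y, lam / L)
    for _ in range(n_layers - 1):
        a = soft_threshold(W @ y + S @ a, lam / L)
    return a

rng = np.random.default_rng(5)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.0, -0.5, 0.8]    # a sparse ground-truth code
y = D @ a_true
a_hat = unrolled_ista(y, D)
print(a_hat.shape)  # (50,)
```

Because each layer only performs matrix multiplies and a soft threshold, the unrolled network is cheap, and training its weights rather than fixing them from `D` is what yields the efficiency and model-size gains the abstract claims.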