Seven ways to improve example-based single image super resolution
In this paper we present seven techniques that everybody should know to
improve example-based single image super resolution (SR): 1) augmentation of
data, 2) use of large dictionaries with efficient search structures, 3)
cascading, 4) image self-similarities, 5) back projection refinement, 6)
enhanced prediction by consistency check, and 7) context reasoning. We validate
our seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and
methods (i.e. A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial
improvements. The techniques are widely applicable and require no changes, or
only minor adjustments, to the SR methods. Moreover, our Improved A+ (IA) method
sets new state-of-the-art results, outperforming A+ by up to 0.9dB in average
PSNR whilst maintaining a low time complexity. Comment: 9 pages
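As a rough illustration of technique 6 (enhanced prediction), the input can be super-resolved under all eight rotations/flips, with each output mapped back and averaged. This is a minimal sketch: `toy_sr` is a hypothetical stand-in for any of the SR methods mentioned (A+, SRCNN, etc.), not part of the paper.

```python
import numpy as np

def toy_sr(img, scale=2):
    # Stand-in for any example-based SR method (A+, SRCNN, ...);
    # simple pixel replication keeps the sketch runnable.
    return np.kron(img, np.ones((scale, scale)))

def enhanced_prediction(img, sr=toy_sr, scale=2):
    """Super-resolve all 8 rotated/flipped versions of the input,
    undo each transform on the output, and average the results."""
    outputs = []
    for k in range(4):                      # rotations by 0/90/180/270 degrees
        for flip in (False, True):
            t = np.rot90(img, k)
            if flip:
                t = np.fliplr(t)
            out = sr(t, scale)
            if flip:                        # invert the flip first,
                out = np.fliplr(out)
            outputs.append(np.rot90(out, -k))   # then invert the rotation
    return np.mean(outputs, axis=0)

lr = np.arange(9, dtype=float).reshape(3, 3)
hr = enhanced_prediction(lr)                # averaged 6x6 prediction
```

Averaging over the dihedral group acts as a consistency check: predictions that disagree across equivalent views of the same image are smoothed out.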
Cascaded Detail-Preserving Networks for Super-Resolution of Document Images
The accuracy of OCR is usually affected by the quality of the input document
image and different kinds of marred document images hamper the OCR results.
Among these scenarios, the low-resolution image is a common and challenging
case. In this paper, we propose the cascaded networks for document image
super-resolution. Our model is composed of Detail-Preserving Networks with
small magnification. The loss function with perceptual terms is designed to
simultaneously preserve the original patterns and enhance the edge of the
characters. These networks are trained with the same architecture and different
parameters and then assembled into a pipeline model with a larger
magnification. The low-resolution images are upscaled gradually by passing
through each Detail-Preserving Network until the final high-resolution images
are produced.
Through extensive experiments on two scanning document image datasets, we
demonstrate that the proposed approach outperforms recent state-of-the-art
image super-resolution methods, and that combining it with a standard OCR system
leads to significant improvements in the recognition results.
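The cascading idea can be sketched as a pipeline of small-magnification stages whose scale factors multiply. This is a hedged toy version: `upscale_x2` is a hypothetical placeholder for one trained Detail-Preserving Network, not the paper's actual model.

```python
import numpy as np

def upscale_x2(img):
    # Stand-in for one trained Detail-Preserving Network with a small
    # (x2) magnification factor; pixel replication keeps the sketch runnable.
    return np.kron(img, np.ones((2, 2)))

def cascade(img, stages):
    # Assemble small-magnification models into a pipeline with a larger
    # overall magnification: the image is upscaled gradually, stage by stage.
    for stage in stages:
        img = stage(img)
    return img

lr = np.ones((4, 4))
hr = cascade(lr, [upscale_x2, upscale_x2])  # two x2 stages -> x4 overall
```

In the paper's setting each stage shares the architecture but has its own parameters, so errors introduced at one magnification step can be corrected by the next.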
Learning Deep CNN Denoiser Prior for Image Restoration
Model-based optimization methods and discriminative learning methods have
been the two dominant strategies for solving various inverse problems in
low-level vision. Typically, these two kinds of methods have their respective
merits and drawbacks: model-based optimization methods are flexible for
handling different inverse problems but usually require time-consuming
optimization with sophisticated priors to achieve good performance, while
discriminative learning methods have fast testing speed but an application
range greatly restricted by their specialized task. Recent works have revealed
that, with the aid of variable splitting techniques, a denoiser prior can be
plugged in as a modular part of model-based optimization methods to solve other
inverse problems (e.g., deblurring). Such an integration brings a considerable
advantage when the denoiser is obtained via discriminative learning. However,
the study of integration with a fast discriminative denoiser prior is still
lacking. To this end, this paper aims to train a set of fast and effective CNN
(convolutional neural network) denoisers and integrate them into a model-based
optimization method to solve other inverse problems. Experimental results
demonstrate that the learned set of denoisers not only achieve promising
Gaussian denoising results but can also be used as a prior to deliver good
performance for various low-level vision applications. Comment: Accepted to CVPR 2017. Code: https://github.com/cszn/ircn
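The variable-splitting scheme can be illustrated with a minimal half-quadratic-splitting loop that alternates a data-fidelity step with a plugged-in denoiser. This sketch assumes identity degradation (pure denoising) and uses a simple box filter in place of a learned CNN denoiser; both are assumptions of the sketch, not the paper's method.

```python
import numpy as np

def denoiser(z):
    # Stand-in for a learned CNN denoiser: a 3x3 box filter so the
    # sketch runs without any trained model.
    pad = np.pad(z, 1, mode='edge')
    out = np.zeros_like(z)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + z.shape[0], dx:dx + z.shape[1]]
    return out / 9.0

def plug_and_play_hqs(y, mu=1.0, iters=10):
    """Half-quadratic splitting with identity degradation: alternate the
    plugged-in denoiser (prior step) with a closed-form data-fidelity step."""
    x = y.copy()
    for _ in range(iters):
        z = denoiser(x)                  # prior step: modular denoiser
        x = (y + mu * z) / (1.0 + mu)    # data step: argmin ||x-y||^2 + mu||x-z||^2
    return x

rng = np.random.default_rng(0)
clean = np.zeros((16, 16))
clean[4:12, 4:12] = 1.0
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
restored = plug_and_play_hqs(noisy)
```

The key point is modularity: only the data-fidelity step depends on the degradation operator, so the same denoiser can serve as the prior for deblurring, super-resolution, or other inverse problems.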