Seven ways to improve example-based single image super resolution
In this paper we present seven techniques that everybody should know to
improve example-based single image super resolution (SR): 1) augmentation of
data, 2) use of large dictionaries with efficient search structures, 3)
cascading, 4) image self-similarities, 5) back projection refinement, 6)
enhanced prediction by consistency check, and 7) context reasoning. We validate
our seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and
methods (i.e. A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial
improvements. The techniques are widely applicable and require no changes or
only minor adjustments to the SR methods. Moreover, our Improved A+ (IA) method
sets new state-of-the-art results, outperforming A+ by up to 0.9dB in average
PSNR whilst maintaining low time complexity.
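Technique 5, back projection refinement, enforces consistency between the SR estimate and the observed LR image by iteratively re-projecting and correcting the residual. A minimal numpy sketch, assuming block-average downsampling and nearest-neighbour upsampling of the residual (the function name, step size, and sampling operators are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def back_project(sr, lr, scale, n_iters=10, step=1.0):
    """Iterative back projection: nudge the SR estimate so that its
    downsampled version matches the observed LR image."""
    sr = sr.astype(np.float64).copy()
    h, w = lr.shape
    for _ in range(n_iters):
        # Downsample the current SR estimate by simple block averaging.
        down = sr.reshape(h, scale, w, scale).mean(axis=(1, 3))
        # Residual between the observed LR image and the re-projection.
        err = lr - down
        # Upsample the residual (nearest-neighbour) and add it back.
        sr += step * np.kron(err, np.ones((scale, scale)))
    return sr
```

With these particular paired operators the residual is eliminated after one step; in practice the downsampling kernel should match the one assumed by the degradation model.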
Deep Autoencoder for Combined Human Pose Estimation and Body Model Upscaling
We present a method for simultaneously estimating 3D human pose and body
shape from a sparse set of wide-baseline camera views. We train a symmetric
convolutional autoencoder with a dual loss that enforces learning of a latent
representation that encodes skeletal joint positions, and at the same time
learns a deep representation of volumetric body shape. We harness the latter to
up-scale input volumetric data by a factor of , whilst recovering a
3D estimate of joint positions with equal or greater accuracy than the state of
the art. Inference runs in real-time (25 fps) and has the potential for passive
human behaviour monitoring where there is a requirement for high fidelity
estimation of human body shape and pose.
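The dual loss described above supervises the latent code with skeletal joint positions while supervising the decoder output with the volumetric body shape. The abstract does not give the loss form; a minimal numpy sketch under the assumption that both terms are mean-squared errors, with illustrative names and weights:

```python
import numpy as np

def dual_loss(latent, joints_gt, recon, volume_gt, w_pose=1.0, w_shape=1.0):
    """Dual loss sketch: the latent code is pushed to encode the 3D joint
    positions, while the reconstruction is pushed to match the target
    volumetric body shape. Both terms here are MSE (an assumption)."""
    pose_term = np.mean((latent - joints_gt.ravel()) ** 2)
    shape_term = np.mean((recon - volume_gt) ** 2)
    return w_pose * pose_term + w_shape * shape_term
```

The two weighted terms tie one representation to two tasks, which is what lets the same latent space serve both pose estimation and shape upscaling.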
How Does the Low-Rank Matrix Decomposition Help Internal and External Learnings for Super-Resolution
Wisely utilizing internal and external learning methods is a new
challenge in the super-resolution problem. To address this issue, we analyze the
attributes of the two methodologies and make two observations about their
recovered details: 1) they are complementary in both the feature space and the
image plane, and 2) they are sparsely distributed in the spatial domain. These
observations inspire us to propose a
low-rank solution which effectively integrates two learning methods and then
achieves a superior result. To fit this solution, the internal learning method
and the external learning method are tailored to produce multiple preliminary
results. Our theoretical analysis and experiments show that the proposed
low-rank solution does not require massive inputs to guarantee performance,
thereby simplifying the design of the two learning methods for the solution.
Extensive experiments show that the proposed solution improves on either single
learning method in both qualitative and quantitative assessments. Surprisingly,
it shows even stronger capability on noisy images and outperforms
state-of-the-art methods.
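One common way to realize a low-rank integration of several preliminary results is a truncated SVD over the stacked candidates. A minimal numpy sketch, assuming the preliminary internal and external results are same-sized images and that rank-truncated averaging is the fusion rule (the function name and fusion rule are illustrative, not the paper's exact formulation):

```python
import numpy as np

def low_rank_fuse(candidates, rank=1):
    """Fuse preliminary SR results with a truncated SVD: stack each
    flattened candidate as a matrix column and keep only the leading
    singular components, acting as a low-rank consensus of the inputs."""
    shape = candidates[0].shape
    M = np.stack([c.ravel() for c in candidates], axis=1)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    M_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    # Average the rank-reduced columns back into a single image.
    return M_lr.mean(axis=1).reshape(shape)
```

Because sparse, complementary errors tend to lie outside the shared low-rank component, truncation suppresses them while keeping the detail the candidates agree on.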