Seven ways to improve example-based single image super resolution
In this paper we present seven techniques that everybody should know to
improve example-based single image super resolution (SR): 1) augmentation of
data, 2) use of large dictionaries with efficient search structures, 3)
cascading, 4) image self-similarities, 5) back projection refinement, 6)
enhanced prediction by consistency check, and 7) context reasoning. We validate
our seven techniques on standard SR benchmarks (i.e. Set5, Set14, B100) and
methods (i.e. A+, SRCNN, ANR, Zeyde, Yang) and achieve substantial
improvements. The techniques are widely applicable and require no changes, or only minor adjustments, to the SR methods. Moreover, our Improved A+ (IA) method sets new state-of-the-art results, outperforming A+ by up to 0.9dB in average PSNR whilst maintaining low time complexity.
Comment: 9 pages
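As a concrete illustration of technique 5, back projection refinement, here is a minimal sketch of iterative back projection: the super-resolved estimate is repeatedly corrected so that, once downscaled, it remains consistent with the observed low-resolution input. The function name, the resampling via scipy.ndimage.zoom, the step size and the iteration count are assumptions for illustration, not the exact settings used by the paper.

```python
import numpy as np
from scipy.ndimage import zoom

def back_projection_refine(sr, lr, n_iters=10, step=1.0):
    """Iteratively nudge the SR estimate `sr` so that its downscaled
    version matches the observed LR image `lr` (illustrative sketch)."""
    sr = sr.astype(np.float64).copy()
    lr = lr.astype(np.float64)
    down = np.array(lr.shape) / np.array(sr.shape)   # HR -> LR zoom factors
    up = np.array(sr.shape) / np.array(lr.shape)     # LR -> HR zoom factors
    for _ in range(n_iters):
        simulated_lr = zoom(sr, down, order=3)       # simulate the LR image
        residual = lr - simulated_lr                 # reconstruction error in LR space
        sr += step * zoom(residual, up, order=3)     # project the error back to HR
    return sr
```

Techniques 1 and 6 (augmentation and enhanced prediction) can be layered on top of any SR method in a similar plug-in fashion, e.g. by super-resolving rotated and flipped versions of the input and averaging the back-transformed results.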
Some like it hot - visual guidance for preference prediction
For people, first impressions of someone are of decisive importance and are hard to alter through further information. This begs the question whether a computer can reach the same judgement. Earlier research has already shown that age, gender, and average attractiveness can be estimated with reasonable precision. We not only improve on this state of the art, but also predict, based on someone's known preferences, how much that particular person is attracted to a
novel face. Our computational pipeline comprises a face detector, convolutional
neural networks for the extraction of deep features, standard support vector
regression for gender, age and facial beauty, and, as the main novelties, visually regularized collaborative filtering to infer inter-person preferences, as well as a novel regression technique for handling visual queries without rating
history. We validate the method using a very large dataset from a dating site
as well as images from celebrities. Our experiments yield convincing results,
i.e. we predict 76% of the ratings correctly solely based on an image, and
reveal some sociologically relevant conclusions. We also validate our
collaborative filtering solution on the standard MovieLens rating dataset,
augmented with movie posters, to predict an individual's movie rating. We
demonstrate our algorithms on howhot.io, which went viral around the Internet with more than 50 million pictures evaluated in the first month.
Comment: accepted for publication at CVPR 201
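To make the collaborative-filtering component more tangible, below is a minimal sketch of one way rating prediction can be regularized with visual features: a matrix factorization in which each face's latent vector is pulled towards a linear projection of its deep feature, so that a novel face with no rating history can still be scored. The function name, hyperparameters and plain SGD updates are illustrative assumptions; the formulation actually used in the paper may differ.

```python
import numpy as np

def visual_regularized_mf(ratings, feats, k=32, lam=0.1, mu=0.5,
                          lr=0.01, epochs=20, seed=0):
    """Matrix factorization whose item factors are tied to visual features.

    ratings : list of (user_idx, item_idx, rating) triples
    feats   : array of shape (n_items, d) with deep visual features
    """
    rng = np.random.default_rng(seed)
    n_users = 1 + max(u for u, _, _ in ratings)
    U = 0.1 * rng.standard_normal((n_users, k))          # user latent factors
    V = 0.1 * rng.standard_normal((feats.shape[0], k))   # item latent factors
    W = 0.1 * rng.standard_normal((feats.shape[1], k))   # visual projection
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]          # rating residual
            vis = feats[i] @ W             # visual "prior" for item i
            U[u] += lr * (err * V[i] - lam * U[u])
            V[i] += lr * (err * U[u] - lam * V[i] - mu * (V[i] - vis))
            W += lr * mu * np.outer(feats[i], V[i] - vis)
    return U, V, W
```

For a face without any rating history, feats_new @ W can serve as a surrogate latent vector, so user u's rating is predicted as U[u] @ (feats_new @ W); this illustrates the spirit of handling visual queries without rating history, though the paper's actual regression technique may differ.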
A Study of Forward-Forward Algorithm for Self-Supervised Learning
Self-supervised representation learning has seen remarkable progress in the
last few years, with some of the recent methods being able to learn useful
image representations without labels. These methods are trained using
backpropagation, the de facto standard. Recently, Geoffrey Hinton proposed the
forward-forward algorithm as an alternative training method. It utilizes two
forward passes and a separate loss function for each layer to train the network
without backpropagation.
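As a rough illustration of this training scheme, the sketch below implements a single forward-forward layer in PyTorch: the layer is optimized locally to yield high "goodness" (sum of squared activations) on positive inputs and low goodness on negative inputs, and only its detached outputs are handed to the next layer. The softplus loss, the goodness threshold and the layer sizes follow common re-implementations of Hinton's proposal and are assumptions, not the exact configuration benchmarked in this study.

```python
import torch
import torch.nn.functional as F

class FFLayer(torch.nn.Module):
    """One forward-forward layer with its own local objective (sketch)."""

    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = torch.nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.linear.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only the direction of the previous layer's activity,
        # not its goodness, is passed on.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return F.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = sum of squared activations per sample.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        # Push positive goodness above the threshold, negative goodness below it.
        loss = F.softplus(torch.cat([self.threshold - g_pos,
                                     g_neg - self.threshold])).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Detach so the next layer trains on these outputs without any
        # gradient flowing backwards through this layer.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

A network is then a greedy stack of such layers, each consuming the detached outputs of the previous one, which is what replaces end-to-end backpropagation.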
In this work, we study for the first time the performance of forward-forward vs. backpropagation for self-supervised representation learning
and provide insights into the learned representation spaces. Our benchmark
employs four standard datasets, namely MNIST, F-MNIST, SVHN and CIFAR-10, and
three commonly used self-supervised representation learning techniques, namely
rotation, flip and jigsaw.
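For instance, the rotation pretext task can be set up roughly as follows: every image is rotated by one of four right angles and the network, trained with either backpropagation or forward-forward, has to predict which rotation was applied. The helper below is an illustrative assumption about this setup, not the exact preprocessing used in the study.

```python
import torch

def rotation_pretext_batch(images):
    """Turn a batch of images (N, C, H, W) into a rotation-prediction batch:
    each image appears rotated by 0, 90, 180 and 270 degrees, and the label
    is the index of the applied rotation."""
    rotated, labels = [], []
    for k in range(4):                                    # k * 90 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)
```

The flip and jigsaw pretext tasks are set up analogously, with the prediction target being the flip applied or the permutation of image patches.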
Our main finding is that while the forward-forward algorithm performs
comparably to backpropagation during (self-)supervised training, the transfer
performance lags significantly behind in all the studied settings. This
may be caused by a combination of factors, including having a loss function for
each layer and the way the supervised training is realized in the
forward-forward paradigm. In comparison to backpropagation, the forward-forward
algorithm focuses more on the boundaries and drops part of the information
unnecessary for making decisions, which harms the representation learning goal.
Further investigation and research are necessary to stabilize the forward-forward strategy for self-supervised learning and to make it work beyond the datasets and configurations demonstrated by Geoffrey Hinton.
- …