Slow and steady feature analysis: higher order temporal coherence in video
How can unlabeled video augment visual learning? Existing methods perform
"slow" feature analysis, encouraging the representations of temporally close
frames to exhibit only small differences. While this standard approach captures
the fact that high-level visual signals change slowly over time, it fails to
capture *how* the visual content changes. We propose to generalize slow feature
analysis to "steady" feature analysis. The key idea is to impose a prior that
higher order derivatives in the learned feature space must be small. To this
end, we train a convolutional neural network with a regularizer on tuples of
sequential frames from unlabeled video. It encourages feature changes over time
to be smooth, i.e., similar to the most recent changes. Using five diverse
datasets, including unlabeled YouTube and KITTI videos, we demonstrate our
method's impact on object, scene, and action recognition tasks. We further show
that our features learned from unlabeled video can even surpass a standard
heavily supervised pretraining approach.
Comment: in Computer Vision and Pattern Recognition (CVPR) 2016, Las Vegas, NV, June 2016
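To make the higher-order prior concrete, here is a minimal PyTorch sketch of a "slow plus steady" regularizer on tuples of three sequential frames: it penalizes the first temporal difference of the features (slowness) and the second difference (steadiness). This is an illustrative simplification, not the paper's exact contrastive objective, and `encoder` is a placeholder for any feature network.

```python
import torch
import torch.nn as nn

def slow_steady_loss(encoder: nn.Module, frames: torch.Tensor,
                     slow_weight: float = 1.0, steady_weight: float = 1.0):
    """frames: (batch, 3, C, H, W) -- three sequential frames per tuple."""
    z0 = encoder(frames[:, 0])   # features of frame t
    z1 = encoder(frames[:, 1])   # features of frame t+1
    z2 = encoder(frames[:, 2])   # features of frame t+2

    d1 = z1 - z0                 # first temporal difference
    d2 = z2 - z1                 # next first difference

    slow = d1.pow(2).sum(dim=1).mean()           # ||f(x_{t+1}) - f(x_t)||^2
    steady = (d2 - d1).pow(2).sum(dim=1).mean()  # second-order difference

    return slow_weight * slow + steady_weight * steady
```

Driving the second-order term to zero encourages feature trajectories that continue in the direction of their most recent change, which is exactly the "steady" generalization of slowness described above.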
Anticipating Visual Representations from Unlabeled Video
Anticipating actions and objects before they start or appear is a difficult
problem in computer vision with several real-world applications. This task is
challenging partly because it requires leveraging extensive knowledge of the
world that is difficult to write down. We believe that a promising resource for
efficiently learning this knowledge is through readily available unlabeled
video. We present a framework that capitalizes on temporal structure in
unlabeled video to learn to anticipate human actions and objects. The key idea
behind our approach is that we can train deep networks to predict the visual
representation of images in the future. Visual representations are a promising
prediction target because they encode images at a higher semantic level than
pixels yet are automatic to compute. We then apply recognition algorithms on
our predicted representation to anticipate objects and actions. We
experimentally validate this idea on two datasets, anticipating actions one
second in the future and objects five seconds in the future.
Comment: CVPR 2016
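As a rough sketch of the core idea, one can train a predictor network to regress the representation that a fixed, pretrained feature extractor would assign to a future frame. The plain MSE objective below stands in for the paper's richer loss (which accounts for multiple possible futures); `predictor` and `target_net` are hypothetical names for the trainable network and the frozen feature extractor.

```python
import torch
import torch.nn as nn

def future_feature_loss(predictor: nn.Module, target_net: nn.Module,
                        frame_now: torch.Tensor, frame_future: torch.Tensor):
    """frame_now / frame_future: (B, C, H, W), frames K seconds apart."""
    with torch.no_grad():
        target = target_net(frame_future)  # representation to anticipate
    pred = predictor(frame_now)            # predicted future representation
    return nn.functional.mse_loss(pred, target)

# At test time, a recognition model trained on target_net features is
# applied to predictor(frame_now) to anticipate objects or actions.
```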
Recent Advances in Transfer Learning for Cross-Dataset Visual Recognition: A Problem-Oriented Perspective
This paper takes a problem-oriented perspective and presents a comprehensive
review of transfer learning methods, both shallow and deep, for cross-dataset
visual recognition. Specifically, it categorises cross-dataset recognition
into seventeen problems based on a set of carefully chosen data and label
attributes. Such a problem-oriented taxonomy has allowed us to examine how
different transfer learning approaches tackle each problem and how well each
problem has been researched to date. This comprehensive problem-oriented
review of advances in transfer learning has not only revealed the open
challenges in transfer learning for visual recognition, but also identified
the problems (eight of the seventeen) that have been scarcely studied. The
survey thus offers researchers an up-to-date technical review, and gives
machine learning practitioners a systematic reference for categorising a real
problem and looking up a possible solution.
Look, Listen and Learn
We consider the question: what can be learnt by looking at and listening to a
large number of unlabelled videos? The video itself contains a valuable, but
so far untapped, source of information: the correspondence between the visual
and the audio streams. We introduce a novel "Audio-Visual Correspondence"
learning task that makes use of this. Training
visual and audio networks from scratch, without any additional supervision
other than the raw unconstrained videos themselves, is shown to successfully
solve this task, and, more interestingly, result in good visual and audio
representations. These features set the new state-of-the-art on two sound
classification benchmarks, and perform on par with the state-of-the-art
self-supervised approaches on ImageNet classification. We also demonstrate that
the network is able to localize objects in both modalities, as well as perform
fine-grained recognition tasks.
Comment: Appears in: IEEE International Conference on Computer Vision (ICCV) 2017
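A minimal sketch of how the Audio-Visual Correspondence task can be set up: two subnetworks embed a video frame and an audio clip, and a small head classifies whether they come from the same video. The architectures here are placeholders standing in for the paper's vision and audio streams, not its exact networks.

```python
import torch
import torch.nn as nn

class AVCNet(nn.Module):
    def __init__(self, vision: nn.Module, audio: nn.Module, dim: int = 128):
        super().__init__()
        self.vision = vision   # image -> (B, dim) embedding
        self.audio = audio     # spectrogram -> (B, dim) embedding
        self.head = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(),
                                  nn.Linear(128, 2))  # correspond / not

    def forward(self, image, spectrogram):
        v = self.vision(image)
        a = self.audio(spectrogram)
        return self.head(torch.cat([v, a], dim=1))

# Positives pair a frame with its co-occurring audio; negatives pair it
# with audio drawn from a different video. Train with cross-entropy.
def avc_loss(net: AVCNet, image, spectrogram, label):
    return nn.functional.cross_entropy(net(image, spectrogram), label)
```

Because the labels come for free from whether the frame and audio were sampled together, both subnetworks can be trained from scratch on raw unconstrained videos, which is how the learned representations arise without any manual annotation.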