Unsupervised Learning of Long-Term Motion Dynamics for Videos
We present an unsupervised representation learning approach that compactly
encodes the motion dependencies in videos. Given a pair of images from a video
clip, our framework learns to predict the long-term 3D motions. To reduce the
complexity of the learning framework, we propose to describe the motion as a
sequence of atomic 3D flows computed with RGB-D modality. We use a Recurrent
Neural Network based Encoder-Decoder framework to predict these sequences of
flows. We argue that in order for the decoder to reconstruct these sequences,
the encoder must learn a robust video representation that captures long-term
motion dependencies and spatial-temporal relations. We demonstrate the
effectiveness of our learned temporal representations on activity
classification across multiple modalities and datasets such as NTU RGB+D and
MSR Daily Activity 3D. Our framework generalizes to any input modality: RGB,
depth, and RGB-D videos. Comment: CVPR 201
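The abstract's core mechanism is an RNN encoder-decoder: encode features from a pair of frames into a hidden state, then unroll a decoder that emits one atomic flow descriptor per future step. The sketch below is not the authors' model; it is a minimal numpy illustration of that shape flow, with random weights standing in for trained parameters and all dimensions chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not the paper's):
D_FRAME = 16   # per-frame feature size
D_HID = 8      # RNN hidden size
D_FLOW = 4     # one "atomic 3D flow" descriptor per step
T = 5          # number of future flow steps to predict

# Random weights stand in for trained parameters.
W_enc = rng.normal(scale=0.1, size=(D_HID, D_FRAME + D_HID))
W_dec = rng.normal(scale=0.1, size=(D_HID, D_FLOW + D_HID))
W_out = rng.normal(scale=0.1, size=(D_FLOW, D_HID))

def rnn_step(W, x, h):
    """One vanilla tanh RNN step on the concatenated input and state."""
    return np.tanh(W @ np.concatenate([x, h]))

def encode(frame_feats):
    """Encoder: fold the frame-pair features into one hidden state."""
    h = np.zeros(D_HID)
    for x in frame_feats:           # two per-frame feature vectors
        h = rnn_step(W_enc, x, h)
    return h

def decode(h, steps=T):
    """Decoder: unroll the hidden state into a sequence of flow vectors,
    feeding each prediction back in as the next input."""
    flows, prev = [], np.zeros(D_FLOW)
    for _ in range(steps):
        h = rnn_step(W_dec, prev, h)
        prev = W_out @ h            # predicted atomic flow at this step
        flows.append(prev)
    return np.stack(flows)

pair = rng.normal(size=(2, D_FRAME))  # stand-in features for two frames
pred = decode(encode(pair))
print(pred.shape)                     # (5, 4): T flow vectors
```

The point the abstract argues is visible here: the decoder sees only the single encoded state `h`, so any information it needs about long-term motion must be compressed into that representation.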
Generative Image Modeling Using Spatial LSTMs
Modeling the distribution of natural images is challenging, partly because of
strong statistical dependencies which can extend over hundreds of pixels.
Recurrent neural networks have been successful in capturing long-range
dependencies in a number of problems but only recently have found their way
into generative image models. We here introduce a recurrent image model based
on multi-dimensional long short-term memory units which are particularly suited
for image modeling due to their spatial structure. Our model scales to images
of arbitrary size and its likelihood is computationally tractable. We find that
it outperforms the state of the art in quantitative comparisons on several
image datasets and produces promising results when used for texture synthesis
and inpainting.
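The key property claimed is a tractable likelihood: pixels are generated in raster order, each conditioned on hidden states carried from its left and top neighbours, so the joint density factorises over pixels. A toy numpy sketch of such a 2D recurrence follows; it is not the paper's spatial LSTM (no gates, random weights, scalar pixels), only an illustration of the causal scan.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W_IMG, D = 6, 6, 4            # toy image height/width and hidden size

# Random parameters standing in for trained weights.
Wx = rng.normal(scale=0.3, size=(D, 1))   # current-pixel input weights
Wl = rng.normal(scale=0.3, size=(D, D))   # left-neighbour hidden weights
Wt = rng.normal(scale=0.3, size=(D, D))   # top-neighbour hidden weights
w_out = rng.normal(scale=0.3, size=D)     # hidden state -> pixel mean

def raster_scan(img):
    """Run a 2D tanh recurrence in raster order. The prediction for
    pixel (i, j) depends only on already-visited context, which is why
    the joint likelihood factorises and stays tractable."""
    h = np.zeros((H, W_IMG, D))
    preds = np.zeros((H, W_IMG))
    for i in range(H):
        for j in range(W_IMG):
            left = h[i, j - 1] if j > 0 else np.zeros(D)
            top = h[i - 1, j] if i > 0 else np.zeros(D)
            # Predict pixel (i, j) from neighbour states alone ...
            preds[i, j] = w_out @ np.tanh(Wl @ left + Wt @ top)
            # ... then fold the observed pixel into the new state.
            h[i, j] = np.tanh(Wx @ img[i, j:j + 1] + Wl @ left + Wt @ top)
    return preds

img = rng.normal(size=(H, W_IMG))
p = raster_scan(img)
print(p.shape)                            # (6, 6)
```

Because the recurrence only ever looks left and up, the same scan works for images of arbitrary size, matching the scaling claim in the abstract.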
A Disentangled Recognition and Nonlinear Dynamics Model for Unsupervised Learning
This paper takes a step towards temporal reasoning in a dynamically changing
video, not in the pixel space that constitutes its frames, but in a latent
space that describes the non-linear dynamics of the objects in its world. We
introduce the Kalman variational auto-encoder, a framework for unsupervised
learning of sequential data that disentangles two latent representations: an
object's representation, coming from a recognition model, and a latent state
describing its dynamics. As a result, the evolution of the world can be
imagined and missing data imputed, both without the need to generate high
dimensional frames at each time step. The model is trained end-to-end on videos
of a variety of simulated physical systems, and outperforms competing methods
in generative and missing data imputation tasks. Comment: NIPS 201
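The imputation claim rests on the Kalman filter: with linear Gaussian latent dynamics, a missing time step simply runs the predict equations without an update, so the world can be rolled forward without generating frames. The sketch below is a generic scalar Kalman filter with hand-picked parameters, not the KVAE itself; the observations `a_t` stand in for encodings that the recognition model would produce.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy scalar linear Gaussian state space standing in for the latent
# dynamics: z_t = A z_{t-1} + noise,  a_t = C z_t + noise.
A, C = 0.9, 1.0
Q, R = 0.1, 0.2          # process / observation noise variances

def kalman_filter(obs):
    """Filter a sequence whose missing entries are None. Missing steps
    run only the predict step, which is how latent-space imputation
    avoids generating high-dimensional frames."""
    z, P = 0.0, 1.0                       # state mean and variance
    means = []
    for a in obs:
        z, P = A * z, A * A * P + Q       # predict
        if a is not None:                 # update only when observed
            K = P * C / (C * C * P + R)
            z = z + K * (a - C * z)
            P = (1 - K * C) * P
        means.append(z)
    return means

# Simulate latents, observe them noisily, then drop a few steps.
true_z = [1.0]
for _ in range(9):
    true_z.append(A * true_z[-1] + rng.normal(scale=Q ** 0.5))
obs = [C * z + rng.normal(scale=R ** 0.5) for z in true_z]
obs[3] = obs[4] = obs[5] = None           # "missing data" to impute

est = kalman_filter(obs)
print(len(est))                           # 10: a state estimate per step
```

The disentanglement in the paper separates this dynamics state from the recognition model's object representation; only the latter touches pixels.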