Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images
In this paper, we study the challenging problem of predicting the dynamics of
objects in static images. Given a query object in an image, our goal is to
provide a physical understanding of the object in terms of the forces acting
upon it and its long-term motion in response to those forces. Direct and
explicit estimation of the forces and the motion of objects from a single image
is extremely challenging. We define intermediate physical abstractions called
Newtonian scenarios and introduce the Newtonian Neural Network (N³) that learns
to map a single image to a state in a Newtonian scenario. Our experimental
evaluations show that our method can reliably predict the dynamics of a query
object from a single image. In addition, our approach can provide physical
reasoning that supports the predicted dynamics in terms of velocity and force
vectors. To spur research in this direction, we compiled the Visual Newtonian
Dynamics (VIND) dataset, which includes 6806 videos aligned with Newtonian
scenarios represented using game engines, and 4516 still images with their
ground-truth dynamics.
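
To make the matching idea concrete, here is a minimal PyTorch sketch: a two-branch network embeds the query image and each rendered state of a Newtonian scenario into a shared space, and the predicted state is the one with the highest cosine similarity. The `NewtonianMatcher` name, all layer sizes, and the use of single rendered frames (rather than the paper's scenario videos) are illustrative assumptions, not the authors' released architecture.

```python
# Sketch of image-to-Newtonian-scenario state matching (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

def small_cnn(embed_dim):
    # A tiny conv stack standing in for a real backbone.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, embed_dim),
    )

class NewtonianMatcher(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.image_branch = small_cnn(embed_dim)     # embeds the still image
        self.scenario_branch = small_cnn(embed_dim)  # embeds a rendered state

    def forward(self, image, scenario_states):
        # image: (B, 3, H, W); scenario_states: (S, 3, H, W)
        img = F.normalize(self.image_branch(image), dim=-1)                  # (B, D)
        states = F.normalize(self.scenario_branch(scenario_states), dim=-1)  # (S, D)
        return img @ states.t()  # (B, S) cosine similarity to each state

model = NewtonianMatcher()
sims = model(torch.randn(2, 3, 64, 64), torch.randn(12, 3, 64, 64))
pred_state = sims.argmax(dim=1)  # most similar scenario state per image
```

Once an image is matched to a scenario state, the force and velocity vectors attached to that state by the game-engine simulation can be read off as the physical reasoning behind the prediction.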
Unsupervised Learning of Long-Term Motion Dynamics for Videos
We present an unsupervised representation learning approach that compactly
encodes the motion dependencies in videos. Given a pair of images from a video
clip, our framework learns to predict the long-term 3D motions. To reduce the
complexity of the learning framework, we propose to describe the motion as a
sequence of atomic 3D flows computed from the RGB-D modality. We use a Recurrent
Neural Network-based Encoder-Decoder framework to predict these sequences of
flows. We argue that in order for the decoder to reconstruct these sequences,
the encoder must learn a robust video representation that captures long-term
motion dependencies and spatial-temporal relations. We demonstrate the
effectiveness of our learned temporal representations on activity
classification across multiple modalities and datasets such as NTU RGB+D and
MSR Daily Activity 3D. Our framework is generic to any input modality, i.e.,
RGB, depth, and RGB-D videos.
Comment: CVPR 2017
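
The encoder-decoder idea can be sketched in a few lines of PyTorch: a small CNN encodes a stacked RGB-D frame pair into a latent video code, and an LSTM decoder unrolls it into a sequence of coarse 3D flow maps. The `MotionEncoderDecoder` name, the layer sizes, the flow resolution, and the feedback scheme are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of an RNN encoder-decoder predicting atomic 3D flow sequences.
import torch
import torch.nn as nn

class MotionEncoderDecoder(nn.Module):
    def __init__(self, hidden=256, flow_channels=3, size=16):
        super().__init__()
        self.size = size
        # Encoder over the concatenated RGB-D frame pair
        # (2 frames x 4 channels = 8 input channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(8, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden),
        )
        # Decoder: an LSTM unrolled for T steps, each step emitting
        # one coarse 3D flow map (dx, dy, dz per cell).
        self.decoder = nn.LSTMCell(hidden, hidden)
        self.flow_head = nn.Linear(hidden, flow_channels * size * size)

    def forward(self, frame_pair, steps=8):
        h = self.encoder(frame_pair)  # (B, hidden) latent video code
        c = torch.zeros_like(h)
        x = h
        flows = []
        for _ in range(steps):
            h, c = self.decoder(x, (h, c))
            flows.append(self.flow_head(h).view(-1, 3, self.size, self.size))
            x = h  # feed the new state back in as the next input
        return torch.stack(flows, dim=1)  # (B, T, 3, size, size)

model = MotionEncoderDecoder()
pair = torch.randn(2, 8, 64, 64)  # two stacked RGB-D frames
pred_flows = model(pair)          # (2, 8, 3, 16, 16)
# Training would regress pred_flows against flows derived from RGB-D video;
# the encoder's output then serves as the learned video representation.
```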