Anticipating Visual Representations from Unlabeled Video
Anticipating actions and objects before they start or appear is a difficult
problem in computer vision with several real-world applications. This task is
challenging partly because it requires leveraging extensive knowledge of the
world that is difficult to write down. We believe that a promising resource for
efficiently learning this knowledge is through readily available unlabeled
video. We present a framework that capitalizes on temporal structure in
unlabeled video to learn to anticipate human actions and objects. The key idea
behind our approach is that we can train deep networks to predict the visual
representation of images in the future. Visual representations are a promising
prediction target because they encode images at a higher semantic level than
pixels yet can be computed automatically. We then apply recognition algorithms on
our predicted representation to anticipate objects and actions. We
experimentally validate this idea on two datasets, anticipating actions one
second in the future and objects five seconds in the future.
Comment: CVPR 2016
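The pipeline the abstract describes, regressing the future frame's feature vector and then running a recognizer on the prediction, can be caricatured in a few lines. The linear predictor and nearest-centroid recognizer below are illustrative stand-ins for the paper's deep networks; all weights and class names are hypothetical.

```python
import numpy as np

# Toy class "representations": centroids in a 2-D feature space, used by a
# nearest-centroid recognizer (a stand-in for a trained classifier).
centroids = {"answer_phone": np.array([1.0, 0.0]),
             "hug": np.array([0.0, 1.0])}

def recognize(feat):
    # Classify a feature vector by its nearest class centroid.
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))

# Stand-in for the learned regressor: a fixed linear map that "advances"
# the current frame's representation one second into the future.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # hypothetical learned weights

def anticipate(current_feat):
    predicted_future_feat = W @ current_feat   # predict the future representation
    return recognize(predicted_future_feat)    # recognize on the prediction, not the pixels

action = anticipate(np.array([0.9, 0.1]))      # a frame whose features precede "hug"
```

The key design point the sketch preserves is that recognition never sees future pixels: the classifier runs entirely on the predicted representation.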
Learning Temporal Transformations From Time-Lapse Videos
Based on life-long observations of physical, chemical, and biological phenomena
in the natural world, humans can often easily picture in their minds what an
object will look like in the future. But, what about computers? In this paper,
we learn computational models of object transformations from time-lapse videos.
In particular, we explore the use of generative models to create depictions of
objects at future times. These models explore several different prediction
tasks: generating a future state given a single depiction of an object,
generating a future state given two depictions of an object at different times,
and generating future states recursively in a recurrent framework. We provide
both qualitative and quantitative evaluations of the generated results, and
also conduct a human evaluation to compare variations of our models.
Comment: ECCV 2016
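The recurrent variant above, where each generated future state is fed back in to roll further forward in time, reduces to a simple loop. The one-step "transformation" here is a hypothetical linear decay standing in for the paper's learned generative model.

```python
import numpy as np

def one_step(state):
    # Hypothetical one-step transformation (e.g. a flower wilting slightly);
    # in the paper this would be a generative model's single prediction.
    return 0.9 * state

def rollout(initial_state, n_steps):
    # Generate future states recursively: each prediction becomes the
    # input for the next step, as in the recurrent framework.
    states, s = [], initial_state
    for _ in range(n_steps):
        s = one_step(s)
        states.append(s.copy())
    return states

future = rollout(np.array([1.0, 1.0]), n_steps=3)  # three recursive predictions
```

Errors compound under this scheme (0.9³ of the signal survives three steps here), which is why recursive generation is evaluated separately from the single- and two-input prediction tasks.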
Newtonian Image Understanding: Unfolding the Dynamics of Objects in Static Images
In this paper, we study the challenging problem of predicting the dynamics of
objects in static images. Given a query object in an image, our goal is to
provide a physical understanding of the object in terms of the forces acting
upon it and its long-term motion in response to those forces. Direct and
explicit estimation of the forces and the motion of objects from a single image
is extremely challenging. We define intermediate physical abstractions called
Newtonian scenarios and introduce the Newtonian Neural Network (N3) that learns
to map a single image to a state in a Newtonian scenario. Our experimental
evaluations show that our method can reliably predict dynamics of a query
object from a single image. In addition, our approach can provide physical
reasoning that supports the predicted dynamics in terms of velocity and force
vectors. To spur research in this direction, we compiled the Visual Newtonian
Dynamics (VIND) dataset, which includes 6806 videos aligned with Newtonian
scenarios represented using game engines, and 4516 still images with their
ground truth dynamics.
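The core mapping the abstract describes, matching a single image to a state in a game-engine Newtonian scenario and reading the physics off that state, can be sketched as a nearest-match lookup. The embeddings, scenario names, and force vectors below are all hypothetical placeholders for the learned network and the simulated scenarios.

```python
import numpy as np

# Hypothetical embeddings of game-engine scenario states, each annotated with
# the force vector the simulation provides as physical ground truth.
scenario_states = [
    {"name": "falling",      "embed": np.array([1.0, 0.0]), "force": (0.0, -9.8)},
    {"name": "sliding_down", "embed": np.array([0.0, 1.0]), "force": (3.0, -3.0)},
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def map_to_scenario(image_embed):
    # Pick the scenario state whose embedding best matches the query image;
    # the matched state then supplies the physical reasoning (force vector).
    best = max(scenario_states, key=lambda s: cosine(image_embed, s["embed"]))
    return best["name"], best["force"]

name, force = map_to_scenario(np.array([0.9, 0.1]))  # image resembling a falling object
```

The design point preserved here is the indirection: the network never regresses forces directly from pixels, it selects a simulated scenario state whose annotated dynamics are then borrowed.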
Anticipating Daily Intention using On-Wrist Motion Triggered Sensing
Anticipating human intention by observing one's actions has many
applications. For instance, picking up a cellphone, then a charger (actions)
implies that one wants to charge the cellphone (intention). By anticipating the
intention, an intelligent system can guide the user to the closest power
outlet. We propose an on-wrist motion triggered sensing system for anticipating
daily intentions, where the on-wrist sensors help us to persistently observe
one's actions. The core of the system is a novel Recurrent Neural Network (RNN)
and Policy Network (PN), where the RNN encodes visual and motion observation to
anticipate intention, and the PN parsimoniously triggers the process of visual
observation to reduce the computational cost. We jointly train the whole
network using policy gradient and cross-entropy losses. For evaluation, we
collect the first daily "intention" dataset, consisting of 2379 videos with 34
intentions and 164 unique action sequences. Our method achieves 92.68%, 90.85%,
and 97.56% accuracy on three users while processing only 29% of the visual
observations on average.
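The trigger mechanism behind that 29% figure can be sketched as a per-step policy: cheap wrist-motion features are always read, and the expensive visual pipeline runs only when the policy fires. The threshold and motion values below are hypothetical; the real policy network is learned with policy gradient rather than hand-set.

```python
# A minimal sketch of motion-triggered sensing, assuming a hand-set threshold
# in place of the learned policy network.

def policy(motion_magnitude, threshold=0.5):
    # Trigger the (expensive) camera processing only when wrist motion
    # suggests an action is underway.
    return motion_magnitude > threshold

def run(motion_stream):
    processed = 0
    for m in motion_stream:
        if policy(m):
            processed += 1  # here the visual encoder and RNN update would run
        # otherwise the RNN state is advanced from motion features alone
    return processed / len(motion_stream)

# Fraction of time steps on which visual observation was actually processed.
fraction = run([0.1, 0.9, 0.2, 0.8, 0.3, 0.1, 0.7, 0.05, 0.6, 0.2])
```

The trade-off the policy optimizes is exactly this fraction against anticipation accuracy: fire too rarely and the RNN misses the actions that reveal intention.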
Im2Flow: Motion Hallucination from Static Images for Action Recognition
Existing methods to recognize actions in static images take the images at
their face value, learning the appearances---objects, scenes, and body
poses---that distinguish each action class. However, such models are deprived
of the rich dynamic structure and motions that also define human activity. We
propose an approach that hallucinates the unobserved future motion implied by a
single snapshot to help static-image action recognition. The key idea is to
learn a prior over short-term dynamics from thousands of unlabeled videos,
infer the anticipated optical flow on novel static images, and then train
discriminative models that exploit both streams of information. Our main
contributions are twofold. First, we devise an encoder-decoder convolutional
neural network and a novel optical flow encoding that can translate a static
image into an accurate flow map. Second, we show the power of hallucinated flow
for recognition, successfully transferring the learned motion into a standard
two-stream network for activity recognition. On seven datasets, we demonstrate
the power of the approach. It not only achieves state-of-the-art accuracy for
dense optical flow prediction, but also consistently enhances recognition of
actions and dynamic scenes.
Comment: Published in CVPR 2018; project page:
http://vision.cs.utexas.edu/projects/im2flow
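The second contribution, feeding hallucinated flow into a standard two-stream network, amounts to late fusion of two classifiers. The sketch below averages class posteriors from an appearance stream and a (hallucinated-)flow stream; the logits, class count, and equal weighting are illustrative assumptions, not the paper's trained models.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

def two_stream_predict(appearance_logits, flow_logits, w=0.5):
    # Late fusion: average the class posteriors of the appearance stream
    # and the stream fed with hallucinated optical flow.
    p = w * softmax(appearance_logits) + (1 - w) * softmax(flow_logits)
    return int(np.argmax(p))

# Hypothetical logits over three action classes; the flow stream's confidence
# in class 1 overrides the appearance stream's weaker preference for class 0.
cls = two_stream_predict(np.array([2.0, 1.0, 0.5]),
                         np.array([0.5, 2.5, 0.0]))
```

Late fusion is what lets the hallucinated flow plug into an existing two-stream recognizer unchanged: only the input to the motion stream is replaced, not the architecture.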