Deep representation learning for human motion prediction and classification
Generative models of 3D human motion are often restricted to a small number
of activities and therefore cannot generalize well to novel movements or
applications. In this work we propose a deep learning framework for human
motion capture data that learns a generic representation from a large corpus of
motion capture data and generalizes well to new, unseen, motions. Using an
encoding-decoding network that learns to predict future 3D poses from the most
recent past, we extract a feature representation of human motion. Most work on
deep learning for sequence prediction focuses on video and speech. Since
skeletal data has a different structure, we present and evaluate different
network architectures that make different assumptions about time dependencies
and limb correlations. To quantify the learned features, we use the output of
different layers for action classification and visualize the receptive fields
of the network units. Our method outperforms the recent state of the art in
skeletal motion prediction, even though those methods use action-specific training data.
Our results show that deep feedforward networks, trained from a generic mocap
database, can successfully be used for feature extraction from human motion
data and that this representation can be used as a foundation for
classification and prediction.
Comment: This paper is published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 201
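As an illustration of the encoder-decoder idea described in this abstract, below is a minimal sketch, not the authors' code: a feedforward encoder-decoder over flattened joint coordinates is trained to predict future poses from a window of past poses, and the bottleneck activations serve as the generic motion feature. Joint count, window lengths, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_JOINTS = 25          # assumed skeleton size (not specified in the abstract)
PAST, FUTURE = 30, 30  # assumed context / prediction horizons in frames

class MotionEncoderDecoder(nn.Module):
    """Predict future 3D poses from recent past poses; the bottleneck
    activations double as a generic motion feature."""
    def __init__(self, feat_dim=256):
        super().__init__()
        in_dim = PAST * N_JOINTS * 3
        out_dim = FUTURE * N_JOINTS * 3
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, out_dim),
        )

    def forward(self, past_poses):
        # past_poses: (batch, PAST, N_JOINTS, 3)
        feat = self.encoder(past_poses.flatten(1))
        future = self.decoder(feat)
        return future.view(-1, FUTURE, N_JOINTS, 3), feat

model = MotionEncoderDecoder()
past = torch.randn(8, PAST, N_JOINTS, 3)
target = torch.randn(8, FUTURE, N_JOINTS, 3)
pred, feat = model(past)
loss = nn.functional.mse_loss(pred, target)   # future-pose prediction objective
# `feat` (batch, 256) can then be fed to a separate action classifier.
```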
Adversarial PoseNet: A Structure-aware Convolutional Network for Human Pose Estimation
For human pose estimation in monocular images, joint occlusions and
overlapping body parts often result in deviated pose predictions. Under
these circumstances, biologically implausible pose predictions may be produced.
In contrast, human vision is able to predict poses by exploiting geometric
constraints of joint inter-connectivity. To address the problem by
incorporating priors about the structure of human bodies, we propose a novel
structure-aware convolutional network to implicitly take such priors into
account during training of the deep network. Explicit learning of such
constraints is typically challenging. Instead, we design discriminators to
distinguish the real poses from the fake ones (such as biologically implausible
ones). If the pose generator (G) generates results that the discriminator fails
to distinguish from real ones, the network successfully learns the priors.
Comment: Fixed typos. 14 pages. Demonstration videos are
http://v.qq.com/x/page/c039862eira.html,
http://v.qq.com/x/page/f0398zcvkl5.html,
http://v.qq.com/x/page/w0398ei9m1r.htm
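The following is a rough sketch of the adversarial idea only, not the paper's exact multi-discriminator architecture: a pose discriminator D is trained to separate ground-truth keypoints from generated ones, and the generator is rewarded when D labels its predictions as real, which pushes it toward biologically plausible poses. The keypoint count and network sizes are assumptions.

```python
import torch
import torch.nn as nn

N_KEYPOINTS = 16  # assumed number of body joints

class PoseDiscriminator(nn.Module):
    """Scores whether a set of 2D keypoints looks like a real human pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_KEYPOINTS * 2, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit: real pose vs. generated pose
        )

    def forward(self, pose):
        return self.net(pose.flatten(1))

def adversarial_losses(D, real_pose, fake_pose):
    """D learns to separate real from generated poses; the generator is
    rewarded when D labels its output as real."""
    bce = nn.functional.binary_cross_entropy_with_logits
    ones = torch.ones(real_pose.size(0), 1)
    zeros = torch.zeros(real_pose.size(0), 1)
    d_loss = bce(D(real_pose), ones) + bce(D(fake_pose.detach()), zeros)
    g_loss = bce(D(fake_pose), ones)
    return d_loss, g_loss

D = PoseDiscriminator()
real = torch.rand(4, N_KEYPOINTS, 2)   # ground-truth keypoints
fake = torch.rand(4, N_KEYPOINTS, 2)   # stand-in for the pose generator's output
d_loss, g_loss = adversarial_losses(D, real, fake)
```

In practice the generator's usual regression loss on keypoint locations is kept, and g_loss is added as a structural prior term.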
Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration
We propose a technique for multi-task learning from demonstration that trains
the controller of a low-cost robotic arm to accomplish several complex picking
and placing tasks, as well as non-prehensile manipulation. The controller is a
recurrent neural network using raw images as input and generating robot arm
trajectories, with the parameters shared across the tasks. The controller also
combines VAE-GAN-based reconstruction with autoregressive multimodal action
prediction. Our results demonstrate that it is possible to learn complex
manipulation tasks, such as picking up a towel, wiping an object, and
returning the towel to its previous position, entirely from raw images with
direct behavior cloning. We show that weight sharing and reconstruction-based
regularization substantially improve generalization and robustness, and
that training on multiple tasks simultaneously increases the success rate on all
tasks.
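As a concrete illustration of the controller structure described in this abstract, here is a minimal sketch under assumed module names and sizes, omitting the VAE-GAN reconstruction and autoregressive multimodal action prediction for brevity: a CNN encodes each camera frame, an LSTM shared across all tasks integrates the sequence, and a head emits joint commands trained by behavior cloning on demonstrations.

```python
import torch
import torch.nn as nn

N_JOINTS = 6  # assumed arm degrees of freedom

class VisuomotorController(nn.Module):
    """Map a sequence of raw images to a sequence of joint commands."""
    def __init__(self, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),  # -> 32 * 4 * 4 = 512
        )
        self.rnn = nn.LSTM(512, hidden, batch_first=True)  # weights shared across tasks
        self.action_head = nn.Linear(hidden, N_JOINTS)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) raw images from the demonstrations
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(feats)
        return self.action_head(h)  # (batch, time, N_JOINTS) joint commands

controller = VisuomotorController()
demo_frames = torch.randn(2, 10, 3, 64, 64)
demo_actions = torch.randn(2, 10, N_JOINTS)
pred_actions = controller(demo_frames)
bc_loss = nn.functional.mse_loss(pred_actions, demo_actions)  # behavior cloning
```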