HP-GAN: Probabilistic 3D human motion prediction via GAN
Predicting and understanding human motion dynamics has many applications,
such as motion synthesis, augmented reality, security, and autonomous vehicles.
Due to the recent success of generative adversarial networks (GANs), there has
been much interest in probabilistic estimation and synthetic data generation
using deep neural network architectures and learning algorithms.
We propose a novel sequence-to-sequence model for probabilistic human motion
prediction, trained with a modified version of improved Wasserstein generative
adversarial networks (WGAN-GP), in which we use a custom loss function designed
for human motion prediction. Our model, which we call HP-GAN, learns a
probability density function of future human poses conditioned on previous
poses. It predicts multiple sequences of possible future human poses, each from
the same input sequence but a different vector z drawn from a random
distribution. Furthermore, to quantify the quality of the non-deterministic
predictions, we simultaneously train a motion-quality-assessment model that
learns the probability that a given skeleton sequence is a real human motion.
We test our algorithm on two of the largest skeleton datasets: NTU RGB+D and
Human3.6M. We train our model on both single and multiple action types. Its
predictive power for long-term motion estimation is demonstrated by generating
multiple plausible futures of more than 30 frames from just 10 frames of input.
We show that most sequences generated from the same input have a greater than
50% probability of being judged as a real human sequence. We will release all
the code used in this paper on GitHub.
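The core idea above — one observed pose sequence, many sampled futures, each driven by a different noise vector z — can be sketched with a toy stand-in for the generator. The weights, dimensions, and linear mapping below are illustrative assumptions, not the HP-GAN architecture (which is a sequence-to-sequence network trained with a WGAN-GP objective); the sketch only shows the conditioning interface.

```python
import numpy as np

def generate_future(past_poses, z, W_past, W_z, W_out):
    """Toy generator: maps a past pose sequence plus a noise
    vector z to a future pose sequence. Hypothetical weights."""
    # Encode the observed sequence into a single context vector.
    context = past_poses.reshape(-1) @ W_past
    # Mix in z so that different draws yield different futures.
    hidden = np.tanh(context + z @ W_z)
    # Decode 30 future frames of 16 joints x 3 coordinates.
    return (hidden @ W_out).reshape(30, 16, 3)

rng = np.random.default_rng(0)
past = rng.standard_normal((10, 16, 3))            # 10 observed frames
W_past = rng.standard_normal((10 * 16 * 3, 64)) * 0.01
W_z = rng.standard_normal((32, 64)) * 0.1
W_out = rng.standard_normal((64, 30 * 16 * 3)) * 0.01

# Same past sequence, five different z -> five plausible futures.
futures = [generate_future(past, rng.standard_normal(32),
                           W_past, W_z, W_out)
           for _ in range(5)]
print(len(futures), futures[0].shape)
```

In the actual model, an adversarially trained critic (and the motion-quality-assessment network) would score each sampled future; here the point is only that diversity comes from varying z while the conditioning input stays fixed.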
Long-Term Human Video Generation of Multiple Futures Using Poses
Predicting future human behavior from an input human video is a useful task
for applications such as autonomous driving and robotics. While most previous
works predict a single future, multiple futures with different behavior can
potentially occur. Moreover, if the predicted future is too short (e.g., less
than one second), it may not be fully usable by a human or other systems. In
this paper, we propose a novel method for future human pose prediction capable
of predicting multiple long-term futures. This makes the predictions more
suitable for real applications. Also, from the input video and the predicted
human behavior, we generate future videos. First, from an input human video, we
generate sequences of future human poses (i.e., the image coordinates of their
body-joints) via adversarial learning. Adversarial learning suffers from mode
collapse, which makes it difficult to generate a variety of multiple poses. We
solve this problem by utilizing two additional inputs to the generator to make
the outputs diverse, namely, a latent code (to reflect various behaviors) and
an attraction point (to reflect various trajectories). In addition, we generate
long-term future human poses using a novel approach based on one-dimensional
(1D) convolutional neural networks. Finally, we generate an output video based on the
generated poses for visualization. We evaluate the generated future poses and
videos using three criteria (i.e., realism, diversity, and accuracy), and show
that our proposed method outperforms other state-of-the-art methods.
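The two diversity-inducing generator inputs described above — a latent code (varying the behavior) and an attraction point (varying the trajectory) — together with a 1D temporal convolution can be sketched as follows. Every weight, dimension, and the drift rule here is a labeled assumption for illustration, not the authors' network.

```python
import numpy as np

def temporal_conv(pose_seq, kernel):
    """Minimal 1D convolution over the time axis, the kind of
    one-dimensional CNN used for long-term pose prediction
    (hypothetical kernel, not the paper's architecture)."""
    T, J = pose_seq.shape
    out = np.zeros_like(pose_seq)
    for j in range(J):
        out[:, j] = np.convolve(pose_seq[:, j], kernel, mode="same")
    return out

def predict_poses(past, latent_code, attraction_point, kernel):
    """Toy generator taking the two extra inputs that fight mode
    collapse: the latent code perturbs the behavior, while the
    attraction point pulls the trajectory toward a target."""
    T, _ = past.shape
    # Linearly drift each frame toward the attraction point ...
    drift = np.linspace(0, 1, T)[:, None] * (attraction_point - past[-1])
    # ... and perturb with the latent code, then smooth in time.
    seq = past + drift + 0.1 * latent_code
    return temporal_conv(seq, kernel)

rng = np.random.default_rng(1)
past = rng.standard_normal((16, 28))       # 16 frames, 14 joints x 2 coords
kernel = np.array([0.25, 0.5, 0.25])       # simple temporal smoothing kernel

# Different latent codes and attraction points -> different futures.
a = predict_poses(past, rng.standard_normal(28), rng.standard_normal(28), kernel)
b = predict_poses(past, rng.standard_normal(28), rng.standard_normal(28), kernel)
print(a.shape, np.allclose(a, b))
```

The design point the sketch mirrors is that diversity is injected through explicit generator inputs rather than relying on adversarial noise alone, so varying either input changes the predicted sequence while the observed past stays fixed.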