VIBE: Video Inference for Human Body Pose and Shape Estimation
Human motion is fundamental to understanding behavior. Despite progress on
single-image 3D pose and shape estimation, existing video-based
state-of-the-art methods fail to produce accurate and natural motion sequences
due to a lack of ground-truth 3D motion data for training. To address this
problem, we propose Video Inference for Body Pose and Shape Estimation (VIBE),
which makes use of an existing large-scale motion capture dataset (AMASS)
together with unpaired, in-the-wild, 2D keypoint annotations. Our key novelty
is an adversarial learning framework that leverages AMASS to discriminate
between real human motions and those produced by our temporal pose and shape
regression networks. We define a temporal network architecture and show that
adversarial training, at the sequence level, produces kinematically plausible
motion sequences without in-the-wild ground-truth 3D labels. We perform
extensive experimentation to analyze the importance of motion and demonstrate
the effectiveness of VIBE on challenging 3D pose estimation datasets, achieving
state-of-the-art performance. Code and pretrained models are available at
https://github.com/mkocabas/VIBE.
Comment: CVPR-2020 camera ready.
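The adversarial idea above can be made concrete with the discriminator and generator objectives over per-sequence realism scores. This is a minimal numpy sketch using a least-squares GAN formulation; the function name and the exact loss form are illustrative assumptions, not necessarily VIBE's precise objective:

```python
import numpy as np

def adversarial_losses(d_real, d_fake):
    """Least-squares adversarial objectives over sequence-level scores.

    d_real: discriminator scores on real AMASS motion sequences
    d_fake: discriminator scores on sequences produced by the
            temporal pose/shape regressor
    """
    # Discriminator: push real scores toward 1 and fake scores toward 0.
    disc_loss = np.mean((d_real - 1.0) ** 2) + np.mean(d_fake ** 2)
    # Generator (regressor): make its sequences look real to the critic.
    gen_loss = np.mean((d_fake - 1.0) ** 2)
    return disc_loss, gen_loss
```

Scoring whole sequences, rather than single frames, is what lets the critic penalize kinematically implausible motion such as jitter or foot skating.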
Learning 3D Human Pose from Structure and Motion
3D human pose estimation from a single image is a challenging problem,
especially for in-the-wild settings due to the lack of 3D annotated data. We
propose two anatomically inspired loss functions and use them with a
weakly-supervised learning framework to jointly learn from large-scale
in-the-wild 2D and indoor/synthetic 3D data. We also present a simple temporal
network that exploits temporal and structural cues present in predicted pose
sequences to temporally harmonize the pose estimations. We carefully analyze
the proposed contributions through loss surface visualizations and sensitivity
analysis to facilitate deeper understanding of their working mechanism. Our
complete pipeline improves the state-of-the-art by 11.8% and 12% on Human3.6M
and MPI-INF-3DHP, respectively, and runs at 30 FPS on a commodity graphics
card.
Comment: ECCV 2018. Project page: https://www.cse.iitb.ac.in/~rdabral/3DPose
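One example of an anatomically inspired loss of the kind described above is a left/right bone-symmetry penalty: mirrored limbs should have equal bone lengths. The sketch below is a hedged illustration of that idea in numpy; the function names and bone-pair indexing are assumptions for this example, not the paper's exact formulation:

```python
import numpy as np

def bone_lengths(joints, bones):
    """joints: (J, 3) array of 3D joint positions.
    bones: list of (parent, child) joint-index pairs."""
    return np.array([np.linalg.norm(joints[c] - joints[p]) for p, c in bones])

def symmetry_loss(joints, left_bones, right_bones):
    """Penalize length differences between mirrored limb pairs,
    e.g. left vs. right upper arm. Zero for a symmetric skeleton."""
    left = bone_lengths(joints, left_bones)
    right = bone_lengths(joints, right_bones)
    return float(np.mean(np.abs(left - right)))
```

Because such a loss needs no 3D ground truth, it can supervise predictions on in-the-wild images where only 2D annotations exist.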
Exploiting temporal information for 3D pose estimation
In this work, we address the problem of 3D human pose estimation from a
sequence of 2D human poses. Although the recent success of deep networks has
led many state-of-the-art methods for 3D pose estimation to train deep networks
end-to-end to predict from images directly, the top-performing approaches have
shown the effectiveness of dividing the task of 3D pose estimation into two
steps: using a state-of-the-art 2D pose estimator to estimate the 2D pose from
images and then mapping them into 3D space. They also showed that a
low-dimensional representation like 2D locations of a set of joints can be
discriminative enough to estimate 3D pose with high accuracy. However,
estimating 3D pose for individual frames leads to temporally incoherent
estimates, since the errors in each frame are independent and cause jitter. Therefore, in
this work we utilize the temporal information across a sequence of 2D joint
locations to estimate a sequence of 3D poses. We designed a
sequence-to-sequence network composed of layer-normalized LSTM units with
shortcut connections connecting the input to the output on the decoder side and
imposed a temporal smoothness constraint during training. We found that
exploiting temporal consistency improves the best reported result on the
Human3.6M dataset by approximately and helps our network to recover
temporally consistent 3D poses over a sequence of images even when the 2D pose
detector fails.
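A temporal smoothness constraint of the kind mentioned above can be written as a penalty on frame-to-frame differences of the predicted 3D joints. This is a minimal numpy sketch of one common first-order formulation; the function name and the squared-distance form are assumptions, not necessarily the paper's exact term:

```python
import numpy as np

def temporal_smoothness_loss(poses):
    """poses: (T, J, 3) array, a sequence of T predicted 3D poses
    with J joints each. Returns the mean squared frame-to-frame
    displacement; zero for a perfectly static sequence."""
    diffs = poses[1:] - poses[:-1]          # first-order temporal differences
    return float(np.mean(np.sum(diffs ** 2, axis=-1)))
```

Added to the training objective, this term discourages jitter between consecutive frames while leaving genuinely smooth motion almost unpenalized.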