Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image
We describe the first method to automatically estimate the 3D pose of the
human body as well as its 3D shape from a single unconstrained image. We
estimate a full 3D mesh and show that 2D joints alone carry a surprising amount
of information about body shape. The problem is challenging because of the
complexity of the human body, articulation, occlusion, clothing, lighting, and
the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a
recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D
body joint locations. We then fit (top-down) a recently published statistical
body shape model, called SMPL, to the 2D joints. We do so by minimizing an
objective function that penalizes the error between the projected 3D model
joints and detected 2D joints. Because SMPL captures correlations in human
shape across the population, we are able to robustly fit it to very little
data. We further leverage the 3D model to prevent solutions that cause
interpenetration. We evaluate our method, SMPLify, on the Leeds Sports,
HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect
to the state of the art.
Comment: To appear in ECCV 2016.
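The fitting step above minimizes a reprojection objective between the projected 3D model joints and the detected 2D joints. The following is a minimal sketch of such a joint term, assuming a weak-perspective camera and confidence-weighted squared error; the function names and the camera model are illustrative, not the paper's actual SMPLify objective (which uses a robust penalty and additional pose/shape priors):

```python
import numpy as np

def project(joints_3d, scale, trans):
    """Weak-perspective projection of 3D joints to the image plane
    (assumed camera model: drop depth, then scale and translate)."""
    return scale * joints_3d[:, :2] + trans

def reprojection_error(joints_3d, joints_2d, conf, scale, trans):
    """Confidence-weighted sum of squared 2D residuals, in the spirit of
    the joint term of a 2D-to-3D fitting objective (simplified sketch)."""
    residual = project(joints_3d, scale, trans) - joints_2d
    return float(np.sum(conf * np.sum(residual ** 2, axis=1)))
```

In practice this scalar would be handed to a nonlinear least-squares optimizer over the model's pose, shape, and camera parameters; detector confidences downweight occluded or uncertain joints.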
AVA: A Video Dataset of Spatio-temporally Localized Atomic Visual Actions
This paper introduces a video dataset of spatio-temporally localized Atomic
Visual Actions (AVA). The AVA dataset densely annotates 80 atomic visual
actions in 430 15-minute video clips, where actions are localized in space and
time, resulting in 1.58M action labels with multiple labels per person
occurring frequently. The key characteristics of our dataset are: (1) the
definition of atomic visual actions, rather than composite actions; (2) precise
spatio-temporal annotations with possibly multiple annotations for each person;
(3) exhaustive annotation of these atomic actions over 15-minute video clips;
(4) people temporally linked across consecutive segments; and (5) using movies
to gather a varied set of action representations. This departs from existing
datasets for spatio-temporal action recognition, which typically provide sparse
annotations for composite actions in short video clips. We will release the
dataset publicly.
AVA, with its realistic scene and action complexity, exposes the intrinsic
difficulty of action recognition. To benchmark this, we present a novel
approach for action localization that builds upon the current state-of-the-art
methods, and demonstrates better performance on JHMDB and UCF101-24 categories.
While setting a new state of the art on existing datasets, the overall results
on AVA are low at 15.6% mAP, underscoring the need for developing new
approaches for video understanding.
Comment: To appear in CVPR 2018. See the dataset page
https://research.google.com/ava/ for details.
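The headline number above is mean average precision (mAP) over action classes, where a detection counts as correct if it matches a ground-truth box (typically at some IoU threshold). As a point of reference, here is a minimal average-precision computation for one class over ranked detections; it is a generic sketch, not the AVA benchmark's exact evaluation code:

```python
def average_precision(scores, correct):
    """AP for one class: mean of precision values at each recalled positive.
    `correct[i]` is 1 if detection i matches a ground-truth instance
    (e.g. spatio-temporal IoU >= 0.5), else 0. Minimal sketch."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    total_pos = sum(correct)
    for rank, i in enumerate(order, start=1):
        if correct[i]:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / total_pos if total_pos else 0.0
```

mAP is then the mean of this quantity over all action classes; with 80 atomic actions and heavy multi-label overlap, a 15.6% mAP leaves substantial headroom.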
Learning from Synthetic Humans
Estimating human pose, shape, and motion from images and videos is a
fundamental challenge with many applications. Recent advances in 2D human pose
estimation use large amounts of manually-labeled training data for learning
convolutional neural networks (CNNs). Such data is time-consuming to acquire
and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion
is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL
tasks): a new large-scale dataset with synthetically-generated but realistic
images of people rendered from 3D sequences of human motion capture data. We
generate more than 6 million frames together with ground truth pose, depth
maps, and segmentation masks. We show that CNNs trained on our synthetic
dataset allow for accurate human depth estimation and human part segmentation
in real RGB images. Our results and the new dataset open up new possibilities
for advancing person analysis using cheap and large-scale synthetic data.
Comment: Appears in: 2017 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2017). 9 pages.
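A standard way to report the part-segmentation results mentioned above is per-part intersection-over-union between predicted and ground-truth label maps. The snippet below is a minimal evaluation sketch under that assumption (label 0 = background); the exact protocol in the paper may differ:

```python
import numpy as np

def part_iou(pred, gt, num_parts):
    """Mean per-part IoU for body-part segmentation label maps.
    pred, gt: integer arrays of identical shape; 0 is background.
    Parts absent from both maps are skipped. Illustrative sketch."""
    ious = []
    for part in range(1, num_parts + 1):
        p, g = pred == part, gt == part
        union = np.logical_or(p, g).sum()
        if union:
            ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```

With synthetic data, the ground-truth label map comes for free from the renderer, which is what makes training at the scale of millions of frames feasible.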