Keep it SMPL: Automatic Estimation of 3D Human Pose and Shape from a Single Image
We describe the first method to automatically estimate the 3D pose of the
human body as well as its 3D shape from a single unconstrained image. We
estimate a full 3D mesh and show that 2D joints alone carry a surprising amount
of information about body shape. The problem is challenging because of the
complexity of the human body, articulation, occlusion, clothing, lighting, and
the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a
recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D
body joint locations. We then fit (top-down) a recently published statistical
body shape model, called SMPL, to the 2D joints. We do so by minimizing an
objective function that penalizes the error between the projected 3D model
joints and detected 2D joints. Because SMPL captures correlations in human
shape across the population, we are able to robustly fit it to very little
data. We further leverage the 3D model to prevent solutions that cause
interpenetration. We evaluate our method, SMPLify, on the Leeds Sports,
HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect
to the state of the art.
Comment: To appear in ECCV 2016
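The fitting step described above, minimizing reprojection error between projected 3D model joints and detected 2D joints, can be sketched with a toy linear stand-in for SMPL. Everything below (the random basis, the joint count, the orthographic projection) is an illustrative assumption, not the paper's actual model, which uses articulated kinematics, a learned shape space, and additional prior terms:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for SMPL: 3D joint positions as a linear function of a small
# parameter vector theta (the real model is an articulated, learned mesh model)
rng = np.random.default_rng(0)
basis = rng.normal(size=(3, 14, 3))            # 3 params -> 14 joints in 3D
theta_true = np.array([0.5, -0.3, 0.8])

def joints_3d(theta):
    return np.tensordot(theta, basis, axes=1)  # (14, 3)

def project(j3d):
    return j3d[:, :2]                          # orthographic projection

detected_2d = project(joints_3d(theta_true))   # simulated 2D joint detections

def residuals(theta):
    # Reprojection error: projected model joints vs. detected 2D joints
    return (project(joints_3d(theta)) - detected_2d).ravel()

fit = least_squares(residuals, x0=np.zeros(3))
```

SMPLify's actual objective adds pose and shape priors plus the interpenetration term to this data term; only the data term is sketched here.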
Neural Body Fitting: Unifying Deep Learning and Model-Based Human Pose and Shape Estimation
Direct prediction of 3D body pose and shape remains a challenge even for
highly parameterized deep learning models. Mapping from the 2D image space to
the prediction space is difficult: perspective ambiguities make the loss
function noisy and training data is scarce. In this paper, we propose a novel
approach (Neural Body Fitting (NBF)). It integrates a statistical body model
within a CNN, leveraging reliable bottom-up semantic body part segmentation and
robust top-down body model constraints. NBF is fully differentiable and can be
trained using 2D and 3D annotations. In detailed experiments, we analyze how
the components of our model affect performance, especially the use of part
segmentations as an explicit intermediate representation, and present a robust,
efficiently trainable framework for 3D human pose estimation from 2D images
with competitive results on standard benchmarks. Code will be made available at
http://github.com/mohomran/neural_body_fitting
Comment: 3DV 2018
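The mixed 2D/3D supervision described above can be illustrated with a minimal, hypothetical loss in which a toy linear "body model" layer stands in for the statistical model inside the CNN. All names, shapes, and weights below are assumptions for illustration:

```python
import numpy as np

# Toy "body model" layer: maps predicted parameters to 3D joints. In NBF this
# role is played by a differentiable statistical body model inside the network.
rng = np.random.default_rng(1)
basis = rng.normal(size=(5, 14, 3))            # 5 params -> 14 joints in 3D

def body_model(params):
    return np.tensordot(params, basis, axes=1)  # (14, 3)

def loss(params, joints2d=None, joints3d=None, w2d=1.0, w3d=1.0):
    # Supervision can come from 2D annotations, 3D annotations, or both
    j3d = body_model(params)
    total = 0.0
    if joints2d is not None:                    # 2D term (orthographic proj.)
        total += w2d * np.mean((j3d[:, :2] - joints2d) ** 2)
    if joints3d is not None:                    # 3D term when labels exist
        total += w3d * np.mean((j3d - joints3d) ** 2)
    return total
```

Because the body-model layer is differentiable, gradients of such a loss can flow back into the parameter-predicting network, which is the property the abstract highlights.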
Indirect deep structured learning for 3D human body shape and pose prediction
In this paper we present a novel method for 3D human body shape and pose prediction. Our work is motivated by the need to reduce our reliance on costly-to-obtain ground-truth labels. To achieve this, we propose training an encoder-decoder network using a two-step procedure. During the first step, a decoder is trained to predict a body silhouette using SMPL (a statistical body shape model) parameters as an input. During the second step, the whole network is trained on real image and corresponding silhouette pairs while the decoder is kept fixed. Such a procedure allows for indirect learning of body shape and pose parameters from real images without requiring any ground-truth parameter data.
Our key contributions include: (a) a novel encoder-decoder architecture for 3D body shape and pose prediction, (b) a corresponding training procedure, and (c) a quantitative and qualitative analysis of the proposed method on artificial and real image datasets.
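As a sanity check on the idea, the two-step procedure can be reduced to a linear toy problem in which the frozen "decoder" and the learned "encoder" are plain matrices. All names, shapes, and the linearity assumption below are illustrative, not the paper's architecture:

```python
import numpy as np

# Step 1 stand-in: a (pre-trained, then frozen) linear "decoder" D mapping
# body-model parameters to a silhouette vector.
rng = np.random.default_rng(4)
D = rng.normal(size=(50, 3))                   # params (3) -> silhouette (50)

# Synthetic "real" data: images paired with silhouettes only; the ground-truth
# parameters are never shown to the learner. For simplicity the image equals
# its silhouette here.
params_hidden = rng.normal(size=(100, 3))
silhouettes = params_hidden @ D.T              # (100, 50)
images = silhouettes.copy()

# Step 2: with the decoder frozen, fit a linear "encoder" E so that
# decode(encode(image)) reproduces the paired silhouette. For a linear model
# this collapses to two least-squares solves.
params_fit = np.linalg.lstsq(D, silhouettes.T, rcond=None)[0].T   # (100, 3)
E = np.linalg.lstsq(images, params_fit, rcond=None)[0].T          # (3, 50)

# The encoder now recovers body parameters from images -- learned indirectly,
# without any parameter labels.
recovered = images @ E.T
```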
PLIKS: A Pseudo-Linear Inverse Kinematic Solver for 3D Human Body Estimation
We consider the problem of reconstructing a 3D mesh of the human body from a
single 2D image as a model-in-the-loop optimization problem. Existing
approaches often regress the shape, pose, and translation parameters of a
parametric statistical model assuming a weak-perspective camera. In contrast,
we first estimate 2D pixel-aligned vertices in image space and propose PLIKS
(Pseudo-Linear Inverse Kinematic Solver) to regress the model parameters by
minimizing a linear least squares problem. PLIKS is a linearized formulation of
the parametric SMPL model, which provides an optimal pose and shape solution
from an adequate initialization. Our method is based on analytically
calculating an initial pose estimate from the network predicted 3D mesh
followed by PLIKS to obtain an optimal solution for the given constraints. As
our framework makes use of 2D pixel-aligned maps, it is inherently robust to
partial occlusion. To demonstrate the performance of the proposed approach, we
present quantitative evaluations which confirm that PLIKS achieves more
accurate reconstruction, with more than a 10% improvement over other
state-of-the-art methods on standard 3D human pose and shape benchmarks,
as well as a 12.9 mm reconstruction-error improvement on the newer AGORA
dataset.
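The core idea, that once the model is linearized around an initial estimate the parameters fall out of an ordinary linear least-squares solve, can be sketched as follows. The Jacobian, offsets, and sizes are toy stand-ins, not the actual SMPL linearization:

```python
import numpy as np

# Toy linearization: mesh vertices depend (locally) linearly on the model
# parameters, v(p) ~= v0 + J @ p, as in a first-order expansion around an
# initial estimate.
rng = np.random.default_rng(2)
n_verts, n_params = 20, 6
J = rng.normal(size=(n_verts * 3, n_params))   # Jacobian of vertices w.r.t. params
v0 = rng.normal(size=n_verts * 3)              # vertices at the initial estimate
params_true = rng.normal(size=n_params)
target = v0 + J @ params_true                  # "network-predicted" vertices

# A single linear least-squares solve recovers the parameters
params, *_ = np.linalg.lstsq(J, target - v0, rcond=None)
```

This is the appeal of a pseudo-linear formulation: given an adequate initialization, no iterative nonlinear optimization is needed for the final solve.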
Human Shape Estimation using Statistical Body Models
Human body estimation methods transform real-world observations into predictions about human body state. These estimation methods benefit a variety of health, entertainment, clothing, and ergonomics applications. State may include pose, overall body shape, and appearance.
Body state estimation is underconstrained by observations; ambiguity presents itself both in the form of missing data within observations, and also in the form of unknown correspondences between observations. We address this challenge with the use of a statistical body model: a data-driven virtual human. This helps resolve ambiguity in two ways. First, it fills in missing data, meaning that incomplete observations still result in complete shape estimates. Second, the model provides a statistically-motivated penalty for unlikely states, which enables more plausible body shape estimates.
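The "statistically-motivated penalty for unlikely states" mentioned above can be sketched for a PCA-style body model, where shape coefficients are approximately zero-mean Gaussian and their squared Mahalanobis distance penalizes implausible shapes. The component variances below are toy values, not those of any trained model:

```python
import numpy as np

# Per-component standard deviations of the shape space (toy values); a real
# statistical body model learns these from a corpus of body scans.
sigma = np.array([2.0, 1.0, 0.5])

def shape_prior(beta):
    # Squared Mahalanobis distance under a diagonal Gaussian: small for
    # typical bodies, large for statistically unlikely ones.
    return np.sum((beta / sigma) ** 2)

common = shape_prior(np.array([0.1, 0.1, 0.1]))
unlikely = shape_prior(np.array([6.0, 3.0, 2.0]))
```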
Body state inference requires more than a body model; we therefore build observation models whose output is compared with real observations. In this thesis, body state is estimated from three types of observations: 3D motion capture markers, depth and color images, and high-resolution 3D scans. In each case, a forward process is proposed which simulates observations. By comparing observations to the results of the forward process, state can be adjusted to minimize the difference between simulated and observed data. We use gradient-based methods because they are critical to the precise estimation of state with a large number of parameters.
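The simulate-compare-adjust loop described here can be sketched with a linear forward process standing in for the real renderers and marker models; everything below is a toy illustration:

```python
import numpy as np

# Analysis by synthesis: simulate observations from state, compare with real
# observations, and adjust state by gradient descent on the squared error.
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 4))                   # toy linear forward process
state_true = rng.normal(size=4)
observed = A @ state_true                      # "real" observations

state = np.zeros(4)
lr = 0.9 / np.linalg.eigvalsh(A.T @ A).max()   # stable step size for this problem
for _ in range(2000):
    simulated = A @ state                      # run the forward process
    grad = 2 * A.T @ (simulated - observed)    # gradient of squared error
    state -= lr * grad
```

The thesis differentiates through far richer forward processes (e.g. rendering), but the loop structure, and the reason gradients matter when the state has many parameters, is the same.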
The contributions of this work are threefold. First, we propose a method for the estimation of body shape, nonrigid deformation, and pose from 3D markers. Second, we present a concise approach to differentiating through the rendering process, with application to body shape estimation. Finally, we present a statistical body model trained from human body scans, with state-of-the-art fidelity, good runtime performance, and compatibility with existing animation packages.