Robust Estimation of 3D Human Poses from a Single Image
Human pose estimation is a key step to action recognition. We propose a
method of estimating 3D human poses from a single image, which works in
conjunction with an existing 2D pose/joint detector. 3D pose estimation is
challenging because multiple 3D poses may correspond to the same 2D pose after
projection due to the lack of depth information. Moreover, current 2D pose
estimators are usually inaccurate which may cause errors in the 3D estimation.
We address the challenges in three ways: (i) We represent a 3D pose as a linear
combination of a sparse set of bases learned from 3D human skeletons. (ii) We
enforce limb length constraints to eliminate anthropomorphically implausible
skeletons. (iii) We estimate a 3D pose by minimizing the L1-norm error
between the projection of the 3D pose and the corresponding 2D detection. The
L1-norm loss term is robust to inaccurate 2D joint estimations. We use the
alternating direction method (ADM) to solve the optimization problem
efficiently. Our approach outperforms the state of the art on three benchmark
datasets.
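The objective the abstract describes (a sparse basis combination scored by L1 reprojection error) can be sketched numerically. This is an illustrative toy, not the authors' code; the array shapes, the weak-perspective projection, and the function name are all assumptions.

```python
import numpy as np

def l1_projection_error(coeffs, bases, projection, joints_2d):
    """L1-norm reprojection error of a basis-coefficient 3D pose.

    coeffs:     (K,)      weights over the basis skeletons
    bases:      (K, 3, J) learned basis skeletons (J joints)
    projection: (2, 3)    weak-perspective camera matrix (assumption)
    joints_2d:  (2, J)    detected 2D joints
    """
    pose_3d = np.tensordot(coeffs, bases, axes=1)   # (3, J) linear combination
    reprojected = projection @ pose_3d              # (2, J) projection to image
    return np.abs(reprojected - joints_2d).sum()    # robust L1 error

# Tiny example: one basis skeleton with two joints
bases = np.zeros((1, 3, 2))
bases[0, :2, :] = np.eye(2)
P = np.array([[1., 0., 0.], [0., 1., 0.]])
joints = np.array([[1., 0.], [0., 1.]])
err_fit = l1_projection_error(np.array([1.0]), bases, P, joints)
err_off = l1_projection_error(np.array([1.0]), bases, P, joints + 1.0)
```

The L1 sum grows linearly with each joint's displacement, which is why a few badly detected 2D joints do not dominate the fit the way a squared-error loss would.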
Learning a Disentangled Embedding for Monocular 3D Shape Retrieval and Pose Estimation
We propose a novel approach to jointly perform 3D shape retrieval and pose
estimation from monocular images. In order to make the method robust to
real-world image variations, e.g. complex textures and backgrounds, we learn an
embedding space from 3D data that only includes the relevant information,
namely the shape and pose. Our approach explicitly disentangles a shape vector
and a pose vector, which alleviates both pose bias for 3D shape retrieval and
categorical bias for pose estimation. We then train a CNN to map images into
this embedding space, from which we retrieve the closest 3D shape in the
database and estimate the 6D pose of the object. Our method achieves a median
error of 10.3 for pose estimation and a top-1 accuracy of 0.592 for
category-agnostic 3D object retrieval on the Pascal3D+ dataset, outperforming
the previous state-of-the-art methods on both tasks.
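Because the shape vector is disentangled from pose, the retrieval step the abstract mentions reduces to a nearest-neighbor lookup over shape vectors alone. A hedged toy sketch, with stand-in vectors rather than the paper's learned embeddings:

```python
import numpy as np

def retrieve_shape(query_shape, database):
    """Return the index of the database shape vector closest (L2) to the query.

    query_shape: (D,)   shape embedding predicted by the CNN (assumption)
    database:    (N, D) precomputed shape embeddings of the 3D models
    """
    dists = np.linalg.norm(database - query_shape, axis=1)
    return int(np.argmin(dists))

# Toy database of three shape embeddings
db = np.array([[1., 0., 0.],
               [0., 1., 0.],
               [0., 0., 1.]])
idx = retrieve_shape(np.array([0.1, 0.9, 0.0]), db)
```

Matching only on the shape part is what removes the pose bias: two renderings of the same object under different viewpoints map to (ideally) the same shape vector, so they retrieve the same model.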
Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB
We propose a new single-shot method for multi-person 3D pose estimation in
general scenes from a monocular RGB camera. Our approach uses novel
occlusion-robust pose-maps (ORPM) which enable full body pose inference even
under strong partial occlusions by other people and objects in the scene. ORPM
outputs a fixed number of maps which encode the 3D joint locations of all
people in the scene. Body part associations allow us to infer 3D pose for an
arbitrary number of people without explicit bounding box prediction. To train
our approach we introduce MuCo-3DHP, the first large scale training data set
showing real images of sophisticated multi-person interactions and occlusions.
We synthesize a large corpus of multi-person images by compositing images of
individual people (with ground truth from multi-view performance capture). We
evaluate our method on our new challenging 3D annotated multi-person test set
MuPoTs-3D where we achieve state-of-the-art performance. To further stimulate
research in multi-person 3D pose estimation, we will make our new datasets and
associated code publicly available for research purposes. Comment: International Conference on 3D Vision (3DV), 201
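The fixed-size map readout the abstract describes can be illustrated with a toy sketch. Assumption throughout: the maps store per-joint x, y, z channels that are read out at a person's 2D body-part location; the actual ORPM encoding (with its redundant, occlusion-robust readout locations) is more involved.

```python
import numpy as np

def read_joint(pose_maps, u, v, joint_idx):
    """Read the 3D location of one joint from stacked pose maps.

    pose_maps: (3*J, H, W) maps; channels 3k..3k+2 hold joint k's x, y, z
    (u, v):    2D pixel location of the person's corresponding body part
    """
    x = pose_maps[3 * joint_idx + 0, v, u]
    y = pose_maps[3 * joint_idx + 1, v, u]
    z = pose_maps[3 * joint_idx + 2, v, u]
    return np.array([x, y, z])

# Toy maps for J=2 joints on a 4x4 grid; joint 1 encoded at pixel (u=1, v=2)
maps = np.zeros((6, 4, 4))
maps[3:, 2, 1] = [0.5, -0.2, 3.0]
j = read_joint(maps, u=1, v=2, joint_idx=1)
```

The key property is that the number of map channels is fixed regardless of how many people are present; only the readout locations scale with the person count, which is why no bounding-box prediction is needed.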
Robust extended Kalman filtering for camera pose tracking using 2D to 3D lines correspondences
In this paper we present a new robust camera pose estimation approach based on 3D line tracking. We use an Extended Kalman Filter (EKF) to incrementally update the camera pose in real time. The principal contributions of our method are, first, an extension of the RANSAC scheme into a robust matching algorithm that associates 2D edges in the image with the 3D line segments of the input model, and, second, a new framework for camera pose estimation using 2D-3D straight-line correspondences within an EKF. Experimental results on real image sequences are presented to evaluate the performance and feasibility of the proposed approach.
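The incremental correction step of an EKF can be sketched generically. This is the textbook measurement update, not the authors' exact line-based formulation: the measurement function h, its Jacobian H, and the noise covariance R below are illustrative placeholders for the 2D-3D line residual the paper uses.

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """One EKF measurement update.

    x: state estimate (here it would hold the camera pose)
    P: state covariance
    z: measurement (e.g. observed 2D line parameters)
    h: measurement function, H: its Jacobian at x, R: measurement noise
    """
    y = z - h(x)                              # innovation
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x_new = x + K @ y                         # corrected state
    P_new = (np.eye(len(x)) - K @ H) @ P      # corrected covariance
    return x_new, P_new

# Toy example: a directly observed 2D state
x = np.zeros(2)
P = np.eye(2)
H = np.eye(2)
R = 0.1 * np.eye(2)
z = np.array([1.0, -1.0])
x_new, P_new = ekf_update(x, P, z, lambda s: H @ s, H, R)
```

Per frame, the tracker would run this update once per matched line after the RANSAC step has discarded outlier 2D-3D associations, so gross mismatches never enter the filter.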
3D-Aware Neural Body Fitting for Occlusion Robust 3D Human Pose Estimation
Regression-based methods for 3D human pose estimation directly predict the 3D
pose parameters from a 2D image using deep networks. While achieving
state-of-the-art performance on standard benchmarks, their performance degrades
under occlusion. In contrast, optimization-based methods fit a parametric body
model to 2D features in an iterative manner. The localized reconstruction loss
can potentially make them robust to occlusion, but they suffer from the 2D-3D
ambiguity.
Motivated by the recent success of generative models in rigid object pose
estimation, we propose 3D-aware Neural Body Fitting (3DNBF), an approximate
analysis-by-synthesis approach to 3D human pose estimation with
state-of-the-art performance and occlusion robustness. In particular, we
propose a generative
model of deep features based on a volumetric human representation with Gaussian
ellipsoidal kernels emitting 3D pose-dependent feature vectors. The neural
features are trained with contrastive learning to become 3D-aware and hence to
overcome the 2D-3D ambiguity.
Experiments show that 3DNBF outperforms other approaches on both occluded and
standard benchmarks. Code is available at https://github.com/edz-o/3DNBF. Comment: ICCV 2023, project page: https://3dnbf.github.io
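The contrastive training the abstract mentions, where features of the same 3D point are pulled together and others pushed apart, can be illustrated with a generic InfoNCE-style loss. This softmax form is a common choice for contrastive feature learning, not necessarily the paper's exact objective.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Generic InfoNCE loss for one anchor feature vector.

    Low loss when the anchor is most similar to its positive;
    high loss when a negative is more similar instead.
    """
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[0])                      # positive should win the softmax

anchor = np.array([1.0, 0.0])
loss_easy = info_nce(anchor, anchor, [np.array([0.0, 1.0])])        # low
loss_hard = info_nce(anchor, np.array([0.0, 1.0]), [anchor])        # high
```

Training features this way is what makes them 3D-aware: a feature must identify which 3D surface point it came from, not merely which 2D pixel, which is exactly the information needed to break the 2D-3D ambiguity.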
Neural Body Fitting: Unifying Deep Learning and Model-Based Human Pose and Shape Estimation
Direct prediction of 3D body pose and shape remains a challenge even for
highly parameterized deep learning models. Mapping from the 2D image space to
the prediction space is difficult: perspective ambiguities make the loss
function noisy and training data is scarce. In this paper, we propose a novel
approach (Neural Body Fitting (NBF)). It integrates a statistical body model
within a CNN, leveraging reliable bottom-up semantic body part segmentation and
robust top-down body model constraints. NBF is fully differentiable and can be
trained using 2D and 3D annotations. In detailed experiments, we analyze how
the components of our model affect performance, especially the use of part
segmentations as an explicit intermediate representation, and present a robust,
efficiently trainable framework for 3D human pose estimation from 2D images
with competitive results on standard benchmarks. Code will be made available at
http://github.com/mohomran/neural_body_fitting. Comment: 3DV 201
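The key property the abstract relies on, a differentiable body model inside the network so that 2D and 3D supervision can be mixed, can be sketched with a toy linear "body model". Everything here is an illustrative assumption; the actual NBF pipeline uses a statistical body model fed by part segmentations.

```python
import numpy as np

def body_model(theta, J_regressor):
    """Toy linear 'body model': pose/shape parameters -> (J, 3) joints."""
    return (J_regressor @ theta).reshape(-1, 3)

def mixed_loss(theta, J_regressor, joints_3d, joints_2d, proj):
    """Combine a 3D joint loss with a 2D reprojection loss.

    Because body_model is differentiable in theta, gradients from both
    terms can flow back into a network that predicts theta.
    """
    joints = body_model(theta, J_regressor)
    loss_3d = np.sum((joints - joints_3d) ** 2)          # needs 3D labels
    loss_2d = np.sum((joints @ proj.T - joints_2d) ** 2)  # needs only 2D labels
    return loss_3d + loss_2d

# Sanity check: the loss is zero at the ground-truth parameters
rng = np.random.default_rng(0)
J_reg = rng.standard_normal((6, 4))          # 2 joints x 3 dims, 4 parameters
theta = rng.standard_normal(4)
gt_3d = body_model(theta, J_reg)
P = np.array([[1., 0., 0.], [0., 1., 0.]])
gt_2d = gt_3d @ P.T
loss = mixed_loss(theta, J_reg, gt_3d, gt_2d, P)
```

Mixing the two terms is what lets such a model train on abundant 2D annotations while scarce 3D annotations anchor the depth ambiguity.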