DirectMHP: Direct 2D Multi-Person Head Pose Estimation with Full-range Angles
Existing head pose estimation (HPE) methods mainly focus on a single person with
a pre-detected frontal head, which limits their application in real, complex
scenarios involving multiple persons. We argue that these single-person HPE
methods are fragile and inefficient for Multi-Person Head Pose Estimation
(MPHPE), since they rely on a separately trained face detector that cannot
generalize well to full viewpoints, especially for heads with invisible face
areas. In this paper, we focus on the full-range MPHPE problem and propose a
simple, direct end-to-end baseline named DirectMHP. Due to the lack of datasets
applicable to full-range MPHPE, we first construct two benchmarks by extracting
ground-truth labels for head detection and head orientation from the public
datasets AGORA and CMU Panoptic. These benchmarks are rather challenging, as
they contain many truncated, occluded, tiny, and unevenly illuminated human
heads. Then, we design a novel end-to-end trainable one-stage network
architecture that jointly regresses the locations and orientations of multiple
heads to address the MPHPE problem. Specifically, we regard pose as an
auxiliary attribute of the head and append it to the traditional object
prediction. This flexible design accommodates arbitrary pose representations,
such as Euler angles.
We then jointly optimize these two tasks by sharing features and applying
appropriate multi-task losses. In this way, our method can implicitly exploit
more of the surrounding context to improve HPE accuracy while maintaining head
detection performance. We present comprehensive comparisons with
state-of-the-art single-person HPE methods on public benchmarks, as well as
superior baseline results on our constructed MPHPE datasets. Datasets and code
are released at https://github.com/hnuzhy/DirectMHP.
Comment: 13 pages
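The abstract describes appending head pose as an auxiliary attribute after the traditional object prediction. A minimal sketch of decoding such a per-head prediction vector, assuming a hypothetical layout (box, objectness, then normalized Euler angles) that is illustrative rather than DirectMHP's exact head definition:

```python
# Hypothetical per-head prediction vector:
#   [cx, cy, w, h, objectness, norm_yaw, norm_pitch, norm_roll]
# Angles are assumed normalized to [0, 1] and mapped back to
# full-range yaw (-180, 180) and pitch/roll (-90, 90) degrees.
def decode_head(pred):
    cx, cy, w, h, obj, ny, npitch, nroll = pred
    return {
        "box": (cx, cy, w, h),
        "conf": obj,
        "yaw": (ny - 0.5) * 360.0,      # full-range yaw
        "pitch": (npitch - 0.5) * 180.0,
        "roll": (nroll - 0.5) * 180.0,
    }

# A head facing 90 degrees to the side, level pitch and roll:
head = decode_head([0.5, 0.5, 0.2, 0.3, 0.9, 0.75, 0.5, 0.5])
```

Because the angles ride along in the same prediction vector as the box, the detection and pose branches can share features and be supervised jointly, as the abstract describes.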
DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation
This paper considers the task of articulated human pose estimation of
multiple people in real-world images. We propose an approach that jointly
solves the tasks of detection and pose estimation: it infers the number of
persons in a scene, identifies occluded body parts, and disambiguates body
parts between people in close proximity to each other. This joint formulation
is in contrast to previous strategies that address the problem by first
detecting people and subsequently estimating their body pose. We propose a
partitioning and labeling formulation of a set of body-part hypotheses
generated with CNN-based part detectors. Our formulation, an instance of an
integer linear program, implicitly performs non-maximum suppression on the set
of part candidates and groups them to form configurations of body parts
respecting geometric and appearance constraints. Experiments on four different
datasets demonstrate state-of-the-art results for both single-person and
multi-person pose estimation. Models and code are available at
http://pose.mpi-inf.mpg.de.
Comment: Accepted at the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2016)
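The partition-and-labeling idea can be illustrated on toy data. The sketch below brute-forces a tiny instance: each part candidate is assigned to a person cluster, and pairwise costs both merge near-duplicate candidates of the same part (implicit non-maximum suppression) and group nearby complementary parts into one person. The actual DeepCut formulation solves this as an integer linear program over far larger candidate sets, with additional variables for suppressing candidates outright; the candidates and cost values here are invented for illustration.

```python
import itertools
import math

# Toy part candidates: (part_type, x, y). Values are invented for illustration.
cands = [("head", 0.0, 0.0),
         ("head", 0.1, 0.1),   # near-duplicate of the first head
         ("neck", 0.0, 1.0),
         ("head", 5.0, 0.0),
         ("neck", 5.1, 1.0)]

def pair_cost(i, j):
    """Cost of placing candidates i and j in the same person cluster."""
    (ti, xi, yi), (tj, xj, yj) = cands[i], cands[j]
    d = math.hypot(xi - xj, yi - yj)
    if ti == tj:
        # Merging close duplicates of a part is cheap (implicit NMS);
        # two distant copies of the same part cannot belong to one person.
        return -5.0 if d < 0.5 else 10.0
    # Reward grouping nearby complementary parts into one person.
    return d - 2.0

def objective(assign):
    """Total pairwise cost of a cluster assignment (one person id per candidate)."""
    return sum(pair_cost(i, j)
               for i, j in itertools.combinations(range(len(cands)), 2)
               if assign[i] == assign[j])

# Brute-force search over all assignments (feasible only for tiny instances).
best = min(itertools.product(range(len(cands)), repeat=len(cands)), key=objective)
# The minimizer groups candidates 0, 1, 2 as one person and 3, 4 as another.
```

The brute-force search stands in for the ILP solver only to make the objective concrete; its minimizer jointly suppresses the duplicate head and partitions the remaining parts into two people, which is the behavior the abstract attributes to the joint formulation.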