Generative 2D and 3D human pose estimation with vote distributions

By Jürgen Brauer, Wolfgang Hübner and Michael Arens


We address the problem of 2D and 3D human pose estimation using monocular camera information only. Generative approaches usually consist of two computationally demanding steps. First, different configurations of a complex 3D body model are projected into the image plane. Second, the projected synthetic person images are compared with images of real persons on a feature basis, such as silhouettes or edges. To lower the computational cost of generative models, we propose to use vote distributions for anatomical landmarks, generated by an Implicit Shape Model for each landmark. These vote distributions represent the image evidence in a more compact form and make a simple 3D stick-figure body model sufficient, since projected 3D marker points of the stick figure can be compared with vote locations directly at negligible computational cost. This allows us to evaluate nearly half a million 3D poses per second on standard hardware and thus to consider a huge set of 3D pose and configuration hypotheses in each frame. The approach is evaluated on the new Utrecht Multi-Person Motion (UMPM) benchmark, yielding an average joint angle reconstruction error of 8.0°.
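The core comparison step described in the abstract — projecting the 3D marker points of a stick-figure hypothesis and scoring them directly against landmark vote locations — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the pinhole camera model, the Gaussian vote kernel, and all function names and parameters are assumptions.

```python
import numpy as np

def project_points(points_3d, focal=500.0, center=(320.0, 240.0)):
    """Pinhole projection of 3D landmark positions (camera coordinates,
    z > 0) into the image plane. Camera intrinsics are assumed values."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * x / z + center[0]
    v = focal * y / z + center[1]
    return np.stack([u, v], axis=1)

def pose_score(points_3d, vote_lists, sigma=10.0):
    """Score a single 3D pose hypothesis: project each stick-figure
    marker and accumulate the support of the corresponding landmark's
    vote locations under a Gaussian kernel (an assumed choice of
    similarity measure). Higher scores mean better image support."""
    projected = project_points(points_3d)
    score = 0.0
    for marker_2d, votes in zip(projected, vote_lists):
        # Squared pixel distances from this marker to all of its votes.
        d2 = np.sum((votes - marker_2d) ** 2, axis=1)
        score += np.exp(-d2 / (2.0 * sigma ** 2)).sum()
    return score
```

Because each hypothesis only requires projecting a handful of marker points and a few distance computations per vote, scoring stays cheap enough to sweep very large hypothesis sets per frame, which is the efficiency argument the abstract makes.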

Year: 2012
DOI identifier: 10.1007/978-3-642-33179-4_45
Provided by: Fraunhofer-ePrints