xR-EgoPose: Egocentric 3D Human Pose from an HMD Camera
We present a new solution to egocentric 3D body pose estimation from monocular images captured from a downward-looking fish-eye camera installed on the rim of a head-mounted virtual reality device. This unusual viewpoint, just 2 cm away from the user's face, leads to images with unique visual appearance, characterized by severe self-occlusions and strong perspective distortions that result in a drastic difference in resolution between lower and upper body. Our contribution is two-fold. Firstly, we propose a new encoder-decoder architecture with a novel dual-branch decoder designed specifically to account for the varying uncertainty in the 2D joint locations. Our quantitative evaluation, both on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric pose estimation approaches. Our second contribution is a new large-scale photorealistic synthetic dataset - xR-EgoPose - offering 383K frames of high-quality renderings of people with a diversity of skin tones, body shapes, and clothing, in a variety of backgrounds and lighting conditions, performing a range of actions. Our experiments show that the high variability in our new synthetic training corpus leads to good generalization to real-world footage and to state-of-the-art results on real-world datasets with ground truth. Moreover, an evaluation on the Human3.6M benchmark shows that the performance of our method is on par with top-performing approaches on the more classic problem of 3D human pose from a third-person viewpoint.
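The dual-branch decoder described above predicts, alongside the 2D joint locations, a per-joint uncertainty, so that joints the camera sees poorly (e.g. the heavily foreshortened lower body) are down-weighted during training. A minimal sketch of that weighting idea, using a standard heteroscedastic (uncertainty-weighted) loss rather than the authors' actual implementation, and with all names and values below purely illustrative:

```python
import numpy as np

def weighted_joint_error(pred_2d, gt_2d, log_sigma):
    """Per-joint 2D error scaled by a predicted uncertainty.

    pred_2d, gt_2d: (J, 2) arrays of 2D joint locations.
    log_sigma:      (J,) predicted log standard deviations, one per joint.
    """
    sq_err = np.sum((pred_2d - gt_2d) ** 2, axis=1)  # (J,) squared errors
    # Heteroscedastic weighting: confident joints (small sigma) are penalized
    # more for errors; uncertain joints contribute less, while the log_sigma
    # term stops the network from declaring everything uncertain.
    return np.mean(np.exp(-log_sigma) * sq_err + log_sigma)

# Toy example: the second joint is both noisier and flagged as uncertain.
pred = np.array([[0.1, 0.2], [0.5, 0.9]])
gt = np.array([[0.1, 0.25], [0.4, 0.7]])
log_sigma = np.array([0.0, 1.0])
loss = weighted_joint_error(pred, gt, log_sigma)  # scalar training signal
```

This is only one common way to let a second decoder branch express per-joint confidence; the paper's exact loss and branch design may differ.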
SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera
We present a solution to egocentric 3D body pose estimation from monocular
images captured from downward-looking fish-eye cameras installed on the rim of
a head-mounted VR device. This unusual viewpoint leads to images with unique
visual appearance, with severe self-occlusions and perspective distortions that
result in drastic differences in resolution between lower and upper body. We
propose an encoder-decoder architecture with a novel multi-branch decoder
designed to account for the varying uncertainty in 2D predictions. The
quantitative evaluation, on synthetic and real-world datasets, shows that our
strategy leads to substantial improvements in accuracy over state-of-the-art
egocentric approaches. To tackle the lack of labelled data we also introduce a
large photo-realistic synthetic dataset. xR-EgoPose offers high-quality
renderings of people with diverse skin tones, body shapes, and clothing,
performing a range of actions. Our experiments show that the high variability
in our new synthetic training corpus leads to good generalization to real-world
footage and to state-of-the-art results on real-world datasets with ground
truth. Moreover, an evaluation on the Human3.6M benchmark shows that the
performance of our method is on par with top-performing approaches on the more
classic problem of 3D human pose from a third-person viewpoint.
Comment: 14 pages. arXiv admin note: substantial text overlap with
arXiv:1907.1004