Adaptive Graphical Model Network for 2D Handpose Estimation
In this paper, we propose a new architecture called Adaptive Graphical Model
Network (AGMN) to tackle the task of 2D hand pose estimation from a monocular
RGB image. The AGMN consists of two branches of deep convolutional neural
networks for calculating unary and pairwise potential functions, followed by a
graphical model inference module for integrating unary and pairwise potentials.
Unlike existing architectures proposed to combine DCNNs with graphical models,
our AGMN is novel in that the parameters of its graphical model are conditioned
on and fully adaptive to individual input images. Experiments show that our
approach outperforms the state-of-the-art method for 2D hand keypoint
estimation by a notable margin on two public datasets. Comment: 30th British Machine Vision Conference (BMVC)
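The core inference step described above can be sketched in miniature: unary score maps (from one CNN branch) are combined with pairwise compatibilities (from the other branch) via graphical-model inference. The toy chain structure, array shapes, and max-product choice below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def chain_max_product(unaries, pairwise):
    """Max-product (MAP) inference on a chain of keypoints.

    unaries:  (K, S) array, score of each of S candidate locations per keypoint.
    pairwise: (K-1, S, S) array, compatibility between adjacent keypoints'
              locations (in AGMN these would be predicted per input image).
    Returns one location index per keypoint.
    """
    K, S = unaries.shape
    msg = np.zeros((K, S))               # forward max-messages
    back = np.zeros((K, S), dtype=int)   # backpointers for backtracking
    msg[0] = unaries[0]
    for k in range(1, K):
        scores = msg[k - 1][:, None] + pairwise[k - 1]   # (S, S)
        back[k] = scores.argmax(axis=0)
        msg[k] = unaries[k] + scores.max(axis=0)
    # Backtrack from the best final state to recover the full assignment.
    assignment = np.empty(K, dtype=int)
    assignment[-1] = msg[-1].argmax()
    for k in range(K - 1, 0, -1):
        assignment[k - 1] = back[k][assignment[k]]
    return assignment
```

In AGMN the pairwise potentials are not fixed: they are regressed from the image itself, so the same inference routine runs with image-conditioned parameters.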
GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB
We address the highly challenging problem of real-time 3D hand tracking based
on a monocular RGB-only sequence. Our tracking method combines a convolutional
neural network with a kinematic 3D hand model, such that it generalizes well to
unseen data, is robust to occlusions and varying camera viewpoints, and leads
to anatomically plausible as well as temporally smooth hand motions. For
training our CNN we propose a novel approach for the synthetic generation of
training data that is based on a geometrically consistent image-to-image
translation network. To be more specific, we use a neural network that
translates synthetic images to "real" images, such that the so-generated images
follow the same statistical distribution as real-world hand images. For
training this translation network we combine an adversarial loss and a
cycle-consistency loss with a geometric consistency loss in order to preserve
geometric properties (such as hand pose) during translation. We demonstrate
that our hand tracking system outperforms the current state-of-the-art on
challenging RGB-only footage.
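The training objective described above combines three terms: an adversarial loss, a cycle-consistency loss, and a geometric consistency loss that preserves hand pose during translation. The sketch below shows one plausible way to assemble such an objective; the loss weights, smoothing constant, and helper names are assumptions, not values from the paper.

```python
import numpy as np

def adversarial_loss(disc_fake):
    # Generator-side GAN loss on discriminator outputs for translated images
    # (non-saturating form; the small epsilon avoids log(0)).
    return -np.mean(np.log(disc_fake + 1e-8))

def cycle_loss(x, x_reconstructed):
    # L1 penalty: translating synthetic -> "real" -> synthetic should
    # reproduce the original input.
    return np.mean(np.abs(x - x_reconstructed))

def geometric_loss(pose_in, pose_out):
    # Penalize changes in geometric properties (e.g. keypoint maps)
    # between the input and the translated image.
    return np.mean((pose_in - pose_out) ** 2)

def total_loss(disc_fake, x, x_rec, pose_in, pose_out,
               w_cycle=10.0, w_geo=1.0):  # weights are illustrative
    return (adversarial_loss(disc_fake)
            + w_cycle * cycle_loss(x, x_rec)
            + w_geo * geometric_loss(pose_in, pose_out))
```

The geometric term is what distinguishes this setup from a plain CycleGAN-style objective: it ties the translation network to the downstream tracking task by forbidding pose drift.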