Supervised coordinate descent method with a 3D bilinear model for face alignment and tracking
Face alignment and tracking play important roles in facial performance capture. Existing data-driven methods for monocular videos suffer from large variations in
pose and expression. In this paper, we propose an efficient and robust method for this task by introducing a novel supervised coordinate descent method with a 3D bilinear representation. Instead of learning the mapping between the whole parameter set and image features directly with a cascaded regression framework, as current methods do,
we learn individual sets of parameter mappings separately, step by step, in a coordinate descent manner. Because different parameters make different contributions to the displacement of facial landmarks, our method is more discriminative than current whole-parameter cascaded regression methods. Benefiting from a 3D bilinear model learned from public databases, the proposed method handles out-of-plane head pose changes and extreme expressions better than other 2D-based methods. We present reliable face tracking results under various head poses and facial expressions on challenging video sequences collected online. The experimental results show that our method outperforms state-of-the-art data-driven methods.
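The abstract's core idea, updating one block of parameters at a time while the others stay fixed, can be illustrated with plain block coordinate descent on a least-squares objective. This is a generic sketch of the optimisation pattern, not the authors' supervised, feature-driven variant; the function name and block layout are illustrative:

```python
import numpy as np

def block_coordinate_descent(A, b, blocks, n_iters=200):
    """Minimise ||A p - b||^2 by exactly solving for one block of
    coordinates of p at a time while the other blocks stay fixed."""
    p = np.zeros(A.shape[1])
    for _ in range(n_iters):
        for idx in blocks:
            # Residual with the current block's contribution removed.
            r = b - A @ p + A[:, idx] @ p[idx]
            # Least-squares solve for this block only.
            p[idx] = np.linalg.lstsq(A[:, idx], r, rcond=None)[0]
    return p
```

Each inner step solves only a small subproblem, which mirrors fitting a separate regressor per parameter group instead of one regressor over all parameters at once.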
Face Alignment Assisted by Head Pose Estimation
In this paper we propose a supervised initialization scheme for cascaded face
alignment based on explicit head pose estimation. We first investigate the
failure cases of most state of the art face alignment approaches and observe
that these failures often share one common global property, i.e. the head pose
variation is usually large. Inspired by this, we propose a deep convolutional
network model for reliable and accurate head pose estimation. Instead of using
a mean face shape, or randomly selected shapes for cascaded face alignment
initialisation, we propose two schemes for generating initialisation: the first
one relies on projecting a mean 3D face shape (represented by 3D facial
landmarks) onto 2D image under the estimated head pose; the second one searches
nearest neighbour shapes from the training set according to head pose distance.
By doing so, the initialisation gets closer to the actual shape, which enhances
the possibility of convergence and in turn improves the face alignment
performance. We demonstrate the proposed method on the benchmark 300W dataset
and show very competitive performance in both head pose estimation and face
alignment.
Comment: Accepted by BMVC201
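The first initialisation scheme, projecting a mean 3D landmark set onto the image under the estimated head pose, can be sketched with a weak-perspective camera. This is a minimal illustration assuming Euler-angle pose and a weak-perspective model; the abstract does not fix the exact camera model, and all names here are hypothetical:

```python
import numpy as np

def euler_to_rotation(yaw, pitch, roll):
    """Compose a rotation matrix from Euler angles (radians), Rz @ Ry @ Rx."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch
    Rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])   # roll
    return Rz @ Ry @ Rx

def project_mean_shape(shape_3d, yaw, pitch, roll, scale=1.0, t=(0.0, 0.0)):
    """Weak-perspective projection of (N, 3) landmarks onto the image plane:
    rotate, drop depth, then scale and translate in 2D."""
    R = euler_to_rotation(yaw, pitch, roll)
    rotated = shape_3d @ R.T
    return scale * rotated[:, :2] + np.asarray(t)
```

The resulting 2D points serve as the cascade's starting shape in place of a mean or randomly chosen 2D shape.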
Pose-Invariant 3D Face Alignment
Face alignment aims to estimate the locations of a set of landmarks for a
given image. This problem has received much attention as evidenced by the
recent advancement in both the methodology and performance. However, most of
the existing works neither explicitly handle face images with arbitrary poses,
nor perform large-scale experiments on non-frontal and profile face images. In
order to address these limitations, this paper proposes a novel face alignment
algorithm that estimates both 2D and 3D landmarks and their 2D visibilities for
a face image with an arbitrary pose. By integrating a 3D deformable model, a
cascaded coupled-regressor approach is designed to estimate both the camera
projection matrix and the 3D landmarks. Furthermore, the 3D model also allows
us to automatically estimate the 2D landmark visibilities via surface normals.
We gather a substantially larger collection of all-pose face images to evaluate
our algorithm and demonstrate superior performance compared to state-of-the-art
methods.
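The visibility idea above, that a landmark becomes self-occluded when its surface normal, rotated by the head pose, points away from the camera, can be sketched directly. This assumes per-landmark unit normals are available on the 3D model; the function name and camera convention are illustrative, not the paper's exact formulation:

```python
import numpy as np

def landmark_visibility(normals, R, camera_dir=(0.0, 0.0, 1.0)):
    """Mark a landmark visible when its rotated surface normal has a
    positive component along the viewing direction toward the camera.

    normals: (N, 3) unit surface normals at the landmarks on the 3D model.
    R: 3x3 head rotation matrix.
    """
    rotated = normals @ R.T
    return rotated @ np.asarray(camera_dir) > 0.0
```

Under this test, a frontal face leaves all landmarks visible, while a large yaw flips the sign of the dot product for landmarks on the far cheek.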
Fitting 3D Morphable Models using Local Features
In this paper, we propose a novel fitting method that uses local image
features to fit a 3D Morphable Model to 2D images. To overcome the obstacle of
optimising a cost function that contains a non-differentiable feature
extraction operator, we use a learning-based cascaded regression method that
learns the gradient direction from data. The method allows us to simultaneously
solve for shape and pose parameters. Our method is thoroughly evaluated on
Morphable Model generated data and first results on real data are presented.
Compared to traditional fitting methods, which use simple raw features like
pixel colour or edge maps, local features have been shown to be much more
robust against variations in imaging conditions. Our approach is unique in that
we are the first to use local features to fit a Morphable Model.
Because of its speed, our method is suitable for real-time
applications. Our cascaded regression framework is available as an open source
library (https://github.com/patrikhuber).
Comment: Submitted to ICIP 2015; 4 pages, 4 figures
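The learning-based cascaded regression idea, replacing an analytic gradient with per-stage regressors learned from data in the spirit of the supervised descent method, can be shown on a toy one-parameter problem. This sketch stands in for the paper's local-feature setting: `f` plays the role of the non-differentiable feature extractor, and all names and choices here are purely illustrative:

```python
import numpy as np

def f(p):
    """Toy 'feature extractor': a nonlinear observation of the parameter."""
    return np.stack([np.sin(p), p ** 2], axis=-1)

def train_cascade(p_true, p_init, n_stages=5):
    """Learn one linear regressor per stage mapping feature residuals to
    parameter updates, so each stage acts as a learned descent direction."""
    p = p_init.copy()
    stages = []
    for _ in range(n_stages):
        X = f(p) - f(p_true)          # feature residual at current estimate
        d = p_true - p                # ideal parameter update
        W = np.linalg.lstsq(X, d, rcond=None)[0]
        stages.append(W)
        p = p + X @ W                 # advance training samples one stage
    return stages

def apply_cascade(stages, y, p_init):
    """Run the cascade given observed features y = f(p_true)."""
    p = p_init.copy()
    for W in stages:
        p = p + (f(p) - y) @ W
    return p
```

Each stage is a simple least-squares fit, yet the sequence of learned updates drives the estimate toward the ground truth without ever differentiating `f`.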