Robust 3D face landmark localization based on local coordinate coding
In the 3D facial animation and synthesis community, input faces are usually required to be labeled with a set of landmarks for parameterization. Because of variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under variations in pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state-of-the-art methods in terms of robustness, flexibility, and accuracy.
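The alignment stage above relies on the iterative closest points (ICP) algorithm. A minimal numpy sketch of the closed-form rigid-alignment step that ICP iterates (Kabsch/SVD solution, given correspondences) is shown below; it is an illustration of the general technique, not the paper's implementation:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    given point correspondences: the per-iteration step inside ICP."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# toy check: recover a known rotation and translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R, t = rigid_align(pts, moved)
```

In full ICP this solve alternates with a nearest-neighbor correspondence search until convergence; the paper additionally estimates an affine (not just rigid) transform.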
In-the-wild Facial Expression Recognition in Extreme Poses
Facial expression recognition is an active problem in computer vision
research. In recent years, the focus has moved from lab environments to
in-the-wild conditions, which is challenging, especially under extreme
poses. Current expression recognition systems typically try to remove pose
effects in pursuit of general applicability. In this work, we take the
opposite approach: we take head pose into account and recognize expressions
within specific head-pose classes. Our work has two parts: detect the head
pose and assign it to one of several pre-defined head-pose classes, then
perform facial expression recognition within each pose class. Our
experiments show that recognition with pose-class grouping is much better
than direct recognition that ignores pose. We combine hand-crafted features
(SIFT, LBP, and geometric features) with deep learning features as the
representation of the expressions. The hand-crafted features are fed into
the deep learning framework alongside the high-level deep learning
features. For comparison, we implement SVM and random forest as the
prediction models. To train and test our method, we labeled the face
dataset with the 6 basic expressions. Comment: Published at ICGIP201
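The pipeline above fuses hand-crafted and deep features into one representation, then classifies expressions separately within each pose class. The sketch below illustrates both ideas in plain numpy; the function names, a nearest-centroid classifier (standing in for the paper's SVM/random forest), and the toy centroids are all illustrative assumptions:

```python
import numpy as np

def fuse_features(handcrafted, deep):
    """L2-normalize each feature block, then concatenate,
    so neither block dominates by scale (illustrative fusion)."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(handcrafted), l2(deep)])

def predict_within_pose(x, centroids_by_pose, pose):
    """Nearest-centroid expression prediction restricted to one
    pose class (stand-in for a per-pose SVM / random forest)."""
    centroids = centroids_by_pose[pose]  # {expression_label: centroid}
    dists = {lab: np.linalg.norm(x - c) for lab, c in centroids.items()}
    return min(dists, key=dists.get)

# toy usage: two pose classes, two expression centroids each
centroids_by_pose = {
    "frontal": {"happy": np.array([1.0, 0.0]), "sad": np.array([0.0, 1.0])},
    "profile": {"happy": np.array([1.0, 1.0]), "sad": np.array([-1.0, -1.0])},
}
fused = fuse_features(np.array([3.0, 4.0]), np.array([0.0, 2.0]))
pred = predict_within_pose(np.array([0.9, 0.1]), centroids_by_pose, "frontal")
```

Grouping by pose first means each per-pose classifier only has to separate expressions under one viewpoint, which is the source of the reported accuracy gain.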
Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression
We present techniques for improving performance-driven facial animation,
emotion recognition, and facial key-point or landmark prediction using learned
identity invariant representations. Established approaches to these problems
can work well if sufficient examples and labels for a particular identity are
available and factors of variation are highly controlled. However, labeled
examples of facial expressions, emotions and key-points for new individuals are
difficult and costly to obtain. In this paper we improve the ability of
techniques to generalize to new and unseen individuals by explicitly modeling
previously seen variations related to identity and expression. We use a
weakly-supervised approach in which identity labels are used to learn the
different factors of variation linked to identity separately from factors
related to expression. We show how probabilistic modeling of these sources of
variation allows one to learn identity-invariant representations for
expressions which can then be used to identity-normalize various procedures for
facial expression analysis and animation control. We also show how to extend
the widely used techniques of active appearance models and constrained local
models through replacing the underlying point distribution models which are
typically constructed using principal component analysis with
identity-expression factorized representations. We present a wide variety of
experiments in which we consistently improve performance on emotion
recognition, markerless performance-driven facial animation and facial
key-point tracking. Comment: to appear in Image and Vision Computing Journal (IMAVIS)
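The abstract above proposes replacing the PCA-built point distribution model inside AAMs/CLMs with an identity-expression factorized one. A minimal numpy sketch of one way such a factorization could be built from identity-labeled shapes is shown below; the construction (identity basis from per-identity mean shapes, expression basis from residuals about those means) is a simple illustrative assumption, not the paper's probabilistic model:

```python
import numpy as np

def factorized_pdm(shapes, ids, n_id=1, n_ex=2):
    """Split landmark-shape variation into identity and expression
    subspaces. shapes: (N, D) flattened landmark vectors; ids: (N,)
    identity labels. Returns mean shape and the two PCA bases."""
    mean = shapes.mean(axis=0)
    uniq = np.unique(ids)
    id_means = {i: shapes[ids == i].mean(axis=0) for i in uniq}
    # identity variation: per-identity mean shapes about the global mean
    id_mat = np.stack([id_means[i] - mean for i in uniq])
    # expression variation: each shape about its own identity mean
    ex_mat = np.stack([s - id_means[i] for s, i in zip(shapes, ids)])
    U_id = np.linalg.svd(id_mat, full_matrices=False)[2][:n_id]
    U_ex = np.linalg.svd(ex_mat, full_matrices=False)[2][:n_ex]
    return mean, U_id, U_ex

# toy data: 2 identities, 3 noisy shapes each, D = 6
rng = np.random.default_rng(1)
ids = np.array([0, 0, 0, 1, 1, 1])
base = {0: rng.normal(size=6), 1: rng.normal(size=6)}
shapes = np.stack([base[i] + 0.1 * rng.normal(size=6) for i in ids])
mean, U_id, U_ex = factorized_pdm(shapes, ids, n_id=1, n_ex=2)
```

A shape is then approximated as the mean plus a combination of identity-basis and expression-basis directions, so expression coefficients can be read off after identity is explained away.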