2D-to-3D facial expression transfer
Automatically changing the expression and physical features of a face from an input image has traditionally been tackled in the 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, first computes the subject's 3D shape and then transfers it to a new, potentially unobserved expression. For this purpose, we parameterize the rest shape (obtained from standard factorization approaches over the input video) using a triangular mesh, which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that differ largely from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
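A minimal sketch of how such a region-constrained mapping could be set up, assuming vertex-wise correspondence between source and target rest shapes and a per-region affine model; the function names and the affine formulation are illustrative assumptions, not the paper's exact method:

```python
# Sketch (not the authors' code): transfer an expression from a source blend
# shape to a target rest shape via per-region least-squares affine maps, so
# that semantically equivalent regions deform together.
import numpy as np

def fit_region_maps(src_rest, tgt_rest, regions):
    """For each semantic region, fit an affine map with tgt ~= [src|1] @ M
    by least squares over the region's corresponding vertices."""
    maps = {}
    for r in np.unique(regions):
        s = src_rest[regions == r]                 # (n_r, 3) source vertices
        t = tgt_rest[regions == r]                 # (n_r, 3) target vertices
        S = np.hstack([s, np.ones((len(s), 1))])   # homogeneous coordinates
        M, *_ = np.linalg.lstsq(S, t, rcond=None)  # (4, 3) affine map
        maps[r] = M
    return maps

def transfer_expression(src_rest, src_expr, tgt_rest, regions, maps):
    """Carry the source expression displacement into the target geometry,
    region by region (only the linear part acts on displacements)."""
    tgt_expr = tgt_rest.copy()
    for r, M in maps.items():
        idx = regions == r
        d = src_expr[idx] - src_rest[idx]               # source displacement
        D = np.hstack([d, np.zeros((idx.sum(), 1))])    # zero homogeneous slot
        tgt_expr[idx] += D @ M
    return tgt_expr
```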
A Survey of the Trends in Facial and Expression Recognition Databases and Methods
Automated facial identification and facial expression recognition have been
topics of active research over the past few decades. Facial and expression
recognition find applications in human-computer interfaces, subject tracking,
real-time security surveillance systems and social networking. Several holistic
and geometric methods have been developed to identify faces and expressions
using public and local facial image databases. In this work we present the
evolution in facial image data sets and the methodologies for facial
identification and recognition of expressions such as anger, sadness,
happiness, disgust, fear and surprise. We observe that most earlier approaches
to facial and expression recognition aimed at improving recognition rates for
facial feature-based methods on static images. Recent methodologies, however,
have shifted focus towards robust facial/expression recognition on large image
databases that vary across space (gathered from the internet) and time (video
recordings). These evolutionary trends in databases and methodologies can be
useful for assessing next-generation topics with applications in security or
personal identification systems that involve "Quantitative face" assessments.
First report of generalized face processing difficulties in Möbius sequence.
Reverse simulation models of facial expression recognition suggest that we recognize the emotions of others by running implicit motor programmes responsible for the production of that expression. Previous work has tested this theory by examining facial expression recognition in participants with Möbius sequence, a condition characterized by congenital bilateral facial paralysis. However, a mixed pattern of findings has emerged, and it has not yet been tested whether these individuals can imagine facial expressions, a process also hypothesized to be underpinned by proprioceptive feedback from the face. We investigated this issue by examining expression recognition and imagery in six participants with Möbius sequence, and also carried out tests assessing facial identity and object recognition, as well as basic visual processing. While five of the six participants presented with expression recognition impairments, only one was impaired at the imagery of facial expressions. Further, five participants presented with other difficulties in the recognition of facial identity or objects, or in lower-level visual processing. We discuss the implications of our findings for the reverse simulation model, and suggest that facial identity recognition impairments may be more severe in the condition than has previously been noted.
Automatic facial expression tracking for 4D range scans
This paper presents a fully automatic approach to spatio-temporal facial expression tracking for 4D range scans, requiring no manual intervention (such as specifying landmarks). The approach consists of three steps: rigid registration, facial model reconstruction, and facial expression tracking. A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registration between a template facial model and a range scan while accounting for differences in scale. A deformable model, physically based on thin shells, is proposed to faithfully reconstruct the facial surface and texture from the range data. The reconstructed facial model is then used to track the facial expressions present in a sequence of range scans via the deformable model.
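A minimal sketch of one scale-aware ICP iteration, using the closed-form similarity solve of Umeyama (1991) for the alignment step; the nearest-neighbour matching and function names are illustrative assumptions, not the paper's implementation:

```python
# Sketch: one Scaling-ICP-style iteration aligning a template to a range scan.
import numpy as np
from scipy.spatial import cKDTree

def sicp_step(template, scan):
    """Match each template point to its nearest scan point, then solve for
    the scale s, rotation R, translation t minimizing the matched residuals."""
    tree = cKDTree(scan)
    _, idx = tree.query(template)               # nearest-neighbour matches
    matched = scan[idx]

    mu_p, mu_q = template.mean(0), matched.mean(0)
    P, Q = template - mu_p, matched - mu_q      # centred point sets

    U, S, Vt = np.linalg.svd(Q.T @ P)           # cross-covariance SVD
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (P ** 2).sum()  # optimal uniform scale
    t = mu_q - s * R @ mu_p
    return s, R, t                              # apply as: s * (R @ p) + t
```

Iterating this step (re-matching, re-solving, applying the update) until the residual stabilizes gives the scale-corrected rigid registration.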
Regression-based Multi-View Facial Expression Recognition
We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g., mouth corners) from non-frontal to frontal view, where recognition of the expressions can then be performed using a state-of-the-art facial expression recognition method. To learn the mapping functions we investigate four regression models: Linear Regression (LR), Support Vector Regression (SVR), Relevance Vector Regression (RVR) and Gaussian Process Regression (GPR). Our extensive experiments on the CMU Multi-PIE facial expression database show that the proposed scheme outperforms view-specific classifiers while using considerably less training data.
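A minimal sketch of the point-mapping step with one of the regressors the paper compares; the data shapes, the synthetic stand-in data, and the choice of scikit-learn are illustrative assumptions, not the authors' pipeline:

```python
# Sketch: learn a mapping from non-frontal 2-D facial landmarks to their
# frontal-view positions, one regressor per output coordinate.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2 * 20))            # 500 faces x 20 (x, y) points
y = X + rng.normal(scale=0.05, size=X.shape)  # stand-in frontal targets

# SVR here; LR, RVR, or GPR would slot into the same interface.
model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0)).fit(X, y)
frontalized = model.predict(X[:1])            # frontalize a new shape
```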
