Multilinear Wavelets: A Statistical Shape Space for Human Faces
We present a statistical model for 3D human faces in varying expression, which decomposes the surface of the face using a wavelet transform and learns many localized, decorrelated multilinear models on the resulting coefficients. Using this model, we are able to reconstruct faces from noisy and occluded 3D face scans and from facial motion sequences. Accurate reconstruction of face shape is important for applications such as tele-presence and gaming. The localized, multi-scale nature of our model allows recovery of fine-scale detail while retaining robustness to severe noise and occlusion, and it is computationally efficient and scalable. We validate these properties experimentally on challenging data in the form of static scans and motion sequences. We show that, in comparison to a global multilinear model, our model better preserves fine detail and is computationally faster, while in comparison to a localized PCA model, it better handles variation in expression, is faster, and allows us to fix identity parameters for a given subject.
Comment: 10 pages, 7 figures; accepted to ECCV 201
A framework for fast low-power multi-sensor 3D scene capture and reconstruction
Personalised modelling of facial action unit intensity
Facial expressions depend greatly on the facial morphology and expressiveness of the observed person. Recent studies have shown great improvement of personalized over non-personalized models in a variety of facial-expression-related tasks, such as face and emotion recognition. However, in the context of facial action unit (AU) intensity estimation, personalized modeling has been scarcely investigated. In this paper, we propose a two-step approach for personalized modeling of facial AU intensity from spontaneously displayed facial expressions. In the first step, we perform facial feature decomposition using the proposed matrix decomposition algorithm, which separates the person’s identity from facial expression. These two are then jointly modeled using the framework of Conditional Ordinal Random Fields, resulting in a personalized model for intensity estimation of AUs. Our experimental results show that the proposed personalized model largely outperforms non-personalized models for intensity estimation of AUs.
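The first step above, separating identity from expression, can be sketched with a much simpler stand-in decomposition: treat the per-subject average of the feature vectors as identity, and the per-frame residuals as expression. This is an illustrative assumption, not the paper's matrix decomposition algorithm; all names and the toy data below are hypothetical.

```python
# Hedged sketch: naive identity/expression split by averaging over frames.
# A stand-in for the paper's matrix decomposition, for intuition only.

def decompose(frames):
    """Split per-frame feature vectors of one subject into a single identity
    vector plus per-frame expression residuals (frames: equal-length lists)."""
    n = len(frames)
    identity = [sum(col) / n for col in zip(*frames)]
    expression = [[v - m for v, m in zip(frame, identity)] for frame in frames]
    return identity, expression

# Toy 2D features for one subject across three frames.
frames = [[1.0, 2.0], [3.0, 2.0], [2.0, 5.0]]
identity, expression = decompose(frames)
# identity == [2.0, 3.0]; the residuals carry the expression variation,
# which a conditional ordinal model could then score for AU intensity.
```

The point of the split is that the intensity model sees expression variation with the subject-specific offset removed, which is what makes the downstream model "personalized".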
Statistical shape modelling for expression-invariant face analysis and recognition
This paper introduces a 3-D shape representation scheme for automatic face analysis and identification, and demonstrates its invariance to facial expression. The core of this scheme lies in the combination of statistical shape modelling and non-rigid deformation matching. While the former matches 3-D faces exhibiting facial expression, the latter provides a low-dimensional feature vector that controls the deformation of the model for matching the shape of a new input, thereby enabling robust identification of 3-D faces. The proposed scheme is also able to handle pose variation in the absence of a large amount of missing data. To assist the establishment of dense point correspondences, a modified free-form deformation based on B-spline warping is applied with the help of extracted landmarks. A hybrid iterative closest point method is introduced for matching the models to new data. The feasibility and effectiveness of the proposed method were investigated using the standard, publicly available Gavab and BU-3DFE datasets, which contain faces with expression and pose changes. The performance of the system was compared with that of nine benchmark approaches. The experimental results demonstrate that the proposed scheme provides a competitive solution for face recognition.
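The iterative closest point (ICP) matching mentioned above alternates two steps: find nearest-neighbour correspondences, then solve for the alignment in closed form. A minimal 1-D, translation-only sketch of that loop is shown below; it is a toy stand-in for the paper's hybrid ICP on 3-D surfaces, and all names and data are illustrative assumptions.

```python
# Hedged sketch of ICP restricted to 1D translation:
# correspondences by nearest neighbour, update by mean residual.

def icp_translation(source, target, iterations=10):
    """Estimate the translation aligning 1D point set `source` to `target`."""
    shift = 0.0
    for _ in range(iterations):
        moved = [p + shift for p in source]
        # Step 1: nearest-neighbour correspondence for each moved point.
        matches = [min(target, key=lambda t: abs(t - m)) for m in moved]
        # Step 2: the optimal translation update is the mean residual.
        shift += sum(t - m for t, m in zip(matches, moved)) / len(moved)
    return shift

source = [0.0, 1.0, 2.0]
target = [0.4, 1.4, 2.4]
# With this small offset the correct correspondences are found immediately,
# and the recovered shift converges to 0.4.
shift = icp_translation(source, target)
```

The same structure carries over to 3-D, where step 2 becomes a rigid (or, in the hybrid variant, partly non-rigid) transform estimate; ICP is only locally convergent, which is why the paper seeds it with landmark-guided correspondences.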