477 research outputs found

    Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression

    We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity-invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions, and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the factors of variation linked to identity separately from those related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions, which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, which are typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking.

    Comment: to appear in the Image and Vision Computing Journal (IMAVIS).
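    The abstract above proposes replacing the PCA-based point distribution model at the core of active appearance and constrained local models with an identity-expression factorized representation. As a reference point, here is a minimal sketch of the standard PCA point distribution model being replaced (function names are hypothetical; the paper's factorized model is not reproduced here):

```python
import numpy as np

def fit_point_distribution_model(shapes, n_components=2):
    """Fit a PCA-based point distribution model.

    shapes: (n_samples, 2 * n_landmarks) array of flattened (x, y)
    landmark coordinates. Returns the mean shape and the leading
    principal modes of shape variation.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct(mean, modes, coeffs):
    """Synthesize a shape from mode coefficients."""
    return mean + coeffs @ modes

# Toy example: 20 random 5-landmark shapes.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 10))
mean, modes = fit_point_distribution_model(shapes)
shape = reconstruct(mean, modes, np.array([0.5, -0.3]))
```

    In the factorized setting the paper describes, the single set of PCA modes would be split into separate identity and expression factors rather than mixed in one linear basis.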

    Generative Interpretation of Medical Images


    Active illumination and appearance model for face alignment


    Robust correlated and individual component analysis

    Recovering correlated and individual components of two, possibly temporally misaligned, sets of data is a fundamental task in disciplines such as image, vision, and behavior computing, with application to problems such as multi-modal fusion (via the correlated components), predictive analysis, and clustering (via the individual ones). Here, we study the extraction of correlated and individual components under real-world conditions, namely i) the presence of gross non-Gaussian noise and ii) temporally misaligned data. In this light, we propose a method for the Robust Correlated and Individual Component Analysis (RCICA) of two sets of data in the presence of gross, sparse errors. We furthermore extend RCICA to handle temporal incongruities arising in the data. To this end, two suitable optimization problems are solved. The generality of the proposed methods is demonstrated by applying them to four applications, namely i) heterogeneous face recognition, ii) multi-modal feature fusion for human behavior analysis (i.e., audio-visual prediction of interest and conflict), iii) face clustering, and iv) the temporal alignment of facial expressions. Experimental results on 2 synthetic and 7 real-world datasets indicate the robustness and effectiveness of the proposed methods on these application domains, outperforming other state-of-the-art methods in the field.
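    RCICA itself adds robustness to gross sparse errors and temporal misalignment; the simpler core idea of recovering the correlated components of two views is what classical canonical correlation analysis (CCA) does. A minimal CCA sketch on synthetic two-view data with a shared latent signal (all names here are hypothetical, not from the paper):

```python
import numpy as np

def cca_correlated_components(X, Y, n_components=1, eps=1e-8):
    """Classical CCA: find projections of X and Y that are maximally
    correlated. X, Y: (n_samples, n_features) observation matrices."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Whiten each view via its SVD.
    Ux, Sx, Vxt = np.linalg.svd(X, full_matrices=False)
    Uy, Sy, Vyt = np.linalg.svd(Y, full_matrices=False)
    # SVD of the cross-correlation of the whitened views gives the
    # canonical directions and canonical correlations.
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)
    Wx = Vxt.T @ np.diag(1.0 / (Sx + eps)) @ U[:, :n_components]
    Wy = Vyt.T @ np.diag(1.0 / (Sy + eps)) @ Vt.T[:, :n_components]
    return X @ Wx, Y @ Wy, S[:n_components]

# Two views driven by one shared latent signal plus small noise.
rng = np.random.default_rng(1)
shared = rng.normal(size=(200, 1))
X = shared @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(200, 5))
Y = shared @ rng.normal(size=(1, 4)) + 0.1 * rng.normal(size=(200, 4))
zx, zy, corr = cca_correlated_components(X, Y)
```

    RCICA departs from this least-squares formulation by modeling gross sparse errors explicitly, which is what makes it robust to the non-Gaussian noise conditions the abstract describes.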

    Modelling of Orthogonal Craniofacial Profiles

    We present a fully-automatic image processing pipeline to build a set of 2D morphable models of three craniofacial profiles from orthogonal viewpoints (side, front, and top) using a set of 3D head surface images. Subjects in this dataset wear a close-fitting latex cap to reveal the overall skull shape. Texture-based 3D pose normalization and facial landmarking are applied to extract the profiles from the raw 3D scans. Fully-automatic profile annotation, subdivision, and registration methods establish dense correspondence among the sagittal profiles. The collection of sagittal profiles in dense correspondence is scaled and aligned using Generalised Procrustes Analysis (GPA) before principal component analysis is applied to generate a morphable model. Additionally, we propose a new alternative alignment method, Ellipse Centre Nasion (ECN). Our model is used in a case study of craniosynostosis intervention outcome evaluation, which shows that the proposed model achieves state-of-the-art results. We make publicly available both the morphable models and the profile dataset used to construct them.
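    The GPA-then-PCA step in the pipeline above is a standard statistical shape modelling recipe. A minimal sketch on toy 2D profiles (function names hypothetical; the paper's ECN alignment, annotation, and registration stages are not reproduced):

```python
import numpy as np

def procrustes_align(shape, ref):
    """Similarity-align one (n_points, 2) shape to a reference."""
    a = shape - shape.mean(axis=0)
    b = ref - ref.mean(axis=0)
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    # Optimal rotation from the SVD of the cross-covariance.
    U, _, Vt = np.linalg.svd(a.T @ b)
    return a @ (U @ Vt)

def generalised_procrustes(shapes, n_iter=5):
    """Iteratively align all shapes to their evolving mean (GPA)."""
    ref = shapes[0]
    for _ in range(n_iter):
        aligned = np.array([procrustes_align(s, ref) for s in shapes])
        ref = aligned.mean(axis=0)
    return aligned

def build_morphable_model(shapes, n_modes=2):
    """GPA alignment followed by PCA over the aligned profiles."""
    aligned = generalised_procrustes(shapes)
    flat = aligned.reshape(len(aligned), -1)
    mean = flat.mean(axis=0)
    _, _, Vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, Vt[:n_modes]

# Toy example: 15 random profiles of 6 points each.
rng = np.random.default_rng(2)
profiles = rng.normal(size=(15, 6, 2))
mean, modes = build_morphable_model(profiles)
```

    New profile shapes can then be synthesized as the mean plus a weighted sum of the principal modes, which is what makes the model "morphable".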