
    Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression

    We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point (landmark) prediction using learned identity-invariant representations. Established approaches to these problems can work well when sufficient examples and labels are available for a particular identity and the factors of variation are highly controlled. However, labeled examples of facial expressions, emotions, and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of these techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly supervised approach in which identity labels are used to learn the factors of variation linked to identity separately from those related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations of expression, which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used active appearance models and constrained local models by replacing their underlying point distribution models, typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking. Comment: to appear in the Image and Vision Computing Journal (IMAVIS).
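The weakly supervised identity/expression split described above can be illustrated with a toy sketch: given landmark vectors with identity labels, per-identity means capture identity variation, residuals capture expression variation, and PCA on each part yields separate subspaces in place of a single PCA point-distribution model. All names, dimensions, and data below are invented for illustration and are not from the paper.

```python
import numpy as np

# Toy data: several identities, each observed under several expressions.
rng = np.random.default_rng(0)
n_ids, n_expr, d = 5, 8, 10                           # identities, expressions each, landmark dim
identity_bases = rng.normal(size=(n_ids, d))          # per-person neutral shape
expr_offsets = rng.normal(size=(n_ids, n_expr, d))    # expression deformations
X = (identity_bases[:, None, :] + 0.3 * expr_offsets).reshape(-1, d)
labels = np.repeat(np.arange(n_ids), n_expr)

# Weak supervision: identity labels split variation into two parts.
id_means = np.stack([X[labels == i].mean(axis=0) for i in range(n_ids)])
identity_var = id_means - id_means.mean(axis=0)       # between-identity variation
expression_var = X - id_means[labels]                 # within-identity (expression) variation

def pca_basis(A, k):
    """Top-k principal directions of the rows of A."""
    A = A - A.mean(axis=0)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k]

U_id = pca_basis(identity_var, 3)                     # identity subspace
U_expr = pca_basis(expression_var, 3)                 # expression subspace

# Identity-normalized representation: remove each person's identity component.
X_norm = X - id_means[labels]
print(X_norm.shape)  # (40, 10)
```

A factorized model of this kind could then replace the single PCA shape model inside an active appearance model or constrained local model, which is the extension the abstract describes.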

    Facial Component Detection in Thermal Imagery

    This paper studies the problem of detecting facial components, specifically the eyes, nostrils, and mouth, in thermal imagery. One immediate goal is to enable automatic registration of facial thermal images. The eyes and nostrils are detected using Haar features and the GentleBoost algorithm, which are shown to provide superior detection rates. The mouth is then detected from the eye and nostril detections using measures of entropy and self-similarity. The results show that reliable facial component detection is feasible with this methodology, achieving a correct detection rate of 0.8 for both eyes and nostrils. Correct eye and nostril detection enables correct detection of the mouth in 65% of closed-mouth test images and in 73% of open-mouth test images.
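The GentleBoost algorithm named above can be sketched in a few lines: each round fits a weighted least-squares weak learner (a regression stump here) to the labels, adds it to the ensemble, and reweights examples by exp(-y·f). This is a generic sketch of the boosting scheme only; the Haar feature extraction from thermal images is omitted, and the toy data below stands in for eye/non-eye patch features.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted least-squares regression stump: f(x) = a if x[j] > thr else b."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            mask = X[:, j] > thr
            wr, wl = w[mask].sum(), w[~mask].sum()
            a = (w[mask] * y[mask]).sum() / wr if wr > 0 else 0.0
            b = (w[~mask] * y[~mask]).sum() / wl if wl > 0 else 0.0
            err = (w * (y - np.where(mask, a, b)) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, thr, a, b)
    return best[1:]

def gentleboost(X, y, rounds=10):
    """GentleBoost: additive model F = sum of stumps, weights w *= exp(-y f)."""
    w = np.full(len(y), 1.0 / len(y))
    stumps = []
    for _ in range(rounds):
        j, thr, a, b = fit_stump(X, y, w)
        f = np.where(X[:, j] > thr, a, b)
        w *= np.exp(-y * f)
        w /= w.sum()
        stumps.append((j, thr, a, b))
    return stumps

def predict(stumps, X):
    F = np.zeros(len(X))
    for j, thr, a, b in stumps:
        F += np.where(X[:, j] > thr, a, b)
    return np.sign(F)

# Toy data: two well-separated blobs standing in for eye / non-eye patches.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (30, 4)), rng.normal(1, 0.5, (30, 4))])
y = np.concatenate([-np.ones(30), np.ones(30)])
model = gentleboost(X, y)
acc = (predict(model, X) == y).mean()
```

In the paper's setting the feature vectors would be Haar-filter responses computed over thermal image patches rather than the synthetic features used here.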

    A New Computer-aided Technique for Planning the Aesthetic Outcome of Plastic Surgery

    Plastic surgery plays a major role in today's health care. Planning plastic face surgery requires dealing with the elusive concept of attractiveness when evaluating feasible beautification of a particular face. Existing computer tools essentially allow one to manually warp 2D images or 3D face scans in order to produce images simulating possible surgical outcomes. How to manipulate the face, as well as the evaluation of the results, is left to the surgeon's judgement. We propose a new quantitative approach able to automatically suggest effective patient-specific improvements of facial attractiveness. The general idea is to compare the patient's face, excluding the facial feature to be improved, with a large database of attractive faces. The corresponding feature of the most similar faces is then applied, with a suitable morphing, to the patient's face. In this paper we present a first application of this general idea in the field of nose surgery. An aesthetically effective rhinoplasty is suggested on the basis of the entire face profile, a very important 2D feature for rating facial attractiveness.
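The retrieval-and-morph idea above can be sketched as: measure similarity to the database while excluding the feature to be improved, retrieve the nearest attractive face, and blend that face's feature into the patient's profile. The profile vectors, the index range marking the "nose" segment, the database, and the blending weight below are all invented for this illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, nose = 20, slice(8, 12)              # profile length, hypothetical nose landmarks
database = rng.normal(size=(50, d))     # attractive-face profile vectors (toy)
patient = rng.normal(size=d)            # patient's profile vector (toy)

# Compare the patient with the database, excluding the nose region.
rest = np.ones(d, dtype=bool)
rest[nose] = False
dist = np.linalg.norm(database[:, rest] - patient[rest], axis=1)
best = database[np.argmin(dist)]        # most similar attractive face

# Morph: blend the retrieved nose into the patient's profile.
alpha = 0.7                             # blending weight (illustrative)
suggestion = patient.copy()
suggestion[nose] = (1 - alpha) * patient[nose] + alpha * best[nose]
```

In practice the profile would be a curve of 2D landmark coordinates and the morphing a geometric warp, but the structure of the computation, exclude the target feature, retrieve by similarity, then blend, is the same.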