
    A Novel Approach to Face Recognition using Image Segmentation based on SPCA-KNN Method

    In this paper we propose a novel method for face recognition using a hybrid SPCA-KNN (SIFT-PCA-KNN) approach. The proposed method consists of three parts. The first part preprocesses face images using a graph-based algorithm and the SIFT (Scale Invariant Feature Transform) descriptor; the graph-based topology is used for matching two face images. In the second part, eigenvalues and eigenvectors are extracted from each input face image. The goal is to extract the important information from the face data and represent it as a set of new orthogonal variables called principal components. In the final part, a nearest neighbor classifier is designed to classify the face images based on the SPCA-KNN algorithm. The algorithm has been tested on 100 different subjects (15 images per class). The experimental results show that the proposed method has a positive effect on overall face recognition performance and outperforms the other examined methods.
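    Below is a minimal, hypothetical Python sketch of the three-stage pipeline the abstract describes (feature preprocessing, projection onto principal components, nearest-neighbor classification). It is not the authors' implementation: the flattened-image features stand in for the SIFT/graph-based preprocessing, and all function names and parameter values are illustrative.

```python
# Hypothetical sketch of a preprocessing -> PCA -> KNN face recognition pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def extract_features(images):
    """Flatten equally sized grayscale face images into row vectors.
    (A stand-in for the SIFT/graph-based preprocessing described in the paper.)"""
    return np.array([img.ravel() for img in images], dtype=np.float64)

def train_and_classify(train_images, train_labels, test_images,
                       n_components=50, k=1):
    """Project faces onto the leading eigenvectors of the training data and
    classify test faces with a k-nearest-neighbor classifier."""
    train_feats = extract_features(train_images)
    test_feats = extract_features(test_images)

    # Learn the principal components (eigenvectors of the face data) on the training set.
    pca = PCA(n_components=n_components).fit(train_feats)
    train_proj = pca.transform(train_feats)
    test_proj = pca.transform(test_feats)

    # Nearest-neighbor classification in the reduced PCA subspace.
    knn = KNeighborsClassifier(n_neighbors=k).fit(train_proj, train_labels)
    return knn.predict(test_proj)
```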

    Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression

    We present techniques for improving performance-driven facial animation, emotion recognition, and facial key-point or landmark prediction using learned identity-invariant representations. Established approaches to these problems can work well if sufficient examples and labels for a particular identity are available and factors of variation are highly controlled. However, labeled examples of facial expressions, emotions and key-points for new individuals are difficult and costly to obtain. In this paper we improve the ability of these techniques to generalize to new and unseen individuals by explicitly modeling previously seen variations related to identity and expression. We use a weakly-supervised approach in which identity labels are used to learn the factors of variation linked to identity separately from the factors related to expression. We show how probabilistic modeling of these sources of variation allows one to learn identity-invariant representations for expressions, which can then be used to identity-normalize various procedures for facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, which are typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking. Comment: to appear in the Image and Vision Computing Journal (IMAVIS).
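    The idea of replacing a single PCA point distribution model with identity-expression factorized components can be sketched roughly as follows. This is a hypothetical illustration, not the authors' probabilistic model: it simply separates between-identity variation (per-identity mean shapes) from within-identity variation (expression residuals), fits a PCA basis to each, and uses the expression basis to identity-normalize a shape.

```python
# Hypothetical identity-expression factorization of a landmark shape model.
import numpy as np
from sklearn.decomposition import PCA

def factorize_shapes(shapes, identities, n_id=10, n_expr=10):
    """shapes: (N, 2L) array of L stacked (x, y) landmarks per sample.
    identities: length-N array of identity labels for each sample."""
    shapes = np.asarray(shapes, dtype=np.float64)
    ids = np.asarray(identities)

    # Per-identity mean shapes capture identity-related variation.
    id_means = {i: shapes[ids == i].mean(axis=0) for i in np.unique(ids)}
    identity_part = np.array([id_means[i] for i in ids])

    # Residuals around each identity's mean capture expression-related variation.
    expression_part = shapes - identity_part

    pca_id = PCA(n_components=n_id).fit(identity_part)
    pca_expr = PCA(n_components=n_expr).fit(expression_part)
    return pca_id, pca_expr, id_means

def identity_normalize(shape, identity_mean, pca_expr, global_mean):
    """Remove a shape's identity component, keeping only its expression
    deviation re-centered on the global mean shape."""
    residual = (np.asarray(shape) - identity_mean)[None, :]
    return global_mean + pca_expr.inverse_transform(pca_expr.transform(residual))[0]
```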