
    EV-SIFT - An Extended Scale Invariant Face Recognition for Plastic Surgery Face Recognition

    Automatic face recognition faces many challenging problems and has received much attention in recent years owing to its applications in many fields. Face recognition remains a challenging problem for which no single technique handles all conditions, such as changes in pose, expression and illumination, or ageing. Facial appearance change due to plastic surgery is an additional challenge that has arisen recently. This paper presents a new technique for accurate face recognition after plastic surgery. The technique uses entropy-based SIFT (EV-SIFT) features for recognition. The feature extracts the key points and the volume of the scale-space structure for which the information rate is determined. Since entropy is a higher-order statistical feature, this reduces the effect of uncertain variations in the face. The EV-SIFT features are then fed to a support vector machine for classification. Normal SIFT extracts key points based on image contrast, and V-SIFT extracts key points based on the volume of the structure, whereas EV-SIFT captures both contrast and volume information. The technique performs better than PCA-, normal SIFT- and V-SIFT-based feature extraction.
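    The distinguishing ingredient of EV-SIFT is the use of entropy, a higher-order statistic of the local intensity distribution, to rate the information content of candidate regions. A minimal sketch of that ingredient (illustration only; the full EV-SIFT keypoint selection and the scale-space volume computation are not reproduced here, and the function name is an assumption):

```python
import numpy as np

def patch_entropy(patch, bins=16):
    """Shannon entropy of a grayscale patch's intensity histogram.

    Patches whose intensity distribution carries more information score
    higher; flat, uninformative patches score zero. This is the
    higher-order statistic the EV-SIFT abstract relies on.
    """
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute 0 * log 0 = 0
    return float(-(p * np.log2(p)).sum())

# A flat patch carries no information; a varied patch carries more.
flat = np.full((8, 8), 128)
ramp = np.arange(64).reshape(8, 8) * 4
print(patch_entropy(flat), patch_entropy(ramp))
```

    In a keypoint detector, a score like this would be evaluated around each candidate key point so that low-information regions can be discarded.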

    Facial Image Reconstruction from a Corrupted Image by Support Vector Data Description

    This paper proposes a method for automatically reconstructing a facial image that is partially corrupted by noise or occlusion. The method has two key features: first, the automatic extraction of correspondences between the corrupted input face and a reference face without additional manual work; second, the reconstruction of the complete facial information from the corrupted facial information based on these correspondences. We propose a non-iterative approach that matches multiple feature points in order to obtain the correspondences between the input image and the reference face. The shape and texture of the whole face are then reconstructed by Support Vector Data Description (SVDD) from the partial correspondences obtained by matching. Experimental results show that the proposed SVDD-based reconstruction method gives smaller reconstruction errors for a facial image corrupted by Gaussian noise or occlusion than an existing linear projection reconstruction method with a regularization factor. The proposed method also reduces the mean intensity error per pixel by an average of 35%, especially when reconstructing a facial image corrupted by Gaussian noise.
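    SVDD describes "normal" data by the smallest hypersphere that encloses (most of) it; anything outside the boundary is treated as corrupted. The proper formulation is a quadratic program with slack variables and kernels, so the sketch below approximates only the geometric intuition with a sample mean and a distance quantile (an assumption-laden simplification, not the paper's method; both function names are illustrative):

```python
import numpy as np

def fit_sphere(X, q=0.95):
    """Crude data description: a center and a radius covering a fraction q
    of the training points. SVDD proper solves a QP for the minimum-volume
    enclosing hypersphere; this stand-in uses the mean and a quantile."""
    c = X.mean(axis=0)
    r = np.quantile(np.linalg.norm(X - c, axis=1), q)
    return c, r

def is_inside(x, c, r):
    """Points inside the sphere are treated as 'described' (uncorrupted)."""
    return bool(np.linalg.norm(x - c) <= r)
```

    In the reconstruction setting, the description is fitted on clean reference faces, and corrupted regions of the input are projected back toward the described region.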

    Face Recognition: Issues, Methods and Alternative Applications

    Face recognition, as one of the most successful applications of image analysis, has recently gained significant attention, owing to the availability of feasible technologies, including mobile solutions. Research in automatic face recognition has been conducted since the 1960s, but the problem is still largely unsolved. The last decade has seen significant progress in this area owing to advances in face modelling and analysis techniques. Although systems have been developed for face detection and tracking, reliable face recognition still poses a great challenge to computer vision and pattern recognition researchers. There are several reasons for the recent increased interest in face recognition, including rising public concern for security, the need for identity verification in the digital world, and the use of face analysis and modelling techniques in multimedia data management and computer entertainment. In this chapter, we discuss face recognition processing, including major components such as face detection, tracking, alignment and feature extraction, and point out the technical challenges of building a face recognition system. We focus on the most successful solutions available so far. The final part of the chapter describes selected face recognition methods and applications and their potential use in areas not related to face recognition.

    ROBUST DETECTION AND RECOGNITION SYSTEM BASED ON FACIAL EXTRACTION AND DECISION TREE

    An automatic face recognition system is proposed in this work, based on appearance-based features that consider the whole image as well as local features that focus on critical face points such as the eyes, mouth and nose. Face detection is the first major phase in face recognition systems; the Viola-Jones detector processes images efficiently and achieves high detection rates in real-time systems. Dimension reduction and feature extraction are then applied to the cropped image produced by detection. The Local Binary Pattern Histogram (LBPH) is a simple yet effective way of extracting image features, while Principal Component Analysis (PCA) has been widely used in pattern recognition; Linear Discriminant Analysis (LDA), which overcomes PCA's limitations, has also been used effectively in face recognition. Classification follows feature extraction; the machine learning algorithms used are PART and J48. The proposed system shows a high detection accuracy of 98.75% with Viola-Jones, while the features extracted by LDA and classified with J48 give the best F-measure, recall and precision.
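    The LBPH descriptor mentioned above encodes each pixel by thresholding its 8 neighbours against the centre and histogramming the resulting 8-bit codes. A minimal NumPy sketch of that idea (basic LBP without the uniform-pattern mapping or spatial grid used by, e.g., OpenCV's LBPH recognizer; the function name is illustrative):

```python
import numpy as np

def lbp_histogram(img):
    """Local Binary Pattern histogram of a grayscale image.

    Each interior pixel becomes an 8-bit code: one bit per neighbour,
    set when the neighbour is >= the centre pixel. The normalized
    histogram of codes is the texture descriptor compared between faces.
    """
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]
    # 8 neighbours, clockwise from top-left; each contributes one bit.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```

    A full LBPH pipeline would compute this histogram per cell of a grid over the face and concatenate the cells, so that spatial layout is preserved.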

    Facial Asymmetry Analysis Based on 3-D Dynamic Scans

    Facial dysfunction is a fundamental symptom of many neurological illnesses, such as stroke, Bell’s palsy and Parkinson’s disease. Current methods for detecting and assessing facial dysfunction rely mainly on trained practitioners and have significant limitations, as they are often subjective. This paper presents a computer-based method of facial asymmetry analysis that aims to detect facial dysfunction automatically. The method is based on dynamic 3-D scans of human faces. Preliminary evaluation results on facial sequences from the Hi4D-ADSIP database suggest that the proposed method can assist in the quantification and diagnosis of facial dysfunction in neurological patients.
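    The core of any facial asymmetry measure is comparing one side of the face with the mirror image of the other. A toy sketch of that idea on paired 2-D landmarks (an assumption: the paper works on dense dynamic 3-D scans, not sparse 2-D points, and this function name and midline convention are illustrative):

```python
import numpy as np

def asymmetry_score(left_pts, right_pts, midline_x=0.0):
    """Mean distance between left landmarks and mirrored right landmarks.

    Assumes paired (x, y) landmarks in the same order and a vertical
    facial midline at x = midline_x. A perfectly symmetric face scores 0;
    larger values indicate larger left/right deviation.
    """
    left = np.asarray(left_pts, dtype=float)
    right = np.asarray(right_pts, dtype=float)
    mirrored = right.copy()
    mirrored[:, 0] = 2.0 * midline_x - mirrored[:, 0]  # reflect across midline
    return float(np.linalg.norm(left - mirrored, axis=1).mean())
```

    Tracking such a score over a dynamic sequence, rather than a single frame, is what allows dysfunction during expression (e.g., an asymmetric smile) to be quantified.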

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    The smile is one of the key elements in identifying emotions and the present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform this model on the overall smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing the facial features improved it. Unlike many manual or semi-automatic methodologies, our approach automatically classifies all smiles as either `spontaneous' or `posed' using support vector machines (SVM). Experimental results on the large UvA-NEMO smile database are promising compared with other relevant methods.
    Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis
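    The HOG features that win here summarize a patch by a histogram of gradient orientations weighted by gradient magnitude. A single-cell sketch of that descriptor (unsigned orientations, no block normalization, so a simplification of the full HOG used by, e.g., scikit-image or OpenCV; the function name is illustrative):

```python
import numpy as np

def hog_cell(patch, bins=9):
    """Histogram of oriented gradients for one cell of a grayscale patch.

    Gradients come from central differences; orientations are folded into
    [0, 180) degrees; each pixel votes into its orientation bin weighted
    by its gradient magnitude.
    """
    p = np.asarray(patch, dtype=float)
    gy, gx = np.gradient(p)
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0        # unsigned orientation
    idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())           # magnitude-weighted votes
    return hist
```

    A full descriptor concatenates many such cells over the mouth region and feeds the vector to the SVM, which is the pipeline the abstract describes.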

    Relative Facial Action Unit Detection

    This paper presents a subject-independent facial action unit (AU) detection method by introducing the concept of relative AU detection for scenarios where the neutral face is not provided. We propose a new classification objective function that analyzes the temporal neighborhood of the current frame to decide whether the expression has recently increased, decreased or shown no change. This is a significant departure from the conventional absolute method, which decides AU classification from the current frame alone, without an explicit comparison with its neighboring frames. Our proposed method improves robustness to individual differences such as face scale and shape, age-related wrinkles, and transitions among expressions (e.g., lower-intensity expressions). Our experiments on three publicly available datasets (the Extended Cohn-Kanade (CK+), Bosphorus and DISFA databases) show significant improvement of our approach over conventional absolute techniques. Keywords: facial action coding system (FACS); relative facial action unit detection; temporal information
    Comment: Accepted at the IEEE Winter Conference on Applications of Computer Vision, Steamboat Springs, Colorado, USA, 201
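    The relative-detection idea can be caricatured as: instead of thresholding the current frame against a neutral face, compare it with its own recent past. A toy sketch on a 1-D AU intensity signal (the function name, window size and threshold are assumptions, and the paper's actual objective function is far richer than this):

```python
import numpy as np

def relative_change(intensities, t, window=3, eps=0.1):
    """Label frame t as 'increased', 'decreased' or 'no-change' by comparing
    its AU intensity with the mean of the preceding `window` frames.

    No neutral-face reference is needed: the decision is relative to the
    frame's own temporal neighborhood.
    """
    x = np.asarray(intensities, dtype=float)
    past = x[max(0, t - window):t].mean()   # recent temporal neighborhood
    delta = x[t] - past
    if delta > eps:
        return "increased"
    if delta < -eps:
        return "decreased"
    return "no-change"
```

    Because the comparison is relative, per-subject offsets (face shape, permanent wrinkles) cancel out, which is the robustness argument the abstract makes.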