    Pattern Recognition of Surgically Altered Face Images Using Multi-Objective Evolutionary Algorithm

    Plastic surgery has recently emerged as an important covariate of face recognition, alongside pose, expression, illumination, aging, and disguise. Plastic surgery procedures change the texture, appearance, and shape of different facial regions, so it is difficult for conventional face recognition algorithms to match a post-surgery face image with a pre-surgery one. The non-linear variations produced by plastic surgery procedures are hard to address with current face recognition algorithms. The multi-objective evolutionary algorithm is a novel approach to recognizing surgically altered face images. The algorithm starts by generating non-disjoint face granules, and two feature extractors, EUCLBP (Extended Uniform Circular Local Binary Pattern) and SIFT (Scale Invariant Feature Transform), are used to extract discriminating facial information from the face granules. DOI: 10.17762/ijritcc2321-8169.150316
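
    As a rough illustration of the two descriptors named above, the sketch below computes a uniform LBP histogram (standing in for EUCLBP, whose extended circular variant is not in standard libraries) and SIFT descriptors for a single face granule, using OpenCV and scikit-image. Granule generation and the evolutionary weighting stage are omitted, and the input file name is hypothetical.

        import cv2
        import numpy as np
        from skimage.feature import local_binary_pattern

        def granule_features(granule_gray):
            # Uniform LBP over the granule (16 points, radius 2) -> 18-bin histogram
            lbp = local_binary_pattern(granule_gray, P=16, R=2, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=18, range=(0, 18), density=True)
            # SIFT keypoints and their 128-D descriptors over the same granule
            sift = cv2.SIFT_create()
            _, sift_desc = sift.detectAndCompute(granule_gray, None)
            return lbp_hist, sift_desc

        # Hypothetical input: one pre-cropped grayscale face granule
        granule = cv2.imread("face_granule.png", cv2.IMREAD_GRAYSCALE)
        lbp_hist, sift_desc = granule_features(granule)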

    Region-based facial expression recognition in still images

    In Facial Expression Recognition Systems (FERS), only particular regions of the face are utilized for discrimination. The areas of the eyes, eyebrows, nose, and mouth are the most important features in any FERS. Applying facial feature descriptors such as the local binary pattern (LBP) to these areas yields an effective and efficient FERS. In this paper, we propose an automatic facial expression recognition system. Unlike other systems, it detects and extracts the informative and discriminant regions of the face (i.e., the eye, nose, and mouth areas) using Haar-feature-based cascade classifiers, and these region-based features are stored in separate image files as a preprocessing step. LBP is then applied to these image files for facial texture representation, and a feature vector per subject is obtained by concatenating the resulting LBP histograms of the decomposed region-based features. The one-vs-rest SVM, a popular multi-class method, is employed with the Radial Basis Function (RBF) kernel for facial expression classification. Experimental results show that this approach performs well on both frontal and near-frontal facial images in terms of accuracy and time complexity. Cohn-Kanade and JAFFE, which are benchmark facial expression datasets, are used to evaluate this approach.
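
    A minimal sketch of this pipeline, assuming OpenCV, scikit-image, and scikit-learn: Haar cascades locate the regions, per-region uniform LBP histograms are concatenated into one feature vector, and a one-vs-rest RBF SVM classifies it. The nose and mouth cascade file names are assumptions (they come from opencv_contrib, not core OpenCV), and the keep-first-detection rule is illustrative.

        import cv2
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.svm import SVC

        CASCADES = {
            "eyes": cv2.data.haarcascades + "haarcascade_eye.xml",
            "nose": "haarcascade_mcs_nose.xml",    # assumed: from opencv_contrib
            "mouth": "haarcascade_mcs_mouth.xml",  # assumed: from opencv_contrib
        }

        def region_lbp_vector(face_gray):
            # Detect each region, compute its uniform LBP histogram, concatenate.
            parts = []
            for name, path in CASCADES.items():
                boxes = cv2.CascadeClassifier(path).detectMultiScale(face_gray, 1.1, 5)
                x, y, w, h = boxes[0]  # illustrative: keep the first detection
                region = face_gray[y:y + h, x:x + w]
                lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
                hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
                parts.append(hist)
            return np.concatenate(parts)

        # X: stacked region-based vectors, y: expression labels (e.g. from CK+)
        # clf = OneVsRestClassifier(SVC(kernel="rbf", gamma="scale")).fit(X, y)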

    Dealing with occlusions in face recognition by region-based fusion

    Recent research efforts in the face recognition community have focused on improving the robustness of systems under different variability conditions such as changes of pose, expression, illumination, low resolution, and occlusions. Occlusions are also a means of evading identification, commonly used when committing crimes or thefts. In this work we propose an approach based on the fusion of non-occluded facial regions that is robust to occlusions in a simple and effective manner. We evaluate the region-based approach in three face recognition systems: Face++ (a commercial software based on CNNs) and two advancements over LBP systems, one considering multiple scales and the other considering a larger number of facial regions. We report experiments on the ARFace database and demonstrate the robustness of using only non-occluded facial regions, the effectiveness of a large number of regions, and the limitations of the commercial system when dealing with occlusions. This work has been partially supported by project CogniMetrics TEC2015-70627-R (MINECO/FEDER). E. Gonzalez-Sosa is supported by a PhD scholarship from Universidad Autonoma de Madrid.
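
    The fusion idea can be sketched in a few lines: per-region match scores are combined only over regions not flagged as occluded. The region names, mean-fusion rule, and example scores below are illustrative assumptions, not the paper's exact scheme.

        import numpy as np

        def fused_score(region_scores, occluded):
            """region_scores: dict region -> match score for one probe/gallery pair;
            occluded: set of region names detected as occluded in the probe."""
            usable = [s for r, s in region_scores.items() if r not in occluded]
            if not usable:
                raise ValueError("all regions occluded; nothing to fuse")
            return float(np.mean(usable))  # simple mean fusion over visible regions

        scores = {"left_eye": 0.81, "right_eye": 0.78, "nose": 0.64, "mouth": 0.12}
        print(fused_score(scores, occluded={"mouth"}))  # scarf case: mouth ignored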

    Unimodal Multi-Feature Fusion and one-dimensional Hidden Markov Models for Low-Resolution Face Recognition

    The objective of low-resolution face recognition is to identify faces in small or poor-quality images with varying pose, illumination, expression, etc. In this work, we propose a robust low-resolution face recognition technique based on one-dimensional Hidden Markov Models (1D-HMMs). Features of each facial image are extracted in three steps: first, both Gabor filter and Histogram of Oriented Gradients (HOG) features are calculated. Second, the dimensionality of these features is reduced using Linear Discriminant Analysis (LDA) to remove redundant information. Finally, the reduced features are combined using Canonical Correlation Analysis (CCA). Unlike existing HMM-based techniques, in which each state represents one facial region (eyes, nose, mouth, etc.), the proposed system employs 1D-HMMs without any prior knowledge about the localization of regions of interest in the facial image. Performance of the proposed method is measured on the AR database.
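
    A hedged sketch of the three feature steps, assuming scikit-image and scikit-learn: a small Gabor bank and HOG give two feature views, LDA reduces each, and CCA fuses them. The Gabor bank parameters are assumptions, and the final 1D-HMM classification stage (e.g. with a library such as hmmlearn) is omitted.

        import numpy as np
        from skimage.feature import hog
        from skimage.filters import gabor
        from sklearn.cross_decomposition import CCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def gabor_features(img):
            # Magnitude responses of a 4-orientation Gabor bank, flattened
            responses = []
            for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
                real, imag = gabor(img, frequency=0.3, theta=theta)
                responses.append(np.hypot(real, imag).ravel())
            return np.concatenate(responses)

        def fused_features(images, labels):
            g = np.array([gabor_features(im) for im in images])
            h = np.array([hog(im, pixels_per_cell=(8, 8)) for im in images])
            # LDA removes redundant dimensions (at most n_classes - 1 remain)
            g_red = LinearDiscriminantAnalysis().fit_transform(g, labels)
            h_red = LinearDiscriminantAnalysis().fit_transform(h, labels)
            # CCA projects both views into a shared, maximally correlated space
            cca = CCA(n_components=min(g_red.shape[1], h_red.shape[1]))
            g_c, h_c = cca.fit_transform(g_red, h_red)
            return np.hstack([g_c, h_c])  # fused features for the 1D-HMM stage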

    Automatic recognition of facial expressions

    Facial expression is a visible manifestation of the affective state, cognitive activity, intention, personality, and psychopathology of a person; it not only expresses our emotions but also provides important communicative cues during social interaction. Expression recognition can be embedded into a face recognition system to improve its robustness. In a real-time face recognition system where a series of images of an individual is captured, the facial expression recognition (FER) module picks the image most similar to a neutral expression for recognition, because a face recognition system is normally trained on neutral-expression images. Where only one image is available, the estimated expression can be used either to decide which classifier to choose or to add some kind of compensation. In human-computer interaction (HCI), expression is an input of great potential in terms of communicative cues; this is especially true in voice-activated control systems, where an FER module can markedly improve performance. Customers' facial expressions can also be collected by service providers as implicit user feedback to improve their service. Compared with a conventional questionnaire-based method, this should be more reliable and, furthermore, has virtually no cost. The main challenge for an FER system is to attain the highest possible classification rate for the recognition of six expressions (Anger, Disgust, Fear, Happy, Sad, and Surprise). The other challenges are illumination variation, rotation, and noise. In this thesis, several innovative methods based on image processing and pattern recognition theory have been devised and implemented. The main contributions in algorithms and advanced modelling techniques are summarized as follows. 1) A new feature extraction approach called HLAC-like (higher-order local autocorrelation-like) features is presented to detect and extract facial features from face images. 2) An innovative face feature extraction method based on orthogonal moments is introduced for images with noise and/or rotation; using this technique, the expression is recognized properly even from face images with high levels of noise and rotation. 3) A facial expression recognition system is designed based on combined regions. In this system, a method called hybrid face regions (HFR) extracts features from the components of the face (eyes, nose, and mouth), and the expression is then identified from these features. 4) A novel classification methodology is proposed based on the structural similarity algorithm for facial expression recognition scenarios. 5) A new methodology for expression recognition using colour facial images is presented, based on multi-linear image analysis: the colour images are unfolded into a two-dimensional (2-D) matrix using multi-linear algebra and then classified with a multi-linear discriminant analysis (LDA) classifier. Furthermore, the effect of colour on facial images of various resolutions is studied for the FER system. The addressed issues are challenging problems and are substantial for developing a facial expression recognition system.
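
    Contribution 4 lends itself to a short sketch: below, an expression label is assigned by comparing a probe image against labelled reference images with the structural similarity index (SSIM) from scikit-image. The nearest-reference decision rule is an assumption about how the structural similarity algorithm is applied; the thesis's exact scheme may differ.

        from skimage.metrics import structural_similarity as ssim

        def classify_by_ssim(probe, references):
            """probe: 2-D grayscale (uint8) image; references: list of
            (label, image) pairs with the same shape as the probe."""
            best_label, best_score = None, -1.0
            for label, ref in references:
                score = ssim(probe, ref)  # 1.0 means structurally identical
                if score > best_score:
                    best_label, best_score = label, score
            return best_label, best_score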

    Expression Recognition with Deep Features Extracted from Holistic and Part-based Models

    Facial expression recognition aims to accurately interpret facial muscle movements in affective states (emotions). Previous studies have proposed holistic analysis of the face, as well as the extraction of features pertaining only to specific facial regions, for expression recognition. While classically the latter has shown better performance, we here explore this in the context of deep learning. In particular, this work provides a performance comparison of holistic and part-based deep learning models for expression recognition. In addition, we showcase the effectiveness of skip connections, which allow a network to infer from both low- and high-level feature maps. Our results suggest that holistic models outperform part-based models in the absence of skip connections. Finally, based on our findings, we propose a data augmentation scheme, which we incorporate in a part-based model. The proposed multi-face multi-part (MFMP) model leverages the wide information from part-based data augmentation, in which we train the network using facial parts extracted from different face samples of the same expression class. Extensive experiments on publicly available datasets show a significant improvement in facial expression classification with the proposed MFMP framework.
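
    The skip connections discussed above can be illustrated with a generic residual-style block in PyTorch, whose output merges a low-level feature map with a higher-level one so that later layers can infer from both. This is a toy sketch, not the authors' architecture.

        import torch
        import torch.nn as nn

        class SkipBlock(nn.Module):
            def __init__(self, channels):
                super().__init__()
                self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
                self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
                self.act = nn.ReLU()

            def forward(self, x):
                low = x                                     # low-level feature map
                high = self.conv2(self.act(self.conv1(x)))  # higher-level features
                return self.act(high + low)                 # skip merges both levels

        x = torch.randn(1, 16, 64, 64)  # (batch, channels, height, width)
        print(SkipBlock(16)(x).shape)   # torch.Size([1, 16, 64, 64])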

    Human Centric Facial Expression Recognition

    Facial expression recognition (FER) is an area of active research, both in computer science and in behavioural science. Across these domains there is evidence to suggest that humans and machines find it easier to recognise certain emotions, for example happiness, than others. Recent behavioural studies have explored human perception of emotion further by evaluating the relative contribution of facial features to human sensitivity to emotion. Certain facial regions have been identified as holding more salient features for certain expressions of emotion, especially when the emotions are subtle in nature; for example, it is easier to detect fearful expressions when the eyes are expressive. Using this observation as a starting point for analysis, we examine how knowledge of facial feature saliency may be integrated into current approaches to automated FER. Specifically, we compare and evaluate the accuracy of ‘full-face’ versus upper and lower facial area convolutional neural network (CNN) modelling for emotion recognition in static images, and propose a human-centric CNN hierarchy which uses regional image inputs to leverage current understanding of how humans recognise emotions across the face. Evaluations on the CK+ dataset demonstrate that our hierarchy can enhance classification accuracy over individual CNN architectures, achieving overall true positive classification in 93.3% of cases.
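
    A hedged sketch of the regional idea: the face is split into upper and lower halves, each half is scored by its own model, and the class probabilities are averaged. The half-split and the simple averaging are assumptions; the paper's hierarchy may combine regional evidence differently.

        import numpy as np

        def split_regions(face):
            mid = face.shape[0] // 2
            return {"upper": face[:mid], "lower": face[mid:]}  # eyes/brows vs mouth/jaw

        def hierarchy_predict(face, models):
            """models: dict region name -> callable returning class probabilities."""
            probs = [models[name](region) for name, region in split_regions(face).items()]
            return int(np.argmax(np.mean(probs, axis=0)))  # fuse regional evidence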