    Automatic recognition of facial expressions

    Facial expression is a visible manifestation of a person's affective state, cognitive activity, intention, personality and psychopathology; it not only conveys emotion but also provides important communicative cues during social interaction. Expression recognition can be embedded into a face recognition system to improve its robustness. In a real-time face recognition system where a series of images of an individual is captured, a facial expression recognition (FER) module picks the image most similar to a neutral expression, because a face recognition system is normally trained on neutral-expression images. When only one image is available, the estimated expression can be used either to select an appropriate classifier or to apply some form of compensation. In human-computer interaction (HCI), expression is an input of great potential in terms of communicative cues, especially in voice-activated control systems, so an FER module can markedly improve the performance of such systems. Customers' facial expressions can also be collected by service providers as implicit user feedback to improve their service; compared with a conventional questionnaire-based method, this should be more reliable and has virtually no cost. The main challenge for an FER system is to attain the highest possible classification rate for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise); other challenges are illumination variation, rotation and noise. In this thesis, several innovative methods based on image processing and pattern recognition theory have been devised and implemented. The main contributions are summarized as follows. 1) A new feature extraction approach called HLAC-like (higher-order local autocorrelation-like) features is presented to detect and extract facial features from face images. 2) An innovative design is introduced for face feature extraction based on orthogonal moments, able to handle images with noise and/or rotation; using this technique, expressions in face images with high levels of noise and even rotation are recognized correctly. 3) A facial expression recognition system is designed based on combined face regions. In this system, a method called hybrid face regions (HFR) is presented: features are extracted from the components of the face (eyes, nose and mouth) and the expression is then identified from these features. 4) A novel classification methodology is proposed based on the structural similarity algorithm in facial expression recognition scenarios. 5) A new methodology for expression recognition from colour facial images is presented based on multi-linear image analysis: the colour images are unfolded into a two-dimensional (2-D) matrix using multi-linear algebra and then classified with a multi-linear discriminant analysis (LDA) classifier. Furthermore, the effect of colour on facial images of various resolutions is studied for the FER system. The addressed issues are challenging problems and are substantial for developing a facial expression recognition system.
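
    As an illustration of contribution 4, the sketch below shows one way a structural-similarity-based classifier can be set up: each test image is compared against a mean template per expression class and assigned to the class with the highest SSIM score. The template construction, image scaling and class labels are assumptions for illustration, not the thesis's exact procedure.

```python
# Hypothetical sketch: nearest-template expression classification with SSIM.
# Assumes grayscale face images, cropped/aligned, scaled to floats in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity as ssim

def build_templates(images, labels):
    """Average the training images of each expression class into one template."""
    return {lab: images[labels == lab].mean(axis=0) for lab in np.unique(labels)}

def classify(test_image, templates):
    """Assign the test image to the class whose template gives the highest SSIM."""
    scores = {lab: ssim(test_image, tmpl, data_range=1.0)
              for lab, tmpl in templates.items()}
    return max(scores, key=scores.get)
```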

    Time-Efficient Hybrid Approach for Facial Expression Recognition

    Facial expression recognition is an emerging research area for improving human-computer interaction. This research plays a significant role in social communication, commercial enterprise, law enforcement and other interactive applications. In this paper, we propose a time-efficient hybrid design for facial expression recognition that combines image pre-processing steps with different Convolutional Neural Network (CNN) structures to provide better accuracy and greatly reduced training time. We predict seven basic emotions of human faces: sadness, happiness, disgust, anger, fear, surprise and neutral. The model performs well on challenging cases where the expressed emotion could be confused with others that share similar facial characteristics, such as anger, disgust and sadness. Experiments were conducted across multiple databases and different facial orientations; to the best of our knowledge, the model achieved an accuracy of about 89.58% on the KDEF dataset, 100% on the JAFFE dataset and 71.975% on the combined (KDEF + JAFFE + SFEW) dataset across these different scenarios. Performance was evaluated with cross-validation to avoid bias towards a specific set of images from any one database.
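
    The abstract does not give the exact network layout, so the sketch below is only a minimal seven-class CNN of the kind described, written in Keras; the 48x48 grayscale input size, layer counts and filter sizes are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch: a small CNN for 7-class facial expression recognition.
# Input shape and layer choices are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def build_fer_cnn(input_shape=(48, 48, 1), num_classes=7):
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # sadness ... neutral
    ])

model = build_fer_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```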

    Facial emotion recognition using min-max similarity classifier

    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, automatic recognition of facial expressions using image template matching techniques suffers from the natural variability of facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition is still an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest-neighbour classifier that is capable of suppressing feature outliers. The results indicate an improvement of recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms the existing template matching methods.
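
    A minimal sketch of the kind of pipeline described is given below. It assumes "pixel normalization" means rescaling each image to [0, 1] and that the Min-Max metric is the elementwise min/max ratio; both are assumptions, since the abstract does not give the exact definitions.

```python
# Hypothetical sketch: pixel normalization + Min-Max nearest-neighbour classifier.
# The exact normalization and metric definitions are assumptions for illustration.
import numpy as np

def normalize(img):
    """Rescale an image to [0, 1] to remove intensity offset and scale."""
    img = img.astype(float)
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def min_max_similarity(a, b):
    """Elementwise min/max ratio: 1.0 for identical images, smaller otherwise."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def predict(test_img, train_imgs, train_labels):
    """Nearest-neighbour decision under the Min-Max similarity."""
    t = normalize(test_img)
    scores = [min_max_similarity(t, normalize(x)) for x in train_imgs]
    return train_labels[int(np.argmax(scores))]
```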

    3-D Face Analysis and Identification Based on Statistical Shape Modelling

    This paper presents an effective method of statistical shape representation for automatic face analysis and identification in 3-D. The method combines statistical shape modelling techniques with a non-rigid deformation matching scheme. This work is distinguished by three key contributions. The first is a new 3-D shape registration method using hierarchical landmark detection and a multilevel B-spline warping technique, which allows accurate dense correspondence search for statistical model construction. The second is the shape representation approach, based on the Laplacian Eigenmap, which provides a nonlinear submanifold that captures the underlying structure of the facial data. The third is a hybrid method for matching the statistical model to the test dataset, which controls the level of the model's deformation at different matching stages and so increases the chance of successful matching. The proposed method is tested on the public BU-3DFE database. Results indicate that it achieves very high verification rates in a series of tests, suggesting real-world practicality.
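
    The Laplacian Eigenmap stage can be approximated with an off-the-shelf spectral embedding, as in the sketch below; the neighbourhood size, number of components and placeholder data are assumptions, and this shows only the nonlinear shape-representation step, not the full registration and matching pipeline.

```python
# Hypothetical sketch: Laplacian-Eigenmap-style embedding of 3-D face shape vectors.
# scikit-learn's SpectralEmbedding implements Laplacian Eigenmaps; the parameter
# values and data below are illustrative assumptions.
import numpy as np
from sklearn.manifold import SpectralEmbedding

# shapes: one row per registered face, columns are concatenated (x, y, z) coordinates
shapes = np.random.rand(100, 3 * 500)   # placeholder: 100 faces, 500 landmarks each

embedding = SpectralEmbedding(n_components=10, affinity="nearest_neighbors", n_neighbors=8)
low_dim = embedding.fit_transform(shapes)   # (100, 10) submanifold coordinates
```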

    Review of Face Detection Systems Based Artificial Neural Networks Algorithms

    Face detection is one of the most relevant applications of image processing and biometric systems. Artificial neural networks (ANN) have been widely used in image processing and pattern recognition. However, there is a lack of literature surveys that give an overview of the studies and research related to the use of ANN in face detection. Therefore, this work presents a general review of face detection studies and systems based on different ANN approaches and algorithms. The strengths and limitations of these studies and systems are also discussed. Comment: 16 pages, 12 figures, 1 table, IJMA Journal

    Micro-expression Recognition using Spatiotemporal Texture Map and Motion Magnification

    Micro-expressions are short-lived, rapid facial expressions exhibited by individuals in high-stakes situations. Studying micro-expressions is important because they cannot be consciously controlled and hence offer a glimpse into what the individual is actually feeling and thinking, as opposed to what he/she is trying to portray. The spotting and recognition of micro-expressions has applications in criminal investigation, psychotherapy, education, etc. However, due to their short-lived and rapid nature, spotting, recognizing and classifying them is a major challenge. In this paper, we design a hybrid approach for spotting and recognizing micro-expressions by combining motion magnification using Eulerian Video Magnification with the Spatiotemporal Texture Map (STTM). The approach was validated on the spontaneous micro-expression dataset CASME II in comparison with the baseline. It achieved an accuracy of 80%, an increase of 5% over the existing baseline, using 10-fold cross-validation with Support Vector Machines (SVM) with a linear kernel.
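
    The classification stage described (a linear-kernel SVM evaluated with 10-fold cross-validation) can be sketched as below; the STTM feature matrix is assumed to be precomputed and the placeholder data, dimensions and variable names are illustrative, not taken from the paper.

```python
# Hypothetical sketch: linear-kernel SVM with 10-fold cross-validation for
# micro-expression classification. STTM features are assumed to be precomputed.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# sttm_features: one row of Spatiotemporal Texture Map descriptors per video clip
sttm_features = np.random.rand(200, 512)        # placeholder feature matrix
labels = np.random.randint(0, 5, size=200)      # placeholder emotion labels

clf = SVC(kernel="linear")
scores = cross_val_score(clf, sttm_features, labels, cv=10)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```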