
    Micro-expression Recognition using Spatiotemporal Texture Map and Motion Magnification

    Micro-expressions are short-lived, rapid facial expressions exhibited by individuals in high-stakes situations. Studying them is important because they cannot be modified by the individual and hence offer a glimpse into what the individual is actually feeling and thinking, as opposed to what he or she is trying to portray. The spotting and recognition of micro-expressions has applications in criminal investigation, psychotherapy, education, and other fields. However, because of their short-lived and rapid nature, spotting, recognizing, and classifying micro-expressions is a major challenge. In this paper, we design a hybrid approach for spotting and recognizing micro-expressions that combines motion magnification via Eulerian Video Magnification with the Spatiotemporal Texture Map (STTM). The approach was validated on the spontaneous micro-expression dataset CASME II and compared against the baseline. It achieved an accuracy of 80%, an increase of 5% over the existing baseline, using 10-fold cross validation with a linear-kernel Support Vector Machine (SVM).
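
    A minimal sketch of the pipeline stages named in this abstract, assuming clips arrive as equal-sized (T, H, W) float arrays. The two helper functions are crude stand-ins, not the authors' EVM or STTM implementations; only the classification step (linear-kernel SVM under 10-fold cross validation) follows the stated protocol.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def magnify_motion(frames, alpha=10.0):
        # Crude stand-in for Eulerian Video Magnification: amplify each frame's
        # deviation from the clip's temporal mean (the real method band-passes
        # a spatial pyramid before amplification).
        return frames + alpha * (frames - frames.mean(axis=0, keepdims=True))

    def spatiotemporal_features(frames):
        # Crude stand-in for the Spatiotemporal Texture Map: summarise per-pixel
        # temporal variation and flatten it into a fixed-length feature vector.
        return np.std(frames, axis=0).ravel()

    def evaluate(clips, labels):
        # clips: list of (T, H, W) arrays of equal spatial size, one per sample.
        X = np.stack([spatiotemporal_features(magnify_motion(c)) for c in clips])
        clf = SVC(kernel="linear")                     # linear kernel, as stated
        return cross_val_score(clf, X, labels, cv=10)  # 10-fold cross validation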

    Sparsity in Dynamics of Spontaneous Subtle Emotions: Analysis & Application

    Spontaneous subtle emotions are expressed through micro-expressions, which are tiny, sudden, and short-lived movements of facial muscles; this poses a great challenge for visual recognition. The abrupt but significant dynamics relevant to the recognition task are temporally sparse, while the rest, the irrelevant dynamics, are temporally redundant. In this work, we analyze and enforce sparsity constraints to learn significant temporal and spectral structures while eliminating the irrelevant facial dynamics of micro-expressions, which eases the challenge of visually recognizing spontaneous subtle emotions. The hypothesis is confirmed through experimental results of automatic spontaneous subtle emotion recognition at several sparsity levels on CASME II and SMIC, the only two publicly available spontaneous subtle emotion databases. The overall performance of automatic subtle emotion recognition is boosted when only the significant dynamics are preserved from the original sequences. Comment: IEEE Transactions on Affective Computing (2016).
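
    The toy illustration below assumes the simple proxy that "significant dynamics" are the frames with the largest frame-to-frame change; the paper enforces sparsity in learned temporal and spectral structures, which this sketch does not reproduce.

    import numpy as np

    def keep_significant_frames(frames, sparsity=0.2):
        # frames: (T, H, W) array. Score each transition by its mean absolute
        # difference from the previous frame, then retain only the top fraction.
        diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))   # (T-1,)
        k = max(1, int(sparsity * diffs.size))
        keep = np.sort(np.argsort(diffs)[-k:] + 1)                  # retained frame indices
        return frames[np.concatenate(([0], keep))]                  # always keep the onset frame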

    Effective recognition of facial micro-expressions with video motion magnification

    Facial expression recognition has been intensively studied for decades, notably by the psychology community and more recently the pattern recognition community. What is more challenging, and the subject of more recent research, is the problem of recognizing subtle emotions exhibited by so-called micro-expressions. Recognizing a micro-expression is substantially more challenging than conventional expression recognition because micro-expressions are exhibited for only a fraction of a second and involve minute spatial changes. To date, work in this field has been at a nascent stage, with only a few micro-expression databases and methods in existence. In this article, we propose a new micro-expression recognition approach based on the Eulerian motion magnification technique, which can reveal hidden information and accentuate the subtle changes in micro-expression motion. Our proposal was validated on the recently proposed CASME II dataset in comparison with baseline and state-of-the-art methods. We achieve a good recognition accuracy of up to 75.30% using the leave-one-out cross-validation evaluation protocol. Extensive experiments on the various factors at play further demonstrate the effectiveness of our proposed approach.
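
    A sketch of the leave-one-out evaluation protocol mentioned above, assuming a precomputed feature matrix X (one row per motion-magnified clip) and a label vector y; the linear SVM is a stand-in classifier, and the feature extraction itself is not shown.

    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    def loocv_accuracy(X, y):
        clf = SVC(kernel="linear")
        scores = cross_val_score(clf, X, y, cv=LeaveOneOut())  # one held-out sample per fold
        return scores.mean()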

    Facial Emotion Recognition Using Machine Learning

    Face detection has been studied for decades. Taking a step forward, human emotion, displayed by the face and felt by the brain, can be approximated when captured as video, electrical signals (EEG), or images. Human emotion detection is increasingly needed so that modern artificial intelligence systems can emulate and gauge reactions from the face. This can help in making informed decisions, whether about identifying intent, promoting offers, or flagging security-related threats. Recognizing emotions from images or video is a trivial task for the human eye but proves very challenging for machines, requiring many image-processing techniques for feature extraction. Several machine learning algorithms are suitable for this job. Any detection or recognition by machine learning requires training an algorithm and then testing it on a suitable dataset. This paper explores a couple of machine learning algorithms as well as feature extraction techniques that help in accurately identifying human emotion.
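
    A minimal, generic sketch of the train-then-test workflow described above, assuming equal-sized grayscale face crops are already available; HOG features paired with a linear SVM are one illustrative choice among the techniques such a study might cover, not necessarily the paper's configuration.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    def hog_features(images):
        # images: iterable of equal-sized 2-D grayscale face crops.
        return np.stack([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                         for img in images])

    def train_and_test(images, labels):
        # Train on 80% of the samples, report accuracy on the held-out 20%.
        X = hog_features(images)
        X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
        clf = SVC(kernel="linear").fit(X_tr, y_tr)
        return accuracy_score(y_te, clf.predict(X_te))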

    Distinguishing Posed and Spontaneous Smiles by Facial Dynamics

    The smile is one of the key elements in identifying the emotions and present state of mind of an individual. In this work, we propose a cluster of approaches to classify posed and spontaneous smiles using deep convolutional neural network (CNN) face features, local phase quantization (LPQ), dense optical flow, and histogram of oriented gradients (HOG). Eulerian Video Magnification (EVM) is used for micro-expression smile amplification, along with three normalization procedures, for distinguishing posed and spontaneous smiles. Although the deep CNN face model is trained on a large number of face images, HOG features outperform it on the overall smile classification task. Using EVM to amplify micro-expressions did not have a significant impact on classification accuracy, while normalizing the facial features did improve it. Unlike many manual or semi-automatic methodologies, our approach aims to automatically classify all smiles into either 'spontaneous' or 'posed' categories using support vector machines (SVMs). Experimental results on the large UvA-NEMO smile database are promising compared to other relevant methods. Comment: 16 pages, 8 figures, ACCV 2016, Second Workshop on Spontaneous Facial Behavior Analysis.
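
    A toy sketch of one of the feature families listed above: summarise dense optical flow over a smile clip and feed the result to an SVM. The Farneback parameters and the flow histogram are illustrative choices, not the paper's configuration.

    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def flow_histogram(frames, bins=16):
        # frames: list of grayscale uint8 face crops in temporal order.
        hist = np.zeros(bins)
        for prev, nxt in zip(frames[:-1], frames[1:]):
            flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Accumulate a magnitude-weighted histogram of flow directions.
            h, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
            hist += h
        return hist / (hist.sum() + 1e-8)

    def train_smile_classifier(clips, labels):
        # labels: 1 for spontaneous smiles, 0 for posed (illustrative encoding).
        X = np.stack([flow_histogram(c) for c in clips])
        return SVC(kernel="linear").fit(X, labels)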