
    Automatic Detection of Pain from Spontaneous Facial Expressions

    This paper presents a new approach for detecting pain in sequences of spontaneous facial expressions. The motivation for this work is to support mobile-based self-management of chronic pain with a virtual sensor that tracks patients' expressions in real-world settings. Operating under such constraints requires a resource-efficient approach for processing non-posed facial expressions from raw temporal data. In this work, the facial action units associated with pain are modeled as sets of distances among related facial landmarks. Using per-user standardized measurements of pain versus no-pain, changes in the extracted features that relate to pain are detected. The activated features in each frame are combined using an adapted form of the Prkachin and Solomon Pain Intensity (PSPI) scale to detect the presence of pain per frame. Pain-related features must be activated in N consecutive frames (a time window) to indicate the presence of pain in a session. The method was tested on 171 video sessions from 19 subjects in the McMaster pain dataset of spontaneous facial expressions. The results show higher precision than coverage in detecting sequences of pain: the algorithm achieves 94% precision (F-score = 0.82) against human-observed labels, 74% precision (F-score = 0.62) against automatically generated pain intensities, and 100% precision (F-score = 0.67) against self-reported pain intensities.
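
    As a rough illustration of the per-frame feature activation and the N-consecutive-frame rule described above, the following Python sketch uses hypothetical landmark pairs, thresholds and score weights; the paper's exact action-unit distances and PSPI adaptation are not reproduced here.

    import numpy as np

    # Hypothetical pairs of facial landmarks whose distances change with
    # pain-related action units (e.g. brow lowering, eye closing, mouth opening).
    PAIN_PAIRS = [(21, 22), (37, 41), (44, 46), (51, 57)]

    def distance_features(landmarks):
        """landmarks: (68, 2) array for one frame -> one distance per pair."""
        return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                         for i, j in PAIN_PAIRS])

    def frame_scores(frames, baseline, z_thresh=2.0):
        """Standardize each frame's distances against this user's no-pain
        baseline (an (m, n_pairs) array of calibration features) and sum
        the activated features into a PSPI-like per-frame score."""
        mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-8
        scores = []
        for lm in frames:
            z = np.abs(distance_features(lm) - mu) / sigma
            scores.append(int((z > z_thresh).sum()))  # count of activated features
        return np.array(scores)

    def session_has_pain(scores, n_window=5, min_score=2):
        """Report pain only if the score stays high for N consecutive
        frames, following the time-window rule in the abstract."""
        run = 0
        for s in scores:
            run = run + 1 if s >= min_score else 0
            if run >= n_window:
                return True
        return False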

    Facial Emotion Recognition Based on Empirical Mode Decomposition and Discrete Wavelet Transform Analysis

    This paper presents a new framework that combines empirical mode decomposition (EMD) and the discrete wavelet transform (DWT), with an application to facial emotion recognition. EMD is a multi-resolution technique that decomposes a complicated signal into a small set of intrinsic mode functions (IMFs) through a sifting process. In this framework, EMD is applied to facial images to extract informative features by decomposing each image into a set of IMFs and a residue. The selected IMFs are then subjected to the DWT, which decomposes the instantaneous frequency of the IMFs into four sub-bands. The approximation coefficients (cA1) at the first decomposition level are extracted and used as the significant features for recognizing facial emotion. Since the number of coefficients is large, principal component analysis (PCA) is applied to the extracted features. The k-nearest neighbor classifier is adopted to classify seven facial emotions (anger, disgust, fear, happiness, neutral, sadness and surprise). The JAFFE database is employed to evaluate the effectiveness of the proposed method, which achieves a recognition rate of 80.28%.
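
    A minimal Python sketch of the EMD-then-DWT pipeline, assuming the PyEMD (EMD-signal), PyWavelets and scikit-learn packages; flattening the image for EMD, keeping the first IMF and choosing the 'db1' wavelet are illustrative assumptions, not the paper's exact configuration.

    import numpy as np
    import pywt
    from PyEMD import EMD
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def emd_dwt_features(image):
        """EMD on the flattened image, one IMF reshaped back to 2-D, then
        the level-1 2-D DWT; keep the approximation sub-band (cA1)."""
        signal = image.astype(float).ravel()
        imfs = EMD().emd(signal)                 # IMFs plus residue
        imf_img = imfs[0].reshape(image.shape)   # illustrative: first IMF
        cA1, (cH1, cV1, cD1) = pywt.dwt2(imf_img, 'db1')  # four sub-bands
        return cA1.ravel()

    def train(X_imgs, y):
        """X_imgs: grayscale face images (e.g. JAFFE); y: 7 emotion labels."""
        X = np.array([emd_dwt_features(im) for im in X_imgs])
        clf = make_pipeline(PCA(n_components=30),  # needs >= 30 samples
                            KNeighborsClassifier(n_neighbors=3))
        return clf.fit(X, y)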

    Adaptive jukebox : a context-sensitive playlist generator

    Many users now own large collections of MP3 music files. Manually organising such collections into playlists is a tedious task, while random playlist generation may not always provide the user with an enjoyable experience. Automatic playlist generation is a relatively new field in computer science that addresses this issue, developing algorithms that can automatically create playlists to suit the user's preferences. This paper presents our work in this field, where we argue that playlist generators should be more context-sensitive. We also present Adaptive Jukebox, a context-sensitive, zero-input playlist generator that recommends and plays songs from the user's personal MP3 collection. Initial experiments suggest that our system is more accurate than both a random generator and a system that does not take context into account.
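
    A toy sketch of what a zero-input, context-sensitive song picker could look like; the coarse time-of-day context and the skip-based feedback rule are assumptions for illustration, not Adaptive Jukebox's actual design.

    import random
    from collections import defaultdict
    from datetime import datetime

    class ContextJukebox:
        def __init__(self, library):
            self.library = library              # list of song identifiers
            self.scores = defaultdict(float)    # (context, song) -> score

        def _context(self):
            now = datetime.now()
            # Coarse context: quarter of the day, and weekday vs. weekend.
            return (now.hour // 6, now.weekday() >= 5)

        def next_song(self, epsilon=0.1):
            ctx = self._context()
            if random.random() < epsilon:       # keep exploring the library
                return random.choice(self.library)
            return max(self.library, key=lambda s: self.scores[(ctx, s)])

        def feedback(self, song, skipped):
            # Zero input: a skip is negative feedback, a full listen is
            # positive, credited to the context in which it happened.
            self.scores[(self._context(), song)] += -1.0 if skipped else 1.0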

    Recognising facial expressions in video sequences

    We introduce a system that processes a sequence of images of a front-facing human face and recognises a set of facial expressions. We use an efficient appearance-based face tracker to locate the face in the image sequence and estimate the deformation of its non-rigid components. The tracker works in real time; it is robust to strong illumination changes and separates appearance changes caused by illumination from those due to face deformation. We adopt a model-based approach for facial expression recognition. In our model, an image of a face is represented by a point in a deformation space. The variability of the classes of images associated with each facial expression is represented by a set of samples that model a low-dimensional manifold in the space of deformations. We introduce a probabilistic procedure based on a nearest-neighbour approach that combines the information provided by the incoming image sequence with the prior information stored in the expression manifold to compute a posterior probability for each facial expression. Our experiments show that the system is able to work in an unconstrained environment with strong changes in illumination and face location, achieving an 89% recognition rate on a set of 333 sequences from the Cohn-Kanade database.
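
    One plausible reading of the nearest-neighbour posterior computation, sketched in Python with a Gaussian kernel on the distance to each expression's nearest stored deformation sample; the kernel choice and the temporal fusion by summed log-posteriors are assumptions, and the paper's exact probabilistic procedure may differ.

    import numpy as np

    def expression_posterior(b, manifold, sigma=1.0):
        """b: deformation vector from the tracker for the current frame.
        manifold: dict mapping expression label -> (n_i, d) array of
        stored deformation samples. Returns a posterior over labels."""
        likelihoods = {}
        for label, samples in manifold.items():
            d2 = np.min(np.sum((samples - b) ** 2, axis=1))  # nearest sample
            likelihoods[label] = np.exp(-d2 / (2 * sigma ** 2))
        z = sum(likelihoods.values()) + 1e-300
        return {label: l / z for label, l in likelihoods.items()}

    def classify_sequence(deformations, manifold):
        """Fuse per-frame evidence by accumulating log-posteriors over
        the incoming sequence (a simple stand-in for temporal fusion)."""
        labels = sorted(manifold)
        logp = np.zeros(len(labels))
        for b in deformations:
            p = expression_posterior(np.asarray(b), manifold)
            logp += np.array([np.log(p[k] + 1e-12) for k in labels])
        return labels[int(np.argmax(logp))]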

    Discriminant Subspace Analysis for Uncertain Situation in Facial Recognition

    Facial analysis and recognition have received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. They have a large number of applications, such as security, communication, and entertainment. Although a great deal of effort has been devoted to automated face recognition systems, the problem remains challenging and uncertain. This is because human facial appearance exhibits potentially very large intra-subject variations in head pose, illumination, facial expression, occlusion due to other objects or accessories, facial hair, and aging. These misleading variations may cause classifiers to degrade in generalization performance.
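
    For concreteness, here is a minimal Fisherface-style discriminant subspace pipeline in scikit-learn: PCA first to control dimensionality, then linear discriminant analysis to find a subspace that separates subjects despite intra-subject variation. This illustrates the general technique named in the title under assumed component counts, not this paper's specific method.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def fisherface_classifier(X, y, n_pca=100):
        """X: (n_samples, n_pixels) flattened face images; y: subject ids."""
        clf = make_pipeline(
            PCA(n_components=min(n_pca, X.shape[0] - 1)),  # avoid singular scatter
            LinearDiscriminantAnalysis(),                  # discriminant subspace
            KNeighborsClassifier(n_neighbors=1),
        )
        return clf.fit(X, y)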

    Feed Forward Neural Network – Facial Expression Recognition Using 2D Image Texture

    Facial Expression Recognition (FER) is a very active field of study spanning computer vision, human emotion analysis, pattern recognition and AI. FER has received extensive attention because it can be employed in human-computer interaction (HCI), human emotion analysis, interactive video, and image indexing and retrieval. Recognising human facial expressions is one of the most powerful and difficult tasks in social communication: facial expressions are, in general, natural and direct means by which human beings communicate emotions and intentions. GWT is applied as a preprocessing stage. For the classification of facial expressions, this study employs the well-known feed-forward propagation algorithm to create and train a neural network.
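
    A hedged sketch of such a pipeline, assuming GWT here denotes the Gabor wavelet transform (a common choice for 2D texture features in FER work) and using OpenCV and scikit-learn; the filter-bank parameters, pooling and network size are illustrative.

    import numpy as np
    import cv2
    from sklearn.neural_network import MLPClassifier

    def gabor_features(image, ksize=15):
        """Convolve a grayscale face image with a small Gabor filter bank
        (4 orientations x 2 wavelengths) and pool the response maps."""
        feats = []
        for theta in np.arange(0, np.pi, np.pi / 4):
            for lambd in (8.0, 12.0):
                kern = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5)
                resp = cv2.filter2D(image.astype(np.float32), cv2.CV_32F, kern)
                feats.extend([resp.mean(), resp.std()])  # coarse pooling
        return np.array(feats)

    def train(X_imgs, y):
        """X_imgs: grayscale face images; y: expression labels."""
        X = np.array([gabor_features(im) for im in X_imgs])
        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
        return net.fit(X, y)  # trained by back-propagation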

    Facial Expression Recognition Based on Radon and Discrete Wavelet Transform using Support Vector Machines

    Extracting facial features remains a difficult task because of the unpredictability of facial features, largely due to variations in pixel intensities and subtle changes in the features themselves. The Radon transform has rotational and translational properties that preserve pixel intensity variations, and it can also be used to derive directional features. This paper therefore presents a new framework for facial expression recognition based on the Radon and wavelet transforms, using a Support Vector Machines classifier to recognize seven facial emotions. Firstly, the pre-processed facial images are projected into Radon space via the Radon transform at specified angles. The resulting Radon space, or sinogram, representing the facial emotions is then subjected to the wavelet transform. In this framework, the Radon space is decomposed into four sub-bands at each level of decomposition. The approximation coefficient sub-bands are independently extracted and used as intrinsic features to recognize the facial emotion. To reduce the data dimensionality, principal component analysis (PCA) is applied to the extracted features. A Support Vector Machines (SVM) classifier is then adopted to classify the seven facial emotions (anger, disgust, fear, happiness, neutral, sadness and surprise). The JAFFE database is employed to evaluate the effectiveness of the proposed method. Experimental results show that the proposed method achieves 93.89% accuracy.
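
    A minimal sketch of the Radon -> DWT -> PCA -> SVM chain with scikit-image, PyWavelets and scikit-learn; the projection angles, 'db1' wavelet and component count are illustrative choices, not the paper's reported settings.

    import numpy as np
    import pywt
    from skimage.transform import radon
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def radon_dwt_features(image, angles=np.arange(0.0, 180.0, 10.0)):
        """Project the face image into Radon space (a sinogram), then keep
        the level-1 2-D DWT approximation sub-band as the feature vector."""
        sinogram = radon(image.astype(float), theta=angles, circle=False)
        cA1, (cH1, cV1, cD1) = pywt.dwt2(sinogram, 'db1')  # four sub-bands
        return cA1.ravel()

    def train(X_imgs, y):
        """X_imgs: pre-processed face images (e.g. JAFFE); y: 7 emotions."""
        X = np.array([radon_dwt_features(im) for im in X_imgs])
        clf = make_pipeline(PCA(n_components=30), SVC(kernel='rbf'))
        return clf.fit(X, y)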