
    Facial Expression Recognition Using Diagonal Crisscross Local Binary Pattern

    Facial expression analysis is a noteworthy and challenging problem in the fields of computer vision, human-computer interaction and image analysis. For facial expression recognition (FER), it is difficult to obtain an effective facial description from the original face images. The Local Binary Pattern (LBP), which captures facial attributes locally from the images, is widely used for FER, but the conventional LBP has limitations. To overcome them, a novel approach for FER based on the Diagonal Crisscross Local Binary Pattern (DCLBP) is proposed. It rests on the idea that, unlike conventional approaches, pixel variations along the diagonal as well as the vertical and horizontal (crisscross) directions of the neighborhood should be taken as image features. The chi-square distance is used to classify the expressions. To raise the recognition rate and reduce classification time, a weighted mask labels expressive facial components such as the eyebrows, mouth and eyes with larger weights than the rest of the face. Comparative experiments on the JAFFE and CK databases show that the proposed approach achieves a better recognition rate than the other approaches.
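    The abstract does not reproduce the DCLBP encoding itself, but the general recipe it describes (thresholding a pixel's crisscross and diagonal neighbors against the center, building a code histogram as the face descriptor, and comparing histograms with the chi-square distance) can be sketched in Python as follows. The neighborhood layout, bit order and function names are illustrative assumptions, not the authors' implementation:

        import numpy as np

        def crisscross_lbp(img):
            """Encode each interior pixel against its crisscross (N, S, W, E)
            and diagonal (NW, NE, SW, SE) neighbours. Sketch only; the
            published DCLBP encoding may group or weight the bits differently."""
            img = img.astype(np.int32)
            c = img[1:-1, 1:-1]                              # centre pixels
            cross = [img[:-2, 1:-1], img[2:, 1:-1],          # up, down
                     img[1:-1, :-2], img[1:-1, 2:]]          # left, right
            diag = [img[:-2, :-2], img[:-2, 2:],             # NW, NE
                    img[2:, :-2], img[2:, 2:]]               # SW, SE
            code = np.zeros_like(c)
            for bit, nb in enumerate(cross + diag):
                code |= (nb >= c).astype(np.int32) << bit    # threshold at centre
            return code

        def chi_square(h1, h2, eps=1e-10):
            """Chi-square distance between two normalised code histograms."""
            return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

        # usage: the histogram of codes serves as the face descriptor
        face = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
        hist, _ = np.histogram(crisscross_lbp(face), bins=256,
                               range=(0, 256), density=True)

    A weighted mask of the kind the abstract describes would then scale per-region chi-square terms, so that eyebrow, eye and mouth blocks contribute more to the distance than cheek or forehead blocks.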

    Multiple Classifier Systems for the Classification of Audio-Visual Emotional States

    Research activities in the field of human-computer interaction have increasingly addressed the integration of some form of emotional intelligence. Human emotions are expressed through different modalities such as speech, facial expressions, and hand or body gestures, so the classification of human emotions should be treated as a multimodal pattern recognition problem. The aim of this paper is to investigate multiple classifier systems utilizing audio and visual features to classify human emotional states. To this end, a variety of features has been derived: from the audio signal, the fundamental frequency, LPC and MFCC coefficients, and RASTA-PLP features; in addition, two types of visual features have been computed, namely form and motion features of intermediate complexity. The numerical evaluation has been performed on the four emotional labels Arousal, Expectancy, Power and Valence as defined in the AVEC data set. As classifier architectures, multiple classifier systems are applied; these have proven to be accurate and robust against missing and noisy data.
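    The abstract does not state the fusion rule, so the following is only a minimal decision-level fusion sketch in Python: one base classifier per modality, with class posteriors averaged across modalities. The toy feature matrices and the choice of logistic regression and an SVM are assumptions for illustration:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        # stand-ins for per-utterance feature vectors (in practice: statistics of
        # F0, LPC/MFCC and RASTA-PLP for audio; form and motion features for video)
        rng = np.random.default_rng(0)
        X_audio = rng.normal(size=(200, 20))
        X_video = rng.normal(size=(200, 30))
        y = rng.integers(0, 2, 200)          # e.g. a binarised Arousal label

        clf_a = LogisticRegression().fit(X_audio, y)     # audio expert
        clf_v = SVC(probability=True).fit(X_video, y)    # video expert

        # decision-level fusion: average the class posteriors of the experts
        posterior = (clf_a.predict_proba(X_audio)
                     + clf_v.predict_proba(X_video)) / 2
        fused_prediction = posterior.argmax(axis=1)

    Averaging posteriors degrades gracefully when one modality is noisy or absent, which is consistent with the robustness claim in the abstract.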

    Dynamic Facial Emotion Recognition Oriented to HCI Applications

    As part of a multimodal animated interface previously presented in [38], this paper describes a method for dynamic recognition of displayed facial emotions on low-resolution streaming images. First, Action Units of the Facial Action Coding System are detected using Active Shape Models and Gabor filters. Normalized outputs of the Action Unit recognition step are then used as inputs to a neural network that is based on a real cognitive systems architecture and consists of a habituation network plus a competitive network. Both the competitive and the habituation layers use differential equations, thereby taking into account the dynamic information of facial expressions through time. Experimental results on live video sequences and on the Cohn-Kanade face database show that the proposed method provides high recognition hit rates. Funding: Junta de Castilla y León (Programa de apoyo a proyectos de investigación, Ref. VA036U14 and Ref. VA013A12-2); Ministerio de Economía, Industria y Competitividad (Grant DPI2014-56500-R).
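    The abstract specifies differential-equation dynamics for the habituation and competitive layers without giving the equations, so the sketch below substitutes a standard habituation model, tau * dy/dt = (1 - y) - lam * y * x, integrated with Euler steps, plus a trivial winner-take-all stage in place of the competitive network. All parameter values and both functions are illustrative assumptions, not the paper's equations:

        import numpy as np

        def habituation_step(y, x, dt=0.05, tau=0.5, lam=2.0):
            """One Euler step of tau * dy/dt = (1 - y) - lam * y * x, where x is a
            normalised Action Unit input and y the habituating efficacy in [0, 1]."""
            return y + (dt / tau) * ((1.0 - y) - lam * y * x)

        def competitive_winner(activations):
            """Winner-take-all stand-in for the competitive expression layer."""
            return int(np.argmax(activations))

        # a sustained AU pattern drives the efficacy down over time (habituation)
        y, x = 1.0, 0.8
        for _ in range(100):
            y = habituation_step(y, x)
        print(round(y, 3))   # well below 1.0 after repeated stimulation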

    Spontaneous Facial Behavior Computing in Human Machine Interaction with Applications in Autism Treatment

    Digital devices and computing machines such as computers, hand-held devices and robots are becoming an important part of our daily life. To build affect-aware intelligent Human-Machine Interaction (HMI) systems, scientists and engineers have aimed to design interfaces which can emulate face-to-face communication. Such HMI systems are capable of detecting and responding to users' emotions and affective states. One of the main challenges in producing such an intelligent system is to design a machine which can automatically compute spontaneous human behavior in real-life settings. Since human facial behavior contains important non-verbal cues, this dissertation studies facial actions and behaviors in HMI systems. The two main objectives of this dissertation are: 1) capturing, annotating and computing spontaneous facial expressions in a Human-Computer Interaction (HCI) system, and releasing a database that allows researchers to study the dynamics of facial muscle movements in both posed and spontaneous data; 2) developing and deploying a robot-based intervention protocol for autism therapeutic applications, and modeling the facial behaviors of children with high-functioning autism in a real-world Human-Robot Interaction (HRI) system. Because of the lack of data for analyzing the dynamics of spontaneous facial expressions, my colleagues and I introduced and released a novel database called Denver Intensity of Spontaneous Facial Actions (DISFA). DISFA describes facial expressions using the Facial Action Coding System (FACS), a gold-standard technique which annotates facial muscle movements in terms of a set of defined Action Units (AUs). This dissertation also introduces an automated system for recognizing DISFA's facial expressions and the dynamics of AUs in a single image or a sequence of facial images. Results illustrate that our automated system is capable of computing AU dynamics with high accuracy (overall reliability ICC = 0.77). In addition, this dissertation investigates and computes the dynamics and temporal patterns of both spontaneous and posed facial actions, which can be used to automatically infer the meaning of facial expressions. Another objective of this dissertation is to analyze and compute the facial behaviors (i.e. eye gaze and head orientation) of individuals in a real-world HRI system. Because children with Autism Spectrum Disorder (ASD) show interest in technology, we designed and conducted a set of robot-based games to study and foster the socio-behavioral responses of children diagnosed with high-functioning ASD. Computing gaze direction and head orientation patterns illustrates how individuals with ASD regulate their facial behaviors differently (compared to typically developing children) when interacting with a robot. In addition, studying the behavioral responses of participants during different phases of this study (i.e. baseline, intervention and follow-up) reveals that, overall, a robot-based therapy setting can be a viable approach for helping individuals with autism.
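    The reported reliability is an intra-class correlation between manual and automatic AU intensity codes. As a concrete reference point, a common single-rater consistency form, ICC(3,1) from Shrout and Fleiss, can be computed as below; whether the dissertation uses exactly this variant is an assumption:

        import numpy as np

        def icc_3_1(ratings):
            """ICC(3,1): two-way mixed model, single rater, consistency.
            ratings has shape (n_targets, k_raters), e.g. frames rated by a
            manual FACS coder and by the automatic system."""
            n, k = ratings.shape
            mean_t = ratings.mean(axis=1, keepdims=True)     # per-target means
            mean_r = ratings.mean(axis=0, keepdims=True)     # per-rater means
            grand = ratings.mean()
            bms = k * ((mean_t - grand) ** 2).sum() / (n - 1)          # targets
            ems = (((ratings - mean_t - mean_r + grand) ** 2).sum()
                   / ((n - 1) * (k - 1)))                              # residual
            return (bms - ems) / (bms + (k - 1) * ems)

        manual = np.array([0, 1, 3, 4, 2, 0, 5, 1], dtype=float)  # 0-5 intensities
        auto   = np.array([0, 1, 2, 4, 3, 1, 5, 1], dtype=float)
        print(round(icc_3_1(np.column_stack([manual, auto])), 2))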