19 research outputs found

    Facial expression (mood) recognition from facial images using committee neural networks

    Background: Facial expressions are important in facilitating human communication and interactions. They are also used as an important tool in behavioural studies and in medical rehabilitation. Facial-image-based mood detection techniques may provide a fast and practical approach for non-invasive mood detection. The purpose of the present study was to develop an intelligent system for facial-image-based expression classification using committee neural networks. Methods: Several facial parameters were extracted from a facial image and were used to train several generalized and specialized neural networks. Based on initial testing, the best-performing generalized and specialized neural networks were recruited into decision-making committees, which formed an integrated committee neural network system. The integrated system was then evaluated using data obtained from subjects not used in training or in initial testing. Results and conclusion: The system identified the correct facial expression in 255 of the 282 images (90.43% of the cases) from 62 subjects not used in training or in initial testing. Committee neural networks offer a potential tool for image-based mood detection.
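    The abstract does not specify the facial parameters, network architectures, or recruitment criteria, so the following is only a minimal sketch of the committee idea: several independently trained networks vote on the expression label and the majority decision is taken as the output. All function names, sizes, and settings below are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a committee-of-networks vote, assuming facial parameters
# have already been extracted into a NumPy feature matrix X with
# integer-encoded expression labels y. Illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_committee(X, y, n_members=5, seed=0):
    """Train several MLPs on bootstrap resamples and keep them all as the committee."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))          # bootstrap resample
        net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                            random_state=int(rng.integers(10**6)))
        members.append(net.fit(X[idx], y[idx]))
    return members

def committee_predict(members, X):
    """Majority vote across committee members (assumes integer class labels)."""
    votes = np.stack([m.predict(X) for m in members])       # (n_members, n_samples)
    return np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), 0, votes.astype(int))
```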

    Emotion Profiling: Ingredient for Rule based Emotion Recognition Engine

    Emotions are considered a reflection of human thinking and decision-making, increasing performance by producing intelligent outcomes. It is therefore a challenging task to embed emotional intelligence in machines so that they can respond appropriately. However, present human-computer interfaces still don't fully utilize emotion feedback to create a more natural environment, because the performance of emotion recognition is still not very robust or reliable and remains far from real-life experience. In this paper, we present an attempt at addressing this aspect and identifying the major challenges in the process. We introduce the concept of an 'emotion profile' to evaluate an individual feature, since each feature, irrespective of modality, has a different capability for differentiating among the various subsets of emotions. To capture the discrimination across target emotions with respect to each feature, we propose a framework for emotion recognition built around if-then rules, using certainty factors to represent the uncertainty and unreliability of individual features. This technique appears to be simple and effective for this kind of problem.
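    A minimal sketch of the rule-plus-certainty-factor idea follows, assuming per-feature emotion profiles have already been turned into rules of the form (feature, predicate, emotion, certainty factor). The rules, thresholds, and feature names are hypothetical; the combination rule shown is the classic MYCIN-style update, which the paper may or may not use.

```python
# Toy rule-based emotion scoring with certainty factors (CFs).
def combine_cf(cf1, cf2):
    """MYCIN-style combination of two positive certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

def score_emotions(features, rules):
    """Fire every matching rule and accumulate a CF per emotion."""
    scores = {}
    for feature, predicate, emotion, cf in rules:
        if feature in features and predicate(features[feature]):
            scores[emotion] = combine_cf(scores.get(emotion, 0.0), cf)
    return max(scores, key=scores.get) if scores else None

# Hypothetical rules: a raised mouth corner suggests happiness, lowered brows anger, etc.
rules = [
    ("mouth_corner_lift", lambda v: v > 0.4, "happy", 0.8),
    ("brow_lower",        lambda v: v > 0.5, "angry", 0.6),
    ("brow_lower",        lambda v: v > 0.5, "sad",   0.3),
]
print(score_emotions({"mouth_corner_lift": 0.55, "brow_lower": 0.1}, rules))
```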

    Recognition of prototypical facial expressions using ICA

    This paper presents a methodology for recognizing prototypical facial expressions, i.e. those associated with universal emotions. The methodology comprises three stages: face segmentation using Haar filters and cascade classifiers, feature extraction based on independent component analysis (ICA), and facial expression classification using a nearest-neighbour (KNN) classifier. Four emotions are recognized: sadness, happiness, fear, and anger, plus neutral faces. The methodology was validated on image sequences from the FEEDTUM database, achieving an average accuracy of 98.72% for five-class recognition.
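    A minimal sketch of the described ICA-plus-nearest-neighbour pipeline is shown below, assuming faces have already been detected and cropped (e.g. with an OpenCV Haar cascade, as in the segmentation stage) and flattened into row vectors. The component count and neighbour count are illustrative, not the paper's settings.

```python
# ICA feature extraction followed by a nearest-neighbour classifier.
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_expression_classifier(n_components=40, k=1):
    return make_pipeline(
        FastICA(n_components=n_components, max_iter=1000, random_state=0),
        KNeighborsClassifier(n_neighbors=k),   # nearest-neighbour decision
    )

# clf = build_expression_classifier()
# clf.fit(X_train, y_train)     # X_train: flattened face crops, y_train: 5 classes
# accuracy = clf.score(X_test, y_test)
```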

    Discriminant Subspace Analysis for Uncertain Situation in Facial Recognition

    Facial analysis and recognition have received substantial attention from researchers in the biometrics, pattern recognition, and computer vision communities. They have a large number of applications, such as security, communication, and entertainment. Although a great deal of effort has been devoted to automated face recognition systems, the task still remains a challenging uncertainty problem. This is because human facial appearance can exhibit very large intra-subject variations in head pose, illumination, facial expression, occlusion due to other objects or accessories, facial hair, and aging. These misleading variations may cause classifiers to degrade in generalization performance.
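    The abstract states the problem rather than a specific algorithm, so the sketch below only illustrates a common discriminant-subspace baseline (PCA followed by linear discriminant analysis, "Fisherfaces"-style) for face recognition under intra-subject variation; it should not be read as the thesis method, and all dimensions are assumptions.

```python
# A common discriminant-subspace baseline: PCA to suppress noisy directions,
# LDA to maximise between-class separation, nearest neighbour for matching.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def build_subspace_recognizer(n_pca=100):
    return make_pipeline(
        PCA(n_components=n_pca, whiten=True),   # keep the dominant appearance modes
        LinearDiscriminantAnalysis(),           # project to a class-discriminant subspace
        KNeighborsClassifier(n_neighbors=1),    # match identities in that subspace
    )

# recognizer = build_subspace_recognizer().fit(X_train, identities_train)
```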

    Facial Expression Recognition System

    This thesis addresses the problem of facial expression recognition in the field of computer vision. First, the psychological background of the problem is presented. Then, the idea of a facial expression recognition system (FERS) is outlined and the requirements of such a system are specified. The FER system consists of three stages: face detection, feature extraction and expression recognition. Methods proposed in the literature are reviewed for each stage of the system. Finally, the design and implementation of my system are explained. The face detection algorithm used in the system is based on the work of Viola and Jones [13]. The expressions are described by appearance features obtained from texture encoded with Local Binary Patterns [32]. A Support Vector Machine with an RBF kernel function is used for classification. The databases used are the Facial Expressions and Emotion Database [34], which contains spontaneous emotions, and the Cohn-Kanade Database [35], which contains posed emotions. The system was trained on the two databases separately and achieves an accuracy of 71% for spontaneous emotion recognition and 77% for posed emotion recognition.
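    A minimal sketch of the three-stage pipeline described above (Haar-cascade face detection, LBP texture features, RBF-kernel SVM) follows. The face size, LBP histogram settings, and SVM hyperparameters are illustrative assumptions rather than the thesis configuration.

```python
# Haar-cascade detection -> uniform LBP histogram -> RBF SVM classification.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lbp_histogram(gray_face, P=8, R=1):
    """Normalised uniform-LBP histogram of a detected, resized face region."""
    face = cv2.resize(gray_face, (96, 96))
    codes = local_binary_pattern(face, P, R, method="uniform")   # values 0 .. P+1
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def detect_and_describe(gray_image):
    """Return one LBP descriptor per detected face."""
    faces = face_cascade.detectMultiScale(gray_image, 1.3, 5)
    return [lbp_histogram(gray_image[y:y + h, x:x + w]) for (x, y, w, h) in faces]

# clf = SVC(kernel="rbf", C=10, gamma="scale").fit(train_histograms, train_labels)
```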

    General and Interval Type-2 Fuzzy Face-Space Approach to Emotion Recognition

    Facial expressions of a person representing similar emotion are not always unique. Naturally, the facial features of a subject taken from different instances of the same emotion have wide variations. In the presence of two or more facial features, the joint variation of the attributes makes the emotion recognition problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, which has been addressed here in two steps using type-2 fuzzy sets. First, a type-2 fuzzy face space is constructed with the background knowledge of facial features of different subjects for different emotions. Second, the emotion of an unknown facial expression is determined based on the consensus of the measured facial features with the fuzzy face space. Both interval and general type-2 fuzzy sets (GT2FS) have been used separately to model the fuzzy face space. The interval type-2 fuzzy set (IT2FS) involves primary membership functions for m facial features obtained from n subjects, each having l instances of facial expressions for a given emotion. The GT2FS, in addition to employing the primary membership functions mentioned above, also involves secondary memberships for each primary membership curve, obtained here by formulating and solving an optimization problem. The optimization problem attempts to minimize the difference between two decoded signals: the first being the type-1 defuzzification of the average primary membership functions obtained from the n subjects, and the second being the type-2 defuzzified signal for a given primary membership function with the secondary memberships as unknowns. The uncertainty management policy adopted using GT2FS has resulted in a classification accuracy of 98.333%, in comparison to 91.667% obtained by its interval type-2 counterpart. A small improvement (approximately 2.5%) in classification accuracy by IT2FS has been attained by pre-processing measurements using the well-known interval approach.
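    The following is a toy illustration of an interval type-2 "face space": for each emotion, per-instance Gaussian memberships of a measured feature are enveloped into lower and upper bounds, and an unknown expression is scored by the midpoint of its membership interval summed over features. This is only a hedged sketch of the interval idea, not the paper's construction, secondary-membership optimization, or defuzzification.

```python
# Toy interval type-2 scoring over facial-feature measurements (NumPy only).
import numpy as np

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def build_it2_face_space(samples):
    """samples[emotion] -> array of shape (n_instances, n_features)."""
    space = {}
    for emotion, data in samples.items():
        sigmas = data.std(axis=0) + 1e-6          # per-feature spread across instances
        space[emotion] = (data, sigmas)
    return space

def classify(measurement, space):
    """measurement: 1-D array of n_features for an unknown expression."""
    scores = {}
    for emotion, (data, sigmas) in space.items():
        # Membership under each instance's Gaussian; the min/max envelope across
        # instances gives the lower/upper bounds of the membership interval.
        mu = gaussian(measurement[None, :], data, sigmas[None, :])
        lower, upper = mu.min(axis=0), mu.max(axis=0)
        scores[emotion] = ((lower + upper) / 2).sum()   # interval midpoint per feature
    return max(scores, key=scores.get)
```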

    Biometric Classification with Factor Analysis

    This research presents a study of biometric classification using Factor Analysis (FA). As a multivariate statistical tool, factor analysis is useful for understanding the underlying structure in a dataset. Moreover, in addition to achieving an economy of variables, the "factors", or hypothetical constructs, can provide an alternate yet succinct representation of the data. It is a method of determining, from an observable set of variables, a basic set of components that are common to all the observations. In this study, the loadings (or weights) on the factors are used to classify the data in this alternate representation. In particular, we examine and group the data according to three biometric features. In the first part, we demonstrate the capability of factor analysis to capture the gender of the individual; this enables us to use FA as a gender classifier. The next study shows the use of FA as a facial hair classifier: given a group of individuals, we classify them as either having beards or not. Finally, in the last part of this work, we classify the facial expressions of a group of Japanese women. Given all seven universal expressions per subject (two or three of each expression), we use factor analysis to group each subject according to their expression. Furthermore, given an individual with a particular expression, we use factor analysis as a biometric measure to determine the particular expression exhibited.
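    A minimal sketch of factor-analysis-based classification follows: biometric feature vectors are projected onto a small number of factors and a classifier is trained on the resulting factor scores. The factor count and the downstream classifier are illustrative choices, not the study's procedure for grouping on loadings.

```python
# Factor scores as a compact representation for biometric classification.
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_fa_classifier(n_factors=5):
    return make_pipeline(
        FactorAnalysis(n_components=n_factors, random_state=0),  # factor scores
        LogisticRegression(max_iter=1000),   # e.g. gender, beard, or expression labels
    )

# clf = build_fa_classifier().fit(X_train, y_train)   # X_train: biometric features
# predictions = clf.predict(X_test)
```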