General and Interval Type-2 Fuzzy Face-Space Approach to Emotion Recognition
Facial expressions of a person representing the same emotion are not always unique. Naturally, the facial features of a subject taken from different instances of the same emotion show wide variations. In the presence of two or more facial features, the joint variation of the attributes makes the emotion recognition problem more complicated. This variation is the main source of uncertainty in the emotion recognition problem, which has been addressed here in two steps using type-2 fuzzy sets. First, a type-2 fuzzy face space is constructed with the background knowledge of facial features of different subjects for different emotions. Second, the emotion of an unknown facial expression is determined based on the consensus of the measured facial features with the fuzzy face space. Both interval and general type-2 fuzzy sets (GT2FS) have been used separately to model the fuzzy face space. The interval type-2 fuzzy set (IT2FS) involves primary membership functions for m facial features obtained from n subjects, each having l instances of facial expressions for a given emotion. The GT2FS, in addition to employing the primary membership functions mentioned above, also involves secondary memberships for each primary membership curve, obtained here by formulating and solving an optimization problem. The optimization problem attempts to minimize the difference between two decoded signals: the first being the type-1 defuzzification of the average primary membership functions obtained from the n subjects, the second being the type-2 defuzzified signal for a given primary membership function with secondary memberships as unknowns. The uncertainty management policy adopted using GT2FS has resulted in a classification accuracy of 98.333%, in comparison to 91.667% obtained by its interval type-2 counterpart.
A small improvement (approximately 2.5%) in classification accuracy by IT2FS has been attained by pre-processing the measurements using the well-known interval approach.
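The abstract's core construction can be sketched in a few lines: per-subject type-1 membership curves for one facial feature are bounded from below and above to form an interval type-2 footprint of uncertainty, and a measured feature value is scored against it. This is a minimal illustration, not the paper's method; the function names (`interval_t2_footprint`, `emotion_support`), the Gaussian membership choice, and the interval-midpoint scoring rule are all assumptions made for the example, and the paper's actual consensus and defuzzification steps are more elaborate.

```python
import numpy as np

def gaussian_mf(x, mean, sigma):
    """Type-1 Gaussian membership function (an assumed curve shape)."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def interval_t2_footprint(x, means, sigmas):
    """Footprint of uncertainty for one facial feature: pointwise
    lower/upper bounds over the per-subject type-1 membership curves."""
    curves = np.array([gaussian_mf(x, m, s) for m, s in zip(means, sigmas)])
    return curves.min(axis=0), curves.max(axis=0)

def emotion_support(measurement, means, sigmas):
    """Midpoint of the membership interval for a measured feature value:
    one simple (hypothetical) way to score how well the measurement
    agrees with an emotion's fuzzy face space."""
    lower, upper = interval_t2_footprint(np.array([measurement]), means, sigmas)
    return float((lower[0] + upper[0]) / 2)
```

In use, one would compute `emotion_support` per candidate emotion and pick the maximum: a value near the subjects' means scores higher than an outlying one.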
Cultural dialects of real and synthetic emotional facial expressions
In this article we discuss the aspects of designing facial expressions for virtual humans (VHs) with a specific culture. First, we explore the notion of culture and its relevance for applications with a VH. Then we give a general scheme for designing emotional facial expressions, and identify the stages where a human is involved, either as a real person with some specific role, or as a VH displaying facial expressions. We discuss how the display and the emotional meaning of facial expressions may be measured in objective ways, and how the culture of the displayers and the judges may influence the process of analyzing human facial expressions and evaluating synthesized ones. We review psychological experiments on cross-cultural perception of emotional facial expressions. By identifying the culturally critical issues of data collection and interpretation with both real humans and VHs, we aim to provide a methodological reference and inspiration for further research.
Optimal set of EEG features for emotional state classification and trajectory visualization in Parkinson's disease
In addition to classic motor signs and symptoms, individuals with Parkinson's disease (PD) are characterized by emotional deficits. Ongoing brain activity can be recorded by electroencephalograph (EEG) to discover the links between emotional states and brain activity. This study utilized machine-learning algorithms to categorize emotional states in PD patients compared with healthy controls (HC) using EEG. Twenty non-demented PD patients and 20 healthy age-, gender-, and education-level-matched controls viewed happiness, sadness, fear, anger, surprise, and disgust emotional stimuli while fourteen-channel EEG was being recorded. Multimodal stimuli (a combination of audio and visual) were used to evoke the emotions. To classify the EEG-based emotional states and visualize the changes of emotional states over time, this paper compares four kinds of EEG features for emotional state classification and proposes an approach to track the trajectory of emotion changes with manifold learning. From the experimental results using our EEG data set, we found that (a) the bispectrum feature is superior to the other three kinds of features, namely power spectrum, wavelet packet, and nonlinear dynamical analysis; (b) higher frequency bands (alpha, beta, and gamma) play a more important role in emotion activities than lower frequency bands (delta and theta) in both groups; and (c) the trajectory of emotion changes can be visualized by reducing subject-independent features with manifold learning. This provides a promising way of implementing visualization of a patient's emotional state in real time and leads to a practical system for noninvasive assessment of the emotional impairments associated with neurological disorders.
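The study's strongest feature was the bispectrum, but the simplest of the four compared families, band power, already illustrates the per-band features behind finding (b). The sketch below is an assumption-laden stand-in: the band edges are conventional values (the study's exact ranges may differ), and a bare periodogram replaces whatever spectral estimator the authors used.

```python
import numpy as np

# Conventional EEG band edges in Hz (illustrative; papers vary slightly).
EEG_BANDS = {
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 45),
}

def band_powers(signal, fs):
    """Relative power per classic EEG band from a simple periodogram.

    Returns a dict mapping band name -> fraction of 1-45 Hz power.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    total = psd[(freqs >= 1) & (freqs < 45)].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in EEG_BANDS.items()}
```

Feeding a 10 Hz sinusoid into `band_powers` concentrates nearly all relative power in the alpha band; in a real pipeline, such per-channel band fractions would form the feature vector handed to the classifier.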
Predictive biometrics: A review and analysis of predicting personal characteristics from biometric data
Interest in the exploitation of soft biometrics information has continued to develop over the last decade or so. In comparison with traditional biometrics, which focuses principally on person identification, the idea of soft biometrics processing is to study the utilisation of more general information regarding a system user, which is not necessarily unique. There are increasing indications that this type of data will have great value in providing complementary information for user authentication. However, the authors have also seen a growing interest in broadening the predictive capabilities of biometric data, encompassing both easily definable characteristics such as subject age and, most recently, `higher level' characteristics such as emotional or mental states. This study presents a selective review of the predictive capabilities, in the widest sense, of biometric data processing, providing an analysis of the key issues still to be adequately addressed if this concept of predictive biometrics is to be fully exploited in the future.
Dynamic Facial Expression of Emotion Made Easy
Facial emotion expression for virtual characters (VCs) is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is an easy-to-use, flexible, and validated mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of VC distance (z-coordinate), the VC's face morphology (male vs. female), lateral versus frontal presentation of the expression, and the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject setup) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and confusion details of basic emotions are compatible with findings in psychology.
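A Facial Action Coding mechanism of the kind described boils down to mapping each emotion to a set of Action Units (AUs) that drive the face. The sketch below uses commonly cited AU combinations for the six basic emotions; the report's own AU choices may differ, and the `blend` helper (a plain union of two emotions' AUs) is a hypothetical simplification of how blends such as "furious" could be parameterised.

```python
# Commonly cited FACS Action Unit sets for the six basic emotions
# (illustrative; not necessarily the report's exact mapping).
BASIC_EMOTION_AUS = {
    "joy":      {6, 12},                 # cheek raiser, lip corner puller
    "sadness":  {1, 4, 15},              # inner brow raiser, brow lowerer, lip corner depressor
    "surprise": {1, 2, 5, 26},           # brow raisers, upper lid raiser, jaw drop
    "fear":     {1, 2, 4, 5, 7, 20, 26},
    "anger":    {4, 5, 7, 23},           # brow lowerer, lid tighteners, lip tightener
    "disgust":  {9, 15},                 # nose wrinkler, lip corner depressor
}

def blend(emotion_a, emotion_b):
    """Naive blend: union of two basic emotions' Action Units."""
    return BASIC_EMOTION_AUS[emotion_a] | BASIC_EMOTION_AUS[emotion_b]
```

A blend such as `blend("anger", "disgust")` then yields the combined AU set a developer would animate, scaled by the desired expression intensity.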
Affect and believability in game characters: a review of the use of affective computing in games
Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship as opposed to problem solving and goal assessment. Nevertheless, deploying an affective module into NPCs adds to the complexity of the architecture and its constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.