8,858 research outputs found
On combining the facial movements of a talking head
We present work on Obie, an embodied conversational agent framework. An embodied conversational agent, or talking head, consists of three main components. The graphical component comprises a face model and a facial muscle model. The emotional component consists of an emotion model and a mapping from emotions to facial expressions. The animation component focuses on combining different facial movements over time. In this paper we propose a scheme for combining facial movements on a 3D talking head.
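The abstract does not spell out Obie's combination scheme; as a minimal sketch of what temporally combining facial movements can look like, the snippet below blends two movements expressed as Action Unit (AU) intensity curves with a per-AU pointwise maximum. The AU choices, envelopes, and blend rule are illustrative assumptions, not the framework's actual mechanism.

```python
import numpy as np

# Illustrative only: two facial movements as AU intensity curves (0..1)
# sampled on a shared time grid. The per-AU maximum blend rule is an
# assumption, not Obie's scheme.
t = np.linspace(0.0, 2.0, 101)  # seconds

def bell(t, onset, peak, offset):
    """Simple attack-sustain-decay envelope for one AU."""
    rise = np.clip((t - onset) / max(peak - onset, 1e-6), 0, 1)
    fall = np.clip((offset - t) / max(offset - peak, 1e-6), 0, 1)
    return np.minimum(rise, fall)

smile = {"AU12": bell(t, 0.1, 0.5, 1.5)}              # lip corner puller
brow_raise = {"AU1": bell(t, 0.4, 0.8, 1.2),
              "AU12": 0.3 * bell(t, 0.4, 0.8, 1.2)}   # slight co-activation

def combine(*movements):
    """Combine movements per AU; overlapping AUs take the pointwise max."""
    combined = {}
    for movement in movements:
        for au, curve in movement.items():
            combined[au] = np.maximum(combined.get(au, 0.0), curve)
    return combined

blended = combine(smile, brow_raise)
print({au: float(curve.max()) for au, curve in blended.items()})
```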
Four not six: revealing culturally common facial expressions of emotion
As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying which of these complex patterns are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing six emotions, and reported universality. Yet variable recognition accuracy across cultures suggests a narrower cross-cultural communication, supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modelling the facial expressions of over 60 emotions across two cultures and segregating out the latent expressive patterns. Using a multi-disciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in two cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply a multivariate data reduction technique to the pooled models, revealing four latent and culturally common facial expression patterns, each of which communicates specific combinations of valence, arousal and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data question the widely held view that six facial expression patterns are universal, instead suggesting four latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics.
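The data reduction technique is not named in this abstract; a minimal sketch, assuming the pooled dynamic expression models can be flattened into a non-negative matrix of Action Unit activations and factorized with NMF, is given below. The matrix shapes and the choice of NMF are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import NMF

# Illustrative only: suppose each of the 60+ culturally validated expression
# models is flattened into a vector of non-negative AU activations over time
# (here 42 AUs x 20 time bins = 840 features). Real dimensions will differ.
rng = np.random.default_rng(0)
n_models, n_features = 120, 42 * 20          # models pooled across both cultures
X = rng.random((n_models, n_features))       # stand-in for the pooled models

# Reduce to four latent expressive patterns (the number reported in the paper).
nmf = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
weights = nmf.fit_transform(X)   # how strongly each model uses each pattern
patterns = nmf.components_       # the four latent AU-over-time patterns

print(weights.shape, patterns.shape)   # (120, 4) (4, 840)
```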
Improving Facial Analysis and Performance Driven Animation through Disentangling Identity and Expression
We present techniques for improving performance driven facial animation,
emotion recognition, and facial key-point or landmark prediction using learned
identity invariant representations. Established approaches to these problems
can work well if sufficient examples and labels for a particular identity are
available and factors of variation are highly controlled. However, labeled
examples of facial expressions, emotions and key-points for new individuals are
difficult and costly to obtain. In this paper we improve the ability of
techniques to generalize to new and unseen individuals by explicitly modeling
previously seen variations related to identity and expression. We use a
weakly-supervised approach in which identity labels are used to learn the
different factors of variation linked to identity separately from factors
related to expression. We show how probabilistic modeling of these sources of
variation allows one to learn identity-invariant representations for
expressions which can then be used to identity-normalize various procedures for
facial expression analysis and animation control. We also show how to extend the widely used techniques of active appearance models and constrained local models by replacing the underlying point distribution models, which are typically constructed using principal component analysis, with identity-expression factorized representations. We present a wide variety of experiments in which we consistently improve performance on emotion recognition, markerless performance-driven facial animation, and facial key-point tracking. (To appear in Image and Vision Computing Journal, IMAVIS.)
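As a hedged sketch of the identity-expression split described above, the snippet below uses identity labels weakly, fitting one PCA basis to per-identity mean shapes and another to identity-normalized residuals. The paper's probabilistic factorization is more sophisticated, so everything here (shape dimensions, component counts, the averaging step) is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative only: landmarks as (n_samples, 2 * n_points) shape vectors
# with an identity label per sample. Real data and the paper's probabilistic
# factorization differ; this just shows the identity/expression split.
rng = np.random.default_rng(1)
n_points = 68
shapes = rng.normal(size=(500, 2 * n_points))
identities = rng.integers(0, 25, size=500)

# Identity subspace: PCA over per-identity mean shapes.
id_means = np.stack([shapes[identities == i].mean(axis=0) for i in range(25)])
identity_pca = PCA(n_components=10).fit(id_means)

# Expression subspace: PCA over identity-normalized residuals.
residuals = shapes - id_means[identities]
expression_pca = PCA(n_components=15).fit(residuals)

def encode(shape, identity):
    """Split a new shape into identity and expression coefficients."""
    id_coeff = identity_pca.transform(id_means[identity][None])[0]
    expr_coeff = expression_pca.transform((shape - id_means[identity])[None])[0]
    return id_coeff, expr_coeff

print(encode(shapes[0], identities[0]))
```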
The Actions and Feelings Questionnaire in Autism and Typically Developed Adults
Open access via Springer Compact Agreement. We are grateful to Simon Baron-Cohen and Paula Smith of the Cambridge Autism Centre for the use of the ARC database in distributing the questionnaire, to all participants for completing it, to Eilidh Farquar for special efforts in distributing the link, and to Gemma Matthews for advice on using AMOS 23. JHGW is supported by the Northwood Trust. Peer reviewed. Publisher PDF.
Affective games: a multimodal classification system
Affective gaming is a relatively new field of research that exploits human emotions to influence gameplay for an enhanced player experience. Changes in a player's psychological state are reflected in their behaviour and physiology, hence recognition of such variation is a core element in affective games. Complementary sources of affect offer more reliable recognition, especially in contexts where one modality is partial or unavailable. As multimodal recognition systems, affect-aware games are subject to the practical difficulties met by traditional trained classifiers. In addition, game-related challenges in data collection and performance arise while attempting to sustain an acceptable level of immersion. Most existing scenarios employ sensors that offer limited freedom of movement, resulting in less realistic experiences. Recent advances now offer technology that allows players to communicate more freely and naturally with the game and, furthermore, to control it without the use of input devices. However, the affective game industry is still in its infancy and needs to catch up with the life-like level of adaptation already provided by graphics and animation.
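A minimal late-fusion sketch of the multimodal idea above, assuming hypothetical facial and physiological feature vectors and simple per-modality classifiers, might look like the following; it merely drops a missing modality from the probability average and does not reflect any particular system from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative only: late fusion of two affect modalities. Any modality that
# is missing at prediction time is simply dropped from the average.
rng = np.random.default_rng(2)
y = rng.integers(0, 3, size=300)                      # e.g. low/medium/high arousal
face_feats = rng.normal(size=(300, 16))               # hypothetical facial features
physio_feats = rng.normal(size=(300, 8))              # hypothetical physiological features

face_clf = LogisticRegression(max_iter=1000).fit(face_feats, y)
physio_clf = LogisticRegression(max_iter=1000).fit(physio_feats, y)

def predict(face=None, physio=None):
    """Average class probabilities over whichever modalities are available."""
    probs = []
    if face is not None:
        probs.append(face_clf.predict_proba(face[None])[0])
    if physio is not None:
        probs.append(physio_clf.predict_proba(physio[None])[0])
    return int(np.mean(probs, axis=0).argmax())

print(predict(face=face_feats[0], physio=physio_feats[0]))
print(predict(physio=physio_feats[0]))   # face camera unavailable
```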
Dynamic Facial Expression of Emotion Made Easy
Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is an easy-to-use, flexible, and validated mechanism for doing so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mechanism is based on Facial Action Coding. It is easy to implement, and code is available for download. To show the validity of the expressions generated with the mechanism, we tested the recognition accuracy for 6 basic emotions (joy, anger, sadness, surprise, disgust, fear) and 4 blend emotions (enthusiastic, furious, frustrated, and evil). Additionally, we investigated the effect of virtual character (VC) distance (z-coordinate), the effect of the VC's face morphology (male vs. female), the effect of a lateral versus a frontal presentation of the expression, and the effect of the intensity of the expression. Participants (n=19, Western and Asian subjects) rated the intensity of each expression for each condition (within-subject design) in a non-forced-choice manner. All of the basic emotions were uniquely perceived as such. Further, the blends and confusion details of basic emotions are compatible with findings in psychology.
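The report's downloadable code is not reproduced here; the sketch below is only a rough illustration of a Facial Action Coding based mechanism, using commonly cited FACS prototypes (e.g. joy ≈ AU6+AU12, surprise ≈ AU1+AU2+AU5+AU26) and an assumed smoothstep easing for the onset dynamics, not the report's actual mechanism.

```python
import numpy as np

# Illustrative only: commonly cited FACS prototypes for two basic emotions;
# the report's own AU tables, timing, and intensities may differ.
EMOTION_AUS = {
    "joy":      {"AU6": 1.0, "AU12": 1.0},
    "surprise": {"AU1": 1.0, "AU2": 1.0, "AU5": 0.8, "AU26": 0.7},
}

def ease_in_out(x):
    """Smoothstep easing, an assumed choice for the onset dynamics."""
    return 3 * x**2 - 2 * x**3

def dynamic_expression(emotion, intensity=1.0, frames=30):
    """Return per-frame AU targets ramping up to the emotion's prototype."""
    ramp = ease_in_out(np.linspace(0.0, 1.0, frames))
    return [{au: intensity * peak * r for au, peak in EMOTION_AUS[emotion].items()}
            for r in ramp]

frames = dynamic_expression("joy", intensity=0.6)
print(frames[0], frames[-1])
```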
- …