EMPATH: A Neural Network that Categorizes Facial Expressions
There are two competing theories of facial expression recognition. Some researchers have suggested that it is an example of "categorical perception." In this view, expression categories are considered to be discrete entities with sharp boundaries, and discrimination of nearby pairs of expressive faces is enhanced near those boundaries. Other researchers, however, suggest that facial expression perception is more graded and that facial expressions are best thought of as points in a continuous, low-dimensional space, where, for instance, "surprise" expressions lie between "happiness" and "fear" expressions due to their perceptual similarity. In this article, we show that a simple yet biologically plausible neural network model, trained to classify facial expressions into six basic emotions, predicts data used to support both of these theories. Without any parameter tuning, the model matches a variety of psychological data on categorization, similarity, reaction times, discrimination, and recognition difficulty, both qualitatively and quantitatively. We thus explain many of the seemingly complex psychological phenomena related to facial expression perception as natural consequences of the task's implementation in the brain.
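The abstract's central point, that a continuous classifier can nonetheless produce categorical-looking boundaries, can be illustrated with a minimal sketch. This is not the EMPATH model itself: the 2-D "face space", the prototype coordinates, and the softmax gain below are all illustrative assumptions chosen only to show how graded distances yield sharp category transitions, with "surprise" lying between "happiness" and "fear".

```python
import math

EMOTIONS = ["happiness", "surprise", "fear", "anger", "disgust", "sadness"]

# Hypothetical prototypes in a 2-D face space (illustrative values,
# not taken from the article). "surprise" sits between "happiness"
# and "fear", mirroring the perceptual-similarity claim above.
PROTOTYPES = {
    "happiness": (0.0, 0.0),
    "surprise": (1.0, 0.5),
    "fear": (2.0, 0.0),
    "anger": (0.5, 2.0),
    "disgust": (1.5, 2.0),
    "sadness": (-1.0, 1.5),
}

def classify(x, gain=6.0):
    """Softmax over negative distances to each emotion prototype.

    The output is fully continuous in x, yet a high gain makes the
    winning category flip sharply near boundaries between prototypes.
    """
    scores = {e: math.exp(-gain * math.dist(x, p)) for e, p in PROTOTYPES.items()}
    total = sum(scores.values())
    return {e: s / total for e, s in scores.items()}

# Morph continuously from the happiness prototype toward the fear
# prototype: category membership changes abruptly, not linearly,
# passing through "surprise" at the midpoint.
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x = (2.0 * t, 0.0)
    probs = classify(x)
    best = max(probs, key=probs.get)
    print(f"t={t:.2f}  top={best:9s}  p={probs[best]:.2f}")
```

Under these assumed prototypes, the morph sequence exhibits exactly the behavior the two theories debate: the output probabilities are graded everywhere, but the top category switches sharply, which is the sense in which one continuous model can match both "categorical" and "dimensional" data.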
A practice-led approach to facial animation research
In facial expression research, it is well established that certain emotional expressions are universally recognized. Studies into the observer perception of dynamic expressions have built upon this research by highlighting the importance of particular facial regions, timings, and temporal configurations to perception and interpretation. In many such studies, the stimuli have been generated through posing by non-experts or through performances by trained actors. However, skilled character animators are capable of crafting recognizable, believable emotional facial expressions as a part of their professional practice. ‘Emotional Avatars’ was conceived as an interdisciplinary research project which would draw upon the knowledge of animation practice and emotional psychology. The aim of the project was to jointly investigate the artistic generation and observer perception of emotional expression animation to determine whether the nuances of emotional facial expression could be artistically choreographed to enhance audience interpretation.
Superior Facial Expression, But Not Identity Recognition, in Mirror-Touch Synesthesia
Simulation models of expression recognition contend that to understand another's facial expressions, individuals map the perceived expression onto the same sensorimotor representations that are active during the experience of the perceived emotion. To investigate this view, the present study examines facial expression and identity recognition abilities in a rare group of participants who show facilitated sensorimotor simulation (mirror-touch synesthetes). Mirror-touch synesthetes experience touch on their own body when observing touch to another person. These experiences have been linked to heightened sensorimotor simulation in the shared-touch network (brain regions active during the passive observation and experience of touch). Mirror-touch synesthetes outperformed nonsynesthetic participants on measures of facial expression recognition, but not on control measures of face memory or facial identity perception. These findings imply a role for sensorimotor simulation processes in the recognition of facial affect, but not facial identity.
LECTURER’S FACIAL EXPRESSION IN EFL CLASSROOM AT THE UNIVERSITY OF COKROAMINOTO PALOPO
The aim of this study was to identify the types of facial expression that commonly occur in the EFL speaking class, to examine students’ perceptions of their lecturer’s facial expressions in that class, and to assess the impact of the lecturer’s facial expressions on students’ speaking performance. Two lecturers, selected by purposive sampling, served as the subjects of the study. The data were obtained through classroom observation, semi-structured interviews, and students’ speaking achievement scores. Classroom observation was conducted to record the facial expressions the lecturers most frequently displayed during teaching. Semi-structured interviews were used to capture students’ perceptions of their lecturer’s facial expressions, and students’ English scores were collected to gauge the impact of those expressions on achievement. The observations showed that the most common types of facial expression were expressions of happiness and surprise; the lecturers used these to attract students’ attention. A further finding concerned students’ perceptions, which fell into two groups, positive and negative: the majority of students stated that their lecturer’s happy expressions indirectly strengthened their spirit and self-confidence, whereas displays of certain expressions, such as anger, affected their emotions while speaking. Furthermore, the majority of students achieved good results in their speaking performance, categorized as good speaking performance (GSP).
Keywords: nonverbal communication, facial expression, English lecturer, speaking, students’ perception
Creative approaches to emotional expression animation
In facial expression research, it is well established that certain emotional expressions are universally recognized. Studies into observer perception of expressions have built upon this research by highlighting the importance of particular facial regions, actions, and movements to the recognition of emotions. In many such studies, the stimuli have been generated through posing by non-experts or through performances by trained actors. However, character animators are required to craft recognizable, believable emotional facial expressions as a part of their profession. In this poster, the authors discuss some of the creative processes employed in their research into emotional expressions, and how practice-led research into expression animation might offer a new perspective on the generation of believable emotional expressions.
Inversion improves the recognition of facial expression in thatcherized images
The Thatcher illusion provides a compelling example of the face inversion effect. However, the marked effect of inversion in the Thatcher illusion contrasts with other studies that report only a small effect of inversion on the recognition of facial expressions. To address this discrepancy, we compared the effects of inversion and thatcherization on the recognition of facial expressions. We found that inversion of normal faces caused only a small reduction in the recognition of facial expressions. In contrast, local inversion of facial features in upright thatcherized faces resulted in a much larger reduction in the recognition of facial expressions. Paradoxically, inversion of thatcherized faces caused a relative increase in the recognition of facial expressions. Together, these results suggest that different processes explain the effects of inversion on the recognition of facial expressions and on the perception of the Thatcher illusion. The grotesque perception of thatcherized images is based on a more orientation-sensitive representation of the face. In contrast, the recognition of facial expression is dependent on a more orientation-insensitive representation. A similar pattern of results was evident when only the mouth or eye region was visible. These findings demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the features of the face.
Negative emotionality influences the effects of emotion on time perception
In this study I used a temporal bisection task to test whether greater overestimation of time due to negative emotion is moderated by individual differences in negative emotionality. The effects of fearful facial expressions on time perception were also examined. After a training phase, participants estimated the duration of facial expressions (anger, happiness, fearfulness) and a neutral-baseline facial expression. In accordance with the operation of an arousal-based process, the duration of angry expressions was consistently overestimated relative to other expressions and the baseline condition. In support of a role for individual differences in negative emotionality in time perception, temporal bias due to angry and fearful expressions was positively correlated with individual differences in self-reported negative emotionality. The results are discussed in relation both to the literature on attentional bias to facial expressions in anxiety and fearfulness and to the hypothesis that angry expressions evoke a fear-specific response. © 2008 American Psychological Association
Impact of Music on Categorical Facial Perceptions
This paper explores, through two experiments, the effect music has on inducing an emotion that shapes a person’s perception of facial expressions. Previous research suggests that music has a significant impact on a person’s mood, and a vast amount of research has been conducted analyzing facial perception. Extending this literature, this study will investigate how the impact music has on a person’s mood can affect the way that person perceives the facial expression of another. Experiment 1 uses the anchor effect to highlight the ability of music to anchor a person’s mood powerfully enough to influence that person’s perception of a given facial expression. Experiment 1 will use 96 Caucasian college students to test the independent variable of music and measure its effect on the subjects’ perception of facial expressions along a facial continuum. Each of the three subject groups (happy-music, sad-music, no-music) will complete a categorical facial perception task. The results will be consistent with previous literature and show that both happy and sad music alter the subjects’ perception of facial expressions relative to the no-music group. Experiment 2 analyzes the influence of the other-race effect on the results demonstrated in Experiment 1. Experiment 2 will use 192 Caucasian college students to test the independent variables of music and race of faces perceived (own-race, other-race) and measure their effect on the subjects’ perception of facial expressions along a facial continuum. The results will validate Experiment 1 by showing that both happy and sad music alter the subjects’ perception of facial expressions relative to the no-music group, regardless of whether the perceived faces were own-race or other-race. The results will also show no statistically significant interaction between the music condition and the race-of-face condition; however, slight increases in the impact of music were observed.
Differences in holistic processing do not explain cultural differences in the recognition of facial expression
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants’ perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference in recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.