    First report of generalized face processing difficulties in Möbius sequence.

    Reverse simulation models of facial expression recognition suggest that we recognize the emotions of others by running implicit motor programmes responsible for the production of that expression. Previous work has tested this theory by examining facial expression recognition in participants with Möbius sequence, a condition characterized by congenital bilateral facial paralysis. However, a mixed pattern of findings has emerged, and it has not yet been tested whether these individuals can imagine facial expressions, a process also hypothesized to be underpinned by proprioceptive feedback from the face. We investigated this issue by examining expression recognition and imagery in six participants with Möbius sequence, and also carried out tests assessing facial identity and object recognition, as well as basic visual processing. While five of the six participants presented with expression recognition impairments, only one was impaired at the imagery of facial expressions. Further, five participants presented with other difficulties in the recognition of facial identity or objects, or in lower-level visual processing. We discuss the implications of our findings for the reverse simulation model, and suggest that facial identity recognition impairments may be more severe in the condition than has previously been noted.

    Inversion improves the recognition of facial expression in thatcherized images

    The Thatcher illusion provides a compelling example of the face inversion effect. However, the marked effect of inversion in the Thatcher illusion contrasts with other studies that report only a small effect of inversion on the recognition of facial expressions. To address this discrepancy, we compared the effects of inversion and thatcherization on the recognition of facial expressions. We found that inversion of normal faces caused only a small reduction in the recognition of facial expressions. In contrast, local inversion of facial features in upright thatcherized faces resulted in a much larger reduction in the recognition of facial expressions. Paradoxically, inversion of thatcherized faces caused a relative increase in the recognition of facial expressions. Together, these results suggest that different processes explain the effects of inversion on the recognition of facial expressions and on the perception of the Thatcher illusion. The grotesque perception of thatcherized images is based on a more orientation-sensitive representation of the face. In contrast, the recognition of facial expression is dependent on a more orientation-insensitive representation. A similar pattern of results was evident when only the mouth or eye region was visible. These findings demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the features of the face.

    Superior Facial Expression, But Not Identity Recognition, in Mirror-Touch Synesthesia

    Simulation models of expression recognition contend that to understand another's facial expressions, individuals map the perceived expression onto the same sensorimotor representations that are active during the experience of the perceived emotion. To investigate this view, the present study examines facial expression and identity recognition abilities in a rare group of participants who show facilitated sensorimotor simulation (mirror-touch synesthetes). Mirror-touch synesthetes experience touch on their own body when observing touch to another person. These experiences have been linked to heightened sensorimotor simulation in the shared-touch network (brain regions active during the passive observation and experience of touch). Mirror-touch synesthetes outperformed nonsynesthetic participants on measures of facial expression recognition, but not on control measures of face memory or facial identity perception. These findings imply a role for sensorimotor simulation processes in the recognition of facial affect, but not facial identity.

    Facial Expression Recognition

    Facial emotion recognition using min-max similarity classifier

    Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, the automatic recognition of facial expressions using image template matching suffers from the natural variability of facial features and recording conditions. Despite the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition remains an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm that reduces the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest-neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement of recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms the existing template matching methods.
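The pipeline the abstract describes (pixel normalization, then a Min-Max similarity in a nearest-neighbor classifier) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact normalization and the ratio-of-minima-to-maxima similarity are assumptions about the method, and the function names are ours.

```python
import numpy as np

def normalize(img):
    # Assumed pixel normalization: shift intensities so the template's
    # minimum is zero, removing a constant intensity offset.
    v = np.asarray(img, dtype=float).ravel()
    return v - v.min()

def min_max_similarity(a, b):
    # Assumed Min-Max metric: ratio of element-wise minima to maxima.
    # Bounded in [0, 1]; a single outlier pixel inflates both sums,
    # so its influence on the ratio is damped.
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def classify(test_img, templates, labels):
    # Nearest-neighbor step: return the label of the most similar template.
    t = normalize(test_img)
    sims = [min_max_similarity(t, normalize(tpl)) for tpl in templates]
    return labels[int(np.argmax(sims))]
```

Under these assumptions, a probe image that differs from a stored template only by a constant brightness offset is mapped back onto that template and receives similarity 1.0.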

    Regression-based Multi-View Facial Expression Recognition

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from a non-frontal to the frontal view, where further recognition of the expressions can be performed using a state-of-the-art facial expression recognition method. To learn the mapping functions we investigate four regression models: Linear Regression (LR), Support Vector Regression (SVR), Relevance Vector Regression (RVR) and Gaussian Process Regression (GPR). Our extensive experiments on the CMU Multi-PIE facial expression database show that the proposed scheme outperforms view-specific classifiers while utilizing considerably less training data.
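The core idea above, learning a function that maps non-frontal facial point coordinates to their frontal-view positions, can be sketched for the simplest of the four models (the LR variant) with ordinary least squares. Everything below is synthetic: the data, dimensions, and names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Hypothetical training data: N faces, each with K 2-D facial points,
# observed in a non-frontal view (X) and the matching frontal view (Y).
rng = np.random.default_rng(0)
N, K = 50, 5
X = rng.normal(size=(N, 2 * K))           # flattened non-frontal coordinates
A_true = rng.normal(size=(2 * K, 2 * K))  # unknown view mapping (synthetic)
Y = X @ A_true                            # corresponding frontal coordinates

# LR variant: fit the view mapping by least squares. A bias column lets
# the map absorb a translation between the two views.
Xb = np.hstack([X, np.ones((N, 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

def to_frontal(points):
    """Map flattened non-frontal points into the frontal view."""
    return np.hstack([points, 1.0]) @ W
```

Once the points are frontalized, any frontal-view expression classifier can be applied unchanged; swapping LR for SVR, RVR, or GPR only changes how `W` (or its nonlinear counterpart) is learned.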