
    Reconstructing emotions in motion in prosopagnosia reveals separate representations for identity and emotion

    The human face transmits a wealth of visual signals that readily provide crucial information for social interactions, such as identity and emotional expressions. Yet a fundamental question remains unresolved: does categorizing identity and emotional expression from the face tap into a single representational system or into separate ones? To address this question, we tested PS, a pure case of acquired prosopagnosia who uses the suboptimal mouth to process identity and is impaired in categorizing many expressions of the (static) Ekman faces. We used a generative grammar of the Facial Action Coding System coupled with a reverse correlation technique to model 3D dynamic mental representations of the six basic facial expressions of emotion in PS and in healthy observers. Surprisingly, PS’s dynamic mental models of facial expressions were comparable to those of the controls. Subsequent verification tasks revealed that PS accurately categorized her own and the average dynamic facial expression models, but not the very same static exemplars. Evidence that PS reconstructed dynamic facial expressions using all facial features demonstrates that the face system relies on distinct representations for identity and emotion, flexibly adapting to categorization constraints. Our data also question evidence of deficits obtained from patients using static images and offer novel routes for patient rehabilitation.
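
    The abstract's reverse correlation logic can be illustrated with a minimal, purely hypothetical sketch: random Action Unit (AU) activation patterns are shown to an observer, and each emotion's mental model is estimated from the patterns the observer assigned to that category. All names here (N_TRIALS, N_AUS, observer_response) are illustrative assumptions, not the study's code or stimuli.

```python
# Illustrative reverse-correlation sketch (hypothetical names, not the study's code).
import numpy as np

rng = np.random.default_rng(0)
N_TRIALS, N_AUS = 2400, 42        # hypothetical trial count and AU inventory size
EMOTIONS = ["happy", "surprise", "fear", "disgust", "anger", "sad"]

def observer_response(au_pattern):
    """Placeholder for the observer's forced-choice categorization of a face
    animation rendered from a random AU activation pattern."""
    return rng.choice(EMOTIONS + ["don't know"])

# Present random AU activation patterns and record categorizations.
stimuli = rng.random((N_TRIALS, N_AUS))            # random AU amplitudes in [0, 1]
responses = [observer_response(s) for s in stimuli]

# Reverse correlation: estimate each emotion's mental model as the mean AU
# pattern of the stimuli the observer categorized as that emotion.
mental_models = {
    emo: stimuli[np.array([r == emo for r in responses])].mean(axis=0)
    for emo in EMOTIONS
}
```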

    Testing, explaining, and exploring models of facial expressions of emotions

    Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question: what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
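
    A minimal sketch, under assumed data structures, of how AU-combination hypotheses can be scored as predictive models against human categorizations, with a leave-one-observer-out noise ceiling; the functions model_prediction, accuracy, and noise_ceiling are illustrative names and not the paper's framework.

```python
# Illustrative model-comparison sketch (hypothetical names, not the paper's framework).
import numpy as np

def model_prediction(hypothesis, au_pattern):
    """Predict the emotion whose hypothesized AU set best overlaps the active AUs.
    `hypothesis` maps each emotion to a set of AU indices; `au_pattern` is a
    binary vector of AU activations for one stimulus."""
    active = set(np.flatnonzero(au_pattern))
    scores = {emo: len(active & aus) / max(len(aus), 1)
              for emo, aus in hypothesis.items()}
    return max(scores, key=scores.get)

def accuracy(hypothesis, stimuli, human_labels):
    """Proportion of stimuli on which the model reproduces the human category."""
    preds = [model_prediction(hypothesis, s) for s in stimuli]
    return float(np.mean([p == h for p, h in zip(preds, human_labels)]))

def noise_ceiling(labels_by_observer):
    """Upper bound on explainable accuracy: how well each observer's labels are
    predicted by the modal label of the remaining observers."""
    labels = np.asarray(labels_by_observer)          # observers x stimuli
    ceilings = []
    for i in range(labels.shape[0]):
        others = np.delete(labels, i, axis=0)
        modal = [max(set(col), key=list(col).count) for col in others.T]
        ceilings.append(np.mean(labels[i] == np.array(modal)))
    return float(np.mean(ceilings))
```

    A hypothesis model's accuracy can then be read against the noise ceiling to judge how much of the predictable variance in human categorizations it actually captures.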
