    Facial Mimicry and the Processing of Facial Emotional Expressions

    In social interactions, facial expressions make a major contribution to our daily communication, as they transmit internal states such as the motivations and feelings of our conspecifics. In recent decades, research has revealed that facial mimicry plays a pivotal role in the accurate perception and interpretation of facial expressions. Embodied simulation theories claim that facial expressions are automatically mimicked, thereby producing a facial feedback signal, which in turn activates a corresponding state in the motor, somatosensory, affective, and reward systems of the observer. This activation facilitates the processing of the observed emotional expression and hence supports the understanding of its meaning. Research on the influence of facial mimicry on the perception of emotional expressions is, to a large extent, driven by facial mimicry manipulation studies. In particular, the classical facial mimicry manipulation method introduced by Strack, Martin, and Stepper (1988) has become a popular and established method: participants hold a pen in different positions with the mouth, inducing either a smiling or a frowning expression. The present thesis assessed the influence of facial mimicry on cognitive processes by means of this classical manipulation method. In three projects, I investigated the impact of (1) facial mimicry on the automatic processing of facial emotional expressions, (2) facial mimicry on working memory for emotional expressions, and (3) facial mimicry manipulation on the impaired processing of emotional expressions in patients with Parkinson’s disease (PD). In the first project, the impact of facial mimicry manipulation was measured by electrophysiological recordings of the expression-related mismatch negativity to unattended happy and sad faces. The findings reveal that the automatic processing of facial emotional expressions is systematically influenced by facial mimicry. In the second project, I assessed behavioral performance during a facial emotional working memory task while participants’ mimicry was manipulated. The findings of this project highlight that working memory for emotional expressions is influenced by facial mimicry. Finally, in the third project, I investigated the link between the reduced facial mimicry in PD patients and their impaired ability to recognize emotional expressions. For this purpose, I compared the data of PD patients and healthy individuals performing an emotional change detection task while undergoing facial mimicry manipulation. While healthy participants showed the typical influence of facial mimicry manipulation, PD patients did not benefit from the applied manipulation. The results of the present thesis demonstrate that facial mimicry is an indispensable part of our daily social interaction, as it affects the processing of emotions on a perceptual as well as a cognitive level. I showed that facial mimicry influences the automatic processing of, as well as the working memory for, observed facial emotional expressions. Furthermore, the empirical evidence of the third project suggests that not only is facial mimicry reduced in patients with PD, but that the whole process of facial feedback processing is impaired in these individuals. These results demonstrate the applicability of the classical facial mimicry manipulation method and further highlight the importance of research on the influence of facial mimicry on cognitive processing, as our ability to understand the emotional expressions of our conspecifics, and thus our social interaction, depends on intact facial mimicry processing.

    A neurocomputational account of self-other distinction: from cell to society

    Human social systems are unique in the animal kingdom. Social norms, constructed at a higher level of organisation, influence individuals across vast spatiotemporal scales. Characterising the neurocomputational processes that enable the emergence of these social systems could inform holistic models of human cognition and mental illness. Social neuroscience has shown that the processing of ‘social’ information demands many of the same computations as those involved in reasoning about inanimate objects in ‘non-social’ contexts. However, for people to reason about each other’s mental states, the brain must be able to distinguish between one mind and another. This ability, to attribute a mental state to a specific agent, has long been studied by philosophers under the heading of ‘meta-representation’. Empathy research has taken strides in describing the neural correlates of representing another person’s affective or bodily state, as distinct from one’s own. However, Self-Other distinction in beliefs, and hence meta-representation, has not figured in formal models of cognitive neuroscience. Here, I introduce a novel behavioural paradigm, which acts as a computational assay for Self-Other distinction in a cognitive domain. The experiments in this thesis combine computational modelling with magnetoencephalography and functional magnetic resonance imaging to explore how basic units of computation, predictions and prediction errors, are selectively attributed to Self and Other when subjects have to simulate another agent’s learning process. I find that these low-level learning signals encode information about agent identity. Furthermore, the fidelity of this encoding is susceptible to experience-dependent plasticity and predicts the presence of subclinical psychopathological traits. The results suggest that the neural signals generating an internal model of the world contain information not only about ‘what’ is out there, but also about ‘who’ the model belongs to. That this agent-specificity is learnable highlights potential computational failure modes in mental illnesses with an altered sense of Self.
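    The agent-specific learning signals described above can be illustrated with a minimal sketch: two independent delta-rule (Rescorla-Wagner) learners, one tracking outcomes for Self and one simulating the Other's learning, so that each trial yields a prediction error tagged with an agent identity. The function names, learning rate, and outcome sequences are illustrative assumptions, not the thesis's actual model.

    ```python
    # Minimal sketch (assumed, not the thesis's actual model): two
    # Rescorla-Wagner learners, one for Self and one for a simulated Other,
    # each producing its own agent-specific prediction error per trial.

    def rw_update(value, outcome, alpha):
        """One delta-rule step: return (updated value, prediction error)."""
        pe = outcome - value            # prediction error for this agent
        return value + alpha * pe, pe

    def simulate(outcomes_self, outcomes_other, alpha=0.3):
        v_self, v_other = 0.5, 0.5      # initial value estimates
        pes = {"self": [], "other": []}
        for o_s, o_o in zip(outcomes_self, outcomes_other):
            v_self, pe_s = rw_update(v_self, o_s, alpha)
            v_other, pe_o = rw_update(v_other, o_o, alpha)
            pes["self"].append(pe_s)    # signal attributed to Self
            pes["other"].append(pe_o)   # signal attributed to Other
        return pes

    pes = simulate([1, 1, 0, 1], [0, 0, 1, 0])
    ```

    In this toy version, agent identity is carried simply by which learner generated the error; the thesis's point is that real neural prediction-error signals appear to carry such identity information, and that its fidelity is itself learnable.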

    Feature-specific prediction errors for visual mismatch

    Predictive coding (PC) theory posits that our brain employs a predictive model of the environment to infer the causes of its sensory inputs. A fundamental but untested prediction of this theory is that the same stimulus should elicit distinct precision-weighted prediction errors (pwPEs) when different (feature-specific) predictions are violated, even in the absence of attention. Here, we tested this hypothesis using functional magnetic resonance imaging (fMRI) and a multi-feature roving visual mismatch paradigm in which rare changes in either the color (red, green) or the emotional expression (happy, fearful) of faces elicited pwPE responses in human participants. Using a computational model of learning and inference, we simulated pwPE and prediction trajectories of a Bayes-optimal observer and used these to analyze changes in blood-oxygen-level-dependent (BOLD) responses to changes in color and emotional expression of faces while participants engaged in a distractor task. Controlling for visual attention by eye-tracking, we found pwPE responses to unexpected color changes in the fusiform gyrus. Conversely, unexpected changes of facial emotions elicited pwPE responses in cortico-thalamo-cerebellar structures associated with emotion and theory-of-mind processing. Predictions pertaining to emotions activated fusiform, occipital, and temporal areas. Our results are consistent with a general role of PC across perception, from low-level to complex and socially relevant object features, and suggest that monitoring of the social environment occurs continuously and automatically, even in the absence of attention.
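    The core quantity in this account, a precision-weighted prediction error, can be sketched as a raw prediction error scaled by the relative precision (inverse variance) of the sensory input versus the current belief, in the spirit of Bayes-optimal learning models. The function, variable names, and fixed precisions below are illustrative assumptions, not the study's actual model.

    ```python
    # Minimal sketch of a precision-weighted prediction error (pwPE):
    # the belief update is the raw prediction error scaled by how precise
    # the observation is relative to the current belief. Values here are
    # illustrative, not taken from the study.

    def pwpe_update(mu, obs, pi_obs, pi_belief):
        """Update belief mu about a feature given one observation.

        pi_obs    : precision (inverse variance) of the sensory input
        pi_belief : precision of the current belief
        Returns (updated belief, precision-weighted prediction error).
        """
        pe = obs - mu                            # raw prediction error
        weight = pi_obs / (pi_obs + pi_belief)   # precision weighting
        pwpe = weight * pe
        return mu + pwpe, pwpe

    # A confident prior (pi_belief > pi_obs) damps the update: surprising
    # inputs move the belief, but only in proportion to their reliability.
    mu = 0.0
    for obs in [1.0, 1.0, 0.0]:
        mu, pwpe = pwpe_update(mu, obs, pi_obs=1.0, pi_belief=3.0)
    ```

    The feature-specific claim of the paper then amounts to saying that separate belief trajectories like `mu` are maintained per feature (e.g., color and emotional expression), so the same face can generate distinct pwPEs depending on which feature's prediction is violated.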