35 research outputs found

    The perception of dynamic and static facial expressions of happiness and disgust investigated by ERPs and fMRI constrained source analysis

    A recent functional magnetic resonance imaging (fMRI) study by our group demonstrated that dynamic emotional faces are more accurately recognized and evoke more widespread patterns of hemodynamic brain responses than static emotional faces. Based on this experimental design, the present study aimed at investigating the spatio-temporal processing of static and dynamic emotional facial expressions in 19 healthy women by means of multi-channel electroencephalography (EEG), event-related potentials (ERP) and fMRI-constrained regional source analyses. ERP analysis showed an increased amplitude of the LPP (late posterior positivity) over centro-parietal regions for static facial expressions of disgust compared to neutral faces. In addition, the LPP was more widespread and temporally prolonged for dynamic compared to static faces of disgust and happiness. fMRI-constrained source analysis on static emotional face stimuli indicated the spatio-temporal modulation of predominantly posterior regional brain activation related to the visual processing stream for both emotional valences when compared to the neutral condition in the fusiform gyrus. The spatio-temporal processing of dynamic stimuli yielded enhanced source activity for emotional compared to neutral conditions in temporal (e.g., fusiform gyrus) and frontal regions (e.g., ventromedial prefrontal cortex, medial and inferior frontal cortex) in early and again in later time windows. The present data support the view that dynamic facial displays convey more information, reflected in complex neural networks, in particular because their changing features potentially trigger sustained activation related to a continuing evaluation of those faces. A combined fMRI and EEG approach thus provides advanced insight into the spatio-temporal characteristics of emotional face processing by also revealing additional neural generators not identifiable by fMRI alone.

    Independence of face identity and expression processing: exploring the role of motion

    According to the classic Bruce and Young (1986) model of face recognition, identity and emotional expression information from the face are processed in parallel and independently. Since this functional model was published, a growing body of research has challenged this viewpoint and instead supports an interdependence view. In addition, neural models of face processing (Haxby, Hoffman & Gobbini, 2000) emphasise differences in terms of the processing of changeable and invariant aspects of faces. This article provides a critical appraisal of this literature and discusses the role of motion in both expression and identity recognition and the intertwined nature of identity, expression and motion processing. We conclude by discussing recent advancements in this area and research questions that still need to be addressed.

    A sensorimotor control framework for understanding emotional communication and regulation

    JHGW and CFH are supported by the Northwood Trust. TEVR was supported by a National Health and Medical Research Council (NHMRC) Early Career Fellowship (1088785). RP and MW were supported by the Australian Research Council (ARC) Centre of Excellence for Cognition and its Disorders (CE110001021). Peer reviewed. Publisher PDF.

    Experimental design.

    One exemplary trial is depicted, including presentation durations in ms, of a dynamic and a static happy facial expression, indicated by a black and a grey arrow, respectively. Please note that the subject of the photograph has given written informed consent, as outlined in the PLOS consent form, to publication of her photograph.

    Post-hoc comparisons of ERPs for static facial expressions: N170, EPN and LPP.

    Table displays significant post-hoc comparisons (paired t-tests) for static facial expressions, including the time window (ms), ERP component (N170, EPN, LPP), the kind of comparison (neu = neutral, hap = happy, dis = disgusted), electrode site, T-values, degrees of freedom (df), and p-values (p-values highlighted in bold remain significant after FDR correction). Grey and italic font represents additional electrodes of interest (see methods, http://www.plosone.org/article/info:doi/10.1371/journal.pone.0066997#s2, for further explanations).
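    The caption notes that bold p-values survive FDR correction across the many paired t-tests. A minimal sketch of one common FDR procedure, Benjamini-Hochberg (assumed here for illustration; the caption does not name the specific FDR variant used):

```python
def fdr_bh(p_values, alpha=0.05):
    """Return a list of booleans: True where the p-value survives
    Benjamini-Hochberg false-discovery-rate correction at level alpha."""
    m = len(p_values)
    # Rank the p-values from smallest to largest, remembering positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha ...
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            max_k = rank
    # ... then reject every hypothesis up to and including that rank.
    survives = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            survives[idx] = True
    return survives

# Hypothetical p-values from a batch of paired t-tests:
flags = fdr_bh([0.0001, 0.0004, 0.0019, 0.0095, 0.0201, 0.0278,
                0.0298, 0.0344, 0.0459, 0.3240, 0.4262, 0.5719,
                0.6528, 0.7590, 1.0000])
print(flags)
```

    Unlike a per-test threshold of p < .05, the BH cut-off scales with the rank of each p-value, which controls the expected proportion of false discoveries across the whole family of comparisons.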

    Source waveforms of significantly different RS-activations.

    Source waveforms (root mean square (RMS) curve of each regional source) are displayed over time (0-1000 ms) for each condition (solid line = neutrality (NEU), dashed line = disgust (DIS), dotted line = happiness (HAP)). Significant RS activations are based on ANOVAs (light grey: significant at p < .05, dark grey: trend to significance at p < .1) and post-hoc comparisons (box: no frame: p < .05, dashed frame: p < .1) for static (A) DIS > NEU, (B) HAP > NEU and dynamic facial expressions (C) DIS > NEU, (D) HAP > NEU. Abbreviations: le = left, r = right, CUN = cuneus, FUG = fusiform gyrus, IFG = inferior frontal gyrus, MFG = medial frontal gyrus, SFG = superior frontal gyrus, TUB = tuber, vmPFC = ventromedial prefrontal cortex (medial frontal gyrus).
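    A regional source has three orthogonal dipole moments; the RMS curve collapses them into a single activity waveform per time point. A minimal sketch of that computation (the sample waveforms are made up, and the averaging-over-components convention is an assumption; some toolboxes sum rather than average before taking the square root):

```python
import math

def rms_waveform(components):
    """Collapse the orthogonal dipole-moment waveforms of a regional
    source into one activity curve: at each time point, take the root
    mean square across the components."""
    n_samples = len(components[0])
    return [math.sqrt(sum(c[t] ** 2 for c in components) / len(components))
            for t in range(n_samples)]

# Hypothetical 5-sample waveforms for the x, y, z dipole moments:
x = [0.0, 1.0, 2.0, 1.0, 0.0]
y = [0.0, 2.0, 2.0, 2.0, 0.0]
z = [0.0, 2.0, 1.0, 2.0, 0.0]
print(rms_waveform([x, y, z]))
```

    The resulting curve is orientation-free, which is why a single line per condition can be plotted and compared across regional sources.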

    The extraction of RS locations by applying the nearest neighbor method for the dynamic stimulus modality.

    Talairach coordinates (x, y, z [in millimeters]) of significant fMRI activation foci and of the resulting pooled regional sources (RS) for dynamic stimuli are presented. One additional RS (RS 12) was seeded for the dynamic source model. The lower part (in italics) displays brain areas excluded due to eccentricity (ecc) values of ecc < .55. RS = regional sources, L = left; R = right.
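    The nearest-neighbour pooling described here assigns each fMRI activation focus to the closest regional source in Talairach space. A minimal sketch of that assignment step (the source names and coordinates below are hypothetical, not taken from the table):

```python
import math

def nearest_source(focus, sources):
    """Assign an fMRI activation focus (x, y, z in mm, Talairach space)
    to the regional source with the smallest Euclidean distance.
    `sources` maps a source label to its (x, y, z) coordinates."""
    return min(sources, key=lambda label: math.dist(focus, sources[label]))

# Hypothetical regional-source seeds:
seeds = {
    "FUG_L": (-40, -55, -15),  # left fusiform gyrus
    "IFG_R": (45, 20, 10),     # right inferior frontal gyrus
}
print(nearest_source((-38, -60, -12), seeds))
```

    Foci assigned to the same seed are then pooled into one regional source, keeping the source model small enough for a stable EEG inverse solution.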