
    Emotional Cues during Simultaneous Face and Voice Processing: Electrophysiological Insights

    Both facial expression and tone of voice are key signals in emotional communication, but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task in which human faces and voices with neutral, happy, and angry valence were presented simultaneously, embedded within a task requiring recognition of monkey faces and voices. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 healthy subjects. N100, P200, N250, and P300 components were observed at electrodes in the frontal-central region, while P100, N170, and P270 components were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in the frontal-central region (P200, P300, and N250) but not the parietal-occipital region (P100, N170, and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 msec (P200 peak latency) post stimulus onset, despite the implicit affective processing task demands, and that this effect is mainly distributed over the frontal-central region.
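
    A minimal sketch of how such component amplitude and latency measures are typically taken, assuming a single-electrode condition-average waveform; the window bounds and the simulated deflection below are illustrative assumptions, not the authors' pipeline:

        import numpy as np

        def component_measures(erp, times, window):
            """Mean amplitude and peak latency of an ERP component in a time window.

            erp    : 1-D array of voltages for one electrode (condition average)
            times  : 1-D array of time points in seconds, same length as erp
            window : (start, end) in seconds, e.g. (0.15, 0.25) around the P200
            """
            mask = (times >= window[0]) & (times <= window[1])
            mean_amp = erp[mask].mean()           # mean amplitude within the window
            peak_idx = np.argmax(erp[mask])       # positive peak; use np.argmin for N250
            peak_latency = times[mask][peak_idx]  # latency of that peak
            return mean_amp, peak_latency

        # Simulated positive deflection peaking near 200 ms post stimulus onset
        times = np.linspace(-0.1, 0.6, 701)
        erp = 3.0 * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
        print(component_measures(erp, times, (0.15, 0.25)))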

    Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

    In everyday social interaction we automatically integrate another's facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input, a phenomenon previously well studied with human audiovisual speech but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity while participants viewed and listened to an animated female face producing non-verbal human vocalizations (e.g. coughing, sneezing) under audio-only (AUD), visual-only (VIS), and audiovisual (AV) stimulus conditions, alternating with rest (R). Underadditive effects (AV activation below the sum of the unisensory responses) occurred in regions dominant for sensory processing, where AV activation nevertheless exceeded that of the dominant modality alone. Right posterior temporal and parietal regions showed an AV-maximum pattern, in which AV activation was greater than either modality alone but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed common activation, in which AV activation matched one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for the auditory N140 and the face-sensitive N170, and late AV-maximum and common-activation effects. Based on the convergence between the fMRI and ERP data, we propose a mechanism in which a multisensory stimulus may be signaled as early as 60 ms and facilitated in sensory-specific regions by increased processing speed (at the N170) and efficiency (decreased amplitude of auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.
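
    The additivity criteria named here reduce to simple comparisons of the AV response against the unisensory responses. The sketch below is an illustrative simplification of that logic under hypothetical activation values, not the paper's own classification code:

        def classify_av_effect(av, aud, vis):
            """Label an AV response relative to its unisensory counterparts."""
            if av > aud + vis:
                return "superadditive"                # AV > AUD + VIS
            if av > max(aud, vis):
                return "underadditive / AV maximum"   # above either modality alone, below the sum
            return "common activation"                # AV no greater than a unisensory response

        # Hypothetical activation values (arbitrary units)
        print(classify_av_effect(av=9.0, aud=4.0, vis=3.0))  # superadditive
        print(classify_av_effect(av=6.0, aud=4.0, vis=3.0))  # underadditive / AV maximum
        print(classify_av_effect(av=4.0, aud=4.0, vis=3.0))  # common activation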

    How Bodies and Voices Interact in Early Emotion Perception

    Successful social communication draws strongly on the correct interpretation of others' body and vocal expressions. Both can provide emotional information and often occur simultaneously, yet their interplay has hardly been studied. Using electroencephalography, we investigated the temporal dynamics underlying their neural interaction in auditory and visual perception. In particular, we tested whether this interaction qualifies as true integration according to multisensory integration principles such as inverse effectiveness. Emotional vocalizations were embedded in either low or high levels of noise and presented with or without video clips of matching emotional body expressions. In both high and low noise conditions, a reduction in auditory N100 amplitude was observed for audiovisual stimuli. However, only under high noise did the N100 peak earlier in the audiovisual than in the auditory condition, suggesting facilitatory effects as predicted by the inverse effectiveness principle. Similarly, we observed earlier N100 peaks in response to emotional compared to neutral audiovisual stimuli; this was not the case in the unimodal auditory condition. Furthermore, suppression of beta-band oscillations (15–25 Hz), primarily reflecting biological motion perception, was modulated 200–400 ms after the vocalization. For emotional stimuli, the difference in suppression between the audiovisual and auditory conditions was larger under high than under low noise, whereas no such difference was observed for neutral stimuli. This observation is in accordance with the inverse effectiveness principle and suggests that integration is modulated by emotional content. Overall, the results show that ecologically valid, complex stimuli such as combined body and vocal expressions are effectively integrated very early in processing.
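
    A minimal sketch of the inverse effectiveness logic tested here: the relative audiovisual benefit should grow as the auditory signal degrades. The N100 amplitudes below are hypothetical values chosen for illustration, not the study's data:

        def multisensory_gain(av, a_only):
            """Relative AV benefit over the auditory-only response."""
            return (av - a_only) / abs(a_only)

        # Hypothetical N100 amplitudes in microvolts (negative-going component;
        # an AV-related amplitude reduction makes the value less negative)
        low_noise  = {"A": -4.0, "AV": -3.6}
        high_noise = {"A": -2.0, "AV": -1.4}

        gain_low  = multisensory_gain(low_noise["AV"],  low_noise["A"])   # 0.10
        gain_high = multisensory_gain(high_noise["AV"], high_noise["A"])  # 0.30
        # Inverse effectiveness predicts the larger relative gain under high noise
        assert gain_high > gain_low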

    Gaze patterns in viewing static and dynamic body expressions

    Evidence for the importance of bodily cues in emotion recognition has grown over the last two decades. Despite this growing literature, it remains underspecified how observers view whole bodies for body expression recognition. Here we investigate to what extent body viewing is face- and context-specific when participants categorise whole-body expressions in static (Experiment 1) and dynamic displays (Experiment 2). Eye-movement recordings showed that observers viewed the face exclusively when it was visible in dynamic displays, whereas viewing was distributed over head, torso, and arms in static displays and in dynamic displays in which the face was not visible. The strong face bias for dynamic face-visible expressions suggests that body viewing responds flexibly to the informativeness of facial cues for emotion categorisation. However, when facial expressions are static or not visible, observers adopt a viewing strategy that includes all upper-body regions. This viewing strategy is further shaped by subtle viewing biases directed towards emotion-specific body postures and movements, which optimise the recruitment of diagnostic information for emotion categorisation.
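
    Gaze distribution over body regions of the kind reported here is commonly quantified as the proportion of total fixation time falling on each region of interest. A minimal sketch under that assumption, with hypothetical fixation data:

        from collections import defaultdict

        def dwell_proportions(fixations):
            """Proportion of total fixation time per body region.

            fixations : list of (region, duration_ms) tuples from one trial
            """
            totals = defaultdict(float)
            for region, duration in fixations:
                totals[region] += duration
            grand_total = sum(totals.values())
            return {region: t / grand_total for region, t in totals.items()}

        # Hypothetical trial showing a strong face bias, as in dynamic face-visible displays
        trial = [("face", 820), ("face", 640), ("torso", 120), ("arms", 60)]
        print(dwell_proportions(trial))  # {'face': 0.89, 'torso': 0.07, 'arms': 0.04} (approx.)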

    ENIGMA-anxiety working group : Rationale for and organization of large-scale neuroimaging studies of anxiety disorders

    Additional funding: Anxiety Disorders Research Network, European College of Neuropsychopharmacology; Claude Leon Postdoctoral Fellowship; Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, 44541416-TRR58); EU 7th Framework Marie Curie Actions International Staff Exchange Scheme grant 'European and South African Research Network in Anxiety Disorders' (EUSARNAD); Geestkracht programme of the Netherlands Organization for Health Research and Development (ZonMw, 10-000-1002); Intramural Research Training Award (IRTA) program within the National Institute of Mental Health under the Intramural Research Program (NIMH-IRP, MH002781); National Institute of Mental Health under the Intramural Research Program (NIMH-IRP, ZIA-MH-002782); SA Medical Research Council; U.S. National Institutes of Health grants (P01 AG026572, P01 AG055367, P41 EB015922, R01 AG060610, R56 AG058854, RF1 AG051710, U54 EB020403).

    Anxiety disorders are highly prevalent and disabling but seem particularly tractable to investigation with translational neuroscience methodologies. Neuroimaging has informed our understanding of the neurobiology of anxiety disorders, but research has been limited by small sample sizes and low statistical power, as well as heterogeneous imaging methodology. The ENIGMA-Anxiety Working Group has brought together researchers from around the world in a harmonized and coordinated effort to address these challenges and generate more robust and reproducible findings. This paper elaborates on the concepts and methods informing the work of the working group to date, and describes the initial approach of the four subgroups studying generalized anxiety disorder, panic disorder, social anxiety disorder, and specific phobia. At present, the ENIGMA-Anxiety database contains information about more than 100 unique samples, from 16 countries and 59 institutes. Future directions include examining additional imaging modalities, integrating imaging and genetic data, and collaborating with other ENIGMA working groups. The ENIGMA consortium creates synergy at the intersection of global mental health and clinical neuroscience, and the ENIGMA-Anxiety Working Group extends the promise of this approach to neuroimaging research on anxiety disorders.

    Experiences with the DOMINO office procedure system

    The DOMINO office procedure system has been equipped with a new user interface and has been put to use in support of purchasing. In this paper, we describe the system, the user interface, and the experiences gained during practical use of the system. We also briefly discuss the consequences for our own research.