7 research outputs found

    The importance of stimulus variability when studying face processing using fast periodic visual stimulation: A novel ‘mixed-emotions’ paradigm

    Get PDF
    Fast Periodic Visual Stimulation (FPVS) with oddball stimuli has been used to investigate discrimination of facial identity and emotion, with studies concluding that oddball responses indicate discrimination of faces at the conceptual level (i.e., discrimination of identity and emotion) rather than low-level perceptual (visual, image-based) discrimination. However, because previous studies have utilised identical images as base stimuli, physical differences between base and oddball stimuli, rather than recognition of identity or emotion, may have been responsible for oddball responses. This study tested two new FPVS paradigms designed to distinguish recognition of expressions of emotion from detection of visual change from the base stream. In both paradigms, the oddball emotional expression was different from that of the base-stream images. However, in the ‘fixed-emotion’ paradigm, the stimulus image varied at every presentation but the emotion in the base stream remained constant, whereas in the ‘mixed-emotions’ paradigm, both stimulus image and emotion varied at every presentation, with only the oddball emotion (disgust) remaining constant. In the fixed-emotion paradigm, typical inversion effects were observed at occipital sites. In the mixed-emotions paradigm, however, inversion effects in a central cluster (indicative of higher-level emotion processing) were present in typical participants but not in those with alexithymia (who are impaired at emotion recognition), suggesting that only the mixed-emotions paradigm reflects emotion recognition rather than detection of a lower-level visual change from baseline. These results have significant methodological implications for future FPVS studies (of both facial emotion and identity), suggesting that it is crucial to vary base stimuli sufficiently, such that simple physical differences between base and oddball stimuli cannot give rise to neural oddball responses.

    EEG frequency-tagging demonstrates increased left hemispheric involvement and crossmodal plasticity for face processing in congenitally deaf signers

    Get PDF
    In humans, face processing relies on a network of brain regions predominantly in the right occipito-temporal cortex. We tested congenitally deaf (CD) signers and matched hearing controls (HC) to investigate the experience dependence of the cortical organization of face processing. Specifically, we used EEG frequency-tagging to evaluate: (1) Face-Object Categorization, (2) Emotional Facial-Expression Discrimination, and (3) Individual Face Discrimination. EEG was recorded while visual stimuli were presented at a rate of 6 Hz, with oddball stimuli at a rate of 1.2 Hz. In all three experiments and in both groups, significant face-discriminative responses were found. Face-Object Categorization was associated with a relatively increased involvement of the left hemisphere in CD individuals compared to HC individuals. A similar trend was observed for Emotional Facial-Expression Discrimination but not for Individual Face Discrimination. Source reconstruction suggested a greater activation of the auditory cortices in the CD group for Individual Face Discrimination. These findings suggest that the experience dependence of the relative contribution of the two hemispheres, as well as crossmodal plasticity, varies with different aspects of face processing.
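The frequency-tagging logic used here (a 6 Hz base rate with oddballs at 1.2 Hz) rests on the fact that a periodic discrimination response appears as a narrow spectral peak at exactly the oddball frequency and its harmonics. The following is a minimal sketch with synthetic data, not the study's pipeline; the sampling rate, response amplitudes, noise level, and SNR computation are illustrative assumptions:

```python
import numpy as np

fs = 512                      # sampling rate in Hz (assumed)
dur = 20                      # seconds of simulated EEG
t = np.arange(0, dur, 1 / fs)

# Simulated steady-state EEG: a general visual response at the 6 Hz base
# rate plus a smaller discrimination response at the 1.2 Hz oddball rate,
# buried in broadband noise.
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 6.0 * t)
       + 0.4 * np.sin(2 * np.pi * 1.2 * t)
       + rng.normal(0.0, 1.0, t.size))

# Frequency resolution is 1/dur = 0.05 Hz, so both 6 Hz and 1.2 Hz fall
# exactly on FFT bins; normalize so a sine of amplitude A yields A.
spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)

def snr(f, n_neighbours=20, skip=1):
    """Amplitude at frequency f divided by the mean of surrounding noise bins."""
    i = int(round(f * dur))
    noise = np.r_[spectrum[i - skip - n_neighbours:i - skip],
                  spectrum[i + skip + 1:i + skip + 1 + n_neighbours]]
    return spectrum[i] / noise.mean()

print(f"SNR at 6.0 Hz (base):    {snr(6.0):.1f}")
print(f"SNR at 1.2 Hz (oddball): {snr(1.2):.1f}")
```

Because the tagged responses are strictly periodic while the noise is spread across all bins, even a small oddball response stands out clearly against the local noise floor.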

    The effect of sad mood on early sensory event-related potentials to task-irrelevant faces

    Get PDF
    It has been shown that the perceiver's mood affects the perception of emotional faces, but it is not known how mood affects preattentive brain responses to emotional facial expressions. To examine this question, we experimentally induced sad and neutral mood in healthy adults before presenting them with task-irrelevant pictures of faces while the electroencephalogram (EEG) was recorded. Sad, happy, and neutral faces were presented to the participants in an ignore-oddball condition. Differential responses (emotional − neutral) for the P1, N170, and P2 amplitudes were extracted and compared between the neutral and sad mood conditions. Emotional facial expressions modulated all the components, and an expression-by-mood interaction was found for P1: the emotional modulation to happy faces found in the neutral mood condition disappeared in the sad mood condition. For N170 and P2, we found larger response amplitudes for both emotional faces, regardless of mood. The results add to previous behavioral findings by showing that mood affects even low-level cortical feature encoding of task-irrelevant faces.
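The differential-response logic above (mean amplitude per component window, then emotional minus neutral) can be sketched as follows. The waveforms, window boundaries, and amplitudes are purely illustrative placeholders, not data from the study:

```python
import numpy as np

fs = 1000                                   # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)            # epoch from -100 to 500 ms

# Hypothetical trial-averaged ERPs (in microvolts) at one posterior
# electrode, modeled as Gaussian deflections at typical latencies.
def synthetic_erp(p1, n170, p2):
    return (p1 * np.exp(-((t - 0.10) / 0.02) ** 2)     # P1 peak ~100 ms
            - n170 * np.exp(-((t - 0.17) / 0.02) ** 2) # N170 trough ~170 ms
            + p2 * np.exp(-((t - 0.25) / 0.03) ** 2))  # P2 peak ~250 ms

erps = {"neutral": synthetic_erp(2.0, 3.0, 1.5),
        "happy":   synthetic_erp(2.5, 3.6, 2.0),
        "sad":     synthetic_erp(2.2, 3.8, 2.1)}

# Component windows (seconds); mean amplitude inside each window.
windows = {"P1": (0.08, 0.12), "N170": (0.15, 0.19), "P2": (0.22, 0.28)}

def mean_amp(wave, lo, hi):
    mask = (t >= lo) & (t <= hi)
    return wave[mask].mean()

for comp, (lo, hi) in windows.items():
    for emo in ("happy", "sad"):
        diff = mean_amp(erps[emo], lo, hi) - mean_amp(erps["neutral"], lo, hi)
        print(f"{comp} differential ({emo} - neutral): {diff:+.2f} uV")
```

A mood effect such as the one reported would then appear as the P1 differential for happy faces being present in one mood condition and absent in the other.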

    A neurophysiological signature of dynamic emotion recognition associated with social communication skills and cortical gamma-aminobutyric acid levels in children

    Get PDF
    Introduction: Emotion recognition is a core feature of social perception. In particular, perception of dynamic facial emotional expressions is a major feature of the third visual pathway. However, the classical N170 visual evoked signal does not provide a pure correlate of such processing. Indeed, independent component analysis has demonstrated that the N170 component is already active at the time of the P100 and is therefore distorted by early components. Here we implemented a dynamic facial-emotion paradigm to isolate a purer, face-expression-selective N170. We searched for a neural correlate of the perception of dynamic facial emotional expressions by starting from a face baseline out of which a facial expression evolved. This yielded a specific facial-expression contrast signal, which we aimed to relate to social communication abilities and cortical gamma-aminobutyric acid (GABA) levels. Methods: We recorded event-related potentials (ERPs) and magnetic resonance spectroscopy (MRS) measures in 35 sex-matched, typically developing (TD) children (10–16 years) during emotion recognition of an avatar morphing/unmorphing from neutral to happy/sad expressions. This task eliminated the contribution of low-level visual components, in particular the P100, by morphing baseline isoluminant neutral faces into specific expressions, isolating dynamic emotion recognition. It was therefore possible to isolate a dynamic-face-sensitive N170 devoid of interactions with earlier components. Results: We found delayed N170 and P300 responses, with a hysteresis-type dependence on stimulus trajectory (morphing/unmorphing) and hemispheric lateralization. The delayed N170 is generated by an extrastriate source, which can be related to the third visual pathway specialized in biological-motion processing. GABA levels in visual cortex were related to N170 amplitude and latency and were predictive of worse social communication performance (SCQ scores). N170 latencies reflected delayed processing of emotional expressions and were related to worse social communication scores. Discussion: In sum, we found a specific N170 electrophysiological signature of dynamic face processing related to social communication abilities and cortical GABA levels. These findings have potential clinical significance, supporting the hypothesis of a spectrum of social communication abilities and identifying a specific face-expression-sensitive N170 that could be used in the development of diagnostic and intervention tools.

    Tuning functions for automatic detection of brief changes of facial expression in the human brain

    No full text
    Efficient decoding of even brief and slight facial-expression changes is important for social interactions. However, robust evidence for the human brain's ability to automatically detect brief and subtle changes of facial expression remains limited. Here we built on a recently developed paradigm in human electrophysiology with full-blown expressions (Dzhelyova et al., 2017) to isolate and quantify a neural marker for the detection of brief and subtle changes of facial expression. Scalp electroencephalogram (EEG) was recorded from 18 participants during stimulation with a neutral face changing randomly in size at a rapid rate of 6 Hz. Brief changes of expression appeared every fifth stimulation cycle (i.e., at 1.2 Hz), and expression intensity increased parametrically every 20 s in 20% steps during sweep sequences of 100 s. A significant 1.2 Hz response emerged in the EEG spectrum already at 40% of facial expression-change intensity for most of the 5 emotions tested (anger, disgust, fear, happiness, or sadness in different sequences) and increased with intensity steps, predominantly over right occipito-temporal regions. Given the high signal-to-noise ratio of the approach, thresholds for automatic detection of brief changes of facial expression could be determined for every single individual brain. A time-domain analysis revealed three components, the first two increasing linearly with increasing intensity as early as 100 ms after a change of expression, suggesting gradual low-level image-change detection prior to visual coding of facial movements. In contrast, the third component showed abrupt sensitivity to increasing expression intensity beyond 300 ms post expression-change, suggesting categorical emotion perception. 
Overall, this characterization of the detection of subtle changes of facial expression and its temporal dynamics opens promising avenues for the precise assessment of social perception ability during development and in clinical populations.
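The individual-threshold logic of the sweep design (intensity rising in 20% steps, threshold defined as the lowest step producing a significant 1.2 Hz response) could be approximated per participant as follows. The amplitudes, noise estimates, and the z > 1.64 criterion are assumptions for illustration, not the study's actual analysis:

```python
import numpy as np

steps = np.array([20, 40, 60, 80, 100])   # expression intensity per sweep step (%)

# Hypothetical 1.2 Hz oddball amplitudes (arbitrary units) for one
# participant, one value per intensity step, plus the mean and standard
# deviation of surrounding noise bins estimated from the same spectrum.
amp = np.array([0.02, 0.09, 0.15, 0.22, 0.30])
noise_mean, noise_std = 0.02, 0.03

# z-score of each step's oddball amplitude against the noise distribution;
# a one-tailed criterion of z > 1.64 corresponds to p < .05.
z = (amp - noise_mean) / noise_std
significant = z > 1.64

# Individual threshold: the lowest intensity step with a significant response.
threshold = int(steps[np.argmax(significant)]) if significant.any() else None
print(f"individual detection threshold: {threshold}% intensity")
```

With these placeholder values the threshold lands at the 40% step, mirroring the group-level result reported above; in practice the noise statistics would be estimated from neighbouring frequency bins of each participant's spectrum.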

    On the plasticity of socio-emotional competencies at the behavioral and brain level: An EEG-accompanied training study in preschool children using the computer-based training program Zirkus Empathico

    Get PDF
    Promoting functional socio-emotional competence in the preschool years (age range 3 to 6 years) is crucial to prevent the development of psychological disorders. To date, there are few studies examining the effects of digital training on the socio-emotional development of preschool children. Similarly, research provides extensive information on typical socio-emotional behaviors in preschool children, while less is known about how the brain implements these functions. Therefore, the goal of this dissertation was to examine fundamental and complex aspects of preschoolers' socio-emotional competence by assessing their maturity and trainability with behavioral and neural measures. Studies 1 and 2 used event-related potentials and the Fast Periodic Visual Stimulation method to quantify neural mechanisms of emotion recognition. Both studies revealed the presence of basic emotion-recognition mechanisms in this age group. In addition, preschoolers showed a processing advantage for happy over angry or neutral faces. Study 3 investigated the trainability of socio-emotional competence using the digital training Zirkus Empathico. The Zirkus Empathico group showed an increase in both basic and complex socio-emotional competencies compared to the control group. In addition, the Zirkus Empathico group showed a processing advantage for happy faces at the neural level. In summary, neural markers show considerable utility for understanding the mechanisms underlying emotion recognition in preschool children. The promising evidence for the efficacy of digital socio-emotional skills training also allows further consideration of the sustainability of the effects as well as their societal significance.

    Neurophysiological assessments of low-level and high-level interdependencies between auditory and visual systems in the human brain

    Get PDF
    This dissertation investigates the functional interplay between the visual and auditory systems and its degree of experience-dependent plasticity. To function efficiently in everyday life, we must rely on our senses, building complex hierarchical representations of the environment. Early sensory deprivation, congenital (from birth) or within the first year of life, is a key model for studying sensory experience and the degree of compensatory reorganization (i.e., neuroplasticity). Neuroplasticity can be intramodal (within the sensory system) or crossmodal (the recruitment of deprived cortical areas by the remaining senses). However, the exact role of early sensory experience and the mechanisms guiding experience-driven plasticity need further investigation. To this aim, we performed three electroencephalographic studies, considering three aspects: 1) sensory modality (auditory/visual), 2) hierarchy of the brain's functional organization (low-/high-level), and 3) sensory deprivation (deprived/non-deprived cortices). The first study explored how early auditory experience affects low-level visual processing, using time-frequency analysis on data from early deaf individuals and their hearing counterparts. The second study investigated experience-dependent plasticity in hierarchically organized face processing, applying fast periodic visual stimulation in congenitally deaf signers and their hearing controls. The third study assessed neural responses of blindfolded participants, using naturalistic stimuli together with the temporal response function, and evaluated neural tracking in hierarchically organized speech processing when retinal input is absent, focusing on the role of the visual cortex. 
The results demonstrate the importance of atypical early sensory experience in shaping (via intra- and crossmodal changes) brain organization at various hierarchical stages of sensory processing, but also support the idea that some crossmodal effects emerge even with typical experience. This dissertation provides new insights into the functional interplay between the visual and auditory systems and the mechanisms driving experience-dependent plasticity, and may contribute to the development of sensory restoration tools and rehabilitation strategies for sensory-typical and sensory-deprived populations.