31 research outputs found

    Supplementary

    Data

    d-primes

    Blindness impairs the perception of specific vocal emotion expressions

    Voices are among the most socially salient stimuli in our everyday environment, and they are arguably even more crucial in the lives of individuals who are blind from birth. Blind individuals have demonstrated superior abilities in several aspects of auditory perception, but research on their behavioral performance when discriminating vocal features is still scarce and has so far yielded unclear results. In the present study, we tested the hypothesis that congenitally blind participants (N=16) would outperform individually matched sighted controls at discriminating a crucial feature of human voices: emotional content. To do so, we relied on a gating psychophysical paradigm in which we presented short segments of emotional non-linguistic vocalizations of increasing duration, from 100 to 400 ms, portraying five basic emotions, and asked participants to perform an explicit emotion categorization task. The analysis of the sensitivity indices (d') of the two groups shows an advantage for the sighted group in two emotional categories: anger and fear. This result supports the view that, for these specific threat-related emotions, vision might calibrate the other sensory channels in a task where vision typically dominates the discriminative process over audition; such calibration is absent in blind individuals. Additionally, an analysis of the average confusion patterns reveals very strong linear correlations between the incorrect responses of the two groups, but only for stimuli longer than 200 ms, when enough information is delivered to support a more consistent emotion categorization process. This suggests that, despite their highly different sensory experiences, the two groups implement a similar strategy when discriminating vocal emotions, presumably relying largely on auditory features.
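
    The sensitivity and confusion analyses summarized above follow standard signal detection logic. Below is a minimal sketch, not the authors' actual pipeline, of how per-emotion d' indices and the between-group correlation of incorrect responses could be computed; the data layout, the five emotion labels, the placeholder confusion counts, and the log-linear correction are illustrative assumptions.

```python
# Hedged sketch: per-emotion sensitivity (d') for a five-alternative emotion
# categorization task, plus the correlation of off-diagonal (incorrect)
# confusion patterns between two groups. All counts below are placeholders.
import numpy as np
from scipy.stats import norm, pearsonr

def dprime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear correction
    to avoid infinite z-scores when a rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

def per_emotion_dprime(conf):
    """One d' per emotion, treating that emotion as 'signal' and collapsing
    the remaining categories into 'noise'. Rows = presented, cols = chosen."""
    dps = []
    for i in range(conf.shape[0]):
        hits = conf[i, i]
        misses = conf[i, :].sum() - hits
        fas = conf[:, i].sum() - hits
        crs = conf.sum() - hits - misses - fas
        dps.append(dprime(hits, misses, fas, crs))
    return np.array(dps)

# Placeholder confusion matrices for the two groups (assumed 5 emotions).
emotions = ["anger", "disgust", "fear", "happiness", "sadness"]
rng = np.random.default_rng(0)
blind = rng.integers(0, 20, (5, 5))
sighted = rng.integers(0, 20, (5, 5))

mask = ~np.eye(5, dtype=bool)                 # keep only incorrect responses
r, p = pearsonr(blind[mask], sighted[mask])   # between-group error correlation
print(per_emotion_dprime(blind), per_emotion_dprime(sighted), r, p)
```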

    fedefalag_brainhack2020_public_project

    Analyses

    Multisensory perception of vocal and facial expressions of emotions

    Emotions play a pivotal role in our lives, and we massively express and perceive them through faces and voices. The present thesis investigates the perception and representation of emotion expressions in various contexts. In the first study, we investigated the performance of neurotypical individuals at discriminating dynamic facial and vocal emotions, with specific attention to the time course of dynamic expressions, showing that the information needed for a discriminatory decision accumulates faster in vision than in audition, and fastest of all in a multisensory context. In the second study, we investigated the neural correlates of the perception of unimodal and multimodal expressions using functional MRI. We show that emotion information is represented in a widespread fashion throughout the regions of the face- and voice-processing networks. We additionally demonstrate that several of these regions represent not only their native modality but also the opposite sensory modality, with some doing so in a supramodal fashion, i.e. independently of the sensory modality of the input. In the third study, we investigated whether visual perception is necessary for the development of emotion discrimination through voices by testing early blind and sighted individuals. We show that, although the behavioral profile is similar across the two groups for the investigated emotion categories, blindness affected performance for specific threat-related vocal emotions. (PSYE - Sciences psychologiques et de l'éducation) -- UCL, 202
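
    The cross-modal finding described for the second study rests on the logic of training a decoder on activity patterns evoked by one modality and testing it on patterns evoked by the other. The sketch below illustrates that logic on simulated data; the region size, the classifier, and the random patterns are assumptions, not the thesis pipeline, and accuracy on this noise data will sit at chance.

```python
# Hedged sketch of cross-modal decoding: a classifier trained on face-evoked
# patterns is tested on voice-evoked patterns (and vice versa). Above-chance
# accuracy in both directions would indicate a shared, supramodal emotion
# code. All data here are simulated placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_voxels, n_emotions = 100, 50, 5

face_patterns = rng.normal(size=(n_trials, n_voxels))    # trials x voxels
voice_patterns = rng.normal(size=(n_trials, n_voxels))   # trials x voxels
labels = rng.integers(0, n_emotions, n_trials)           # emotion per trial

clf = LinearSVC(max_iter=10000)
face_to_voice = clf.fit(face_patterns, labels).score(voice_patterns, labels)
voice_to_face = clf.fit(voice_patterns, labels).score(face_patterns, labels)
print(face_to_voice, voice_to_face)
```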

    IVC

    Time-resolved discrimination of audio-visual emotion expressions _ Open Material

    Time-resolved discrimination of audio-visual emotion expressions

    Humans extract and integrate the emotional content delivered through the faces and voices of others. It is, however, poorly understood how perceptual decisions unfold in time when people discriminate expressions of emotion transmitted through dynamic facial and vocal signals, as in natural social contexts. In this study, we relied on a gating paradigm to track how the recognition of emotion expressions across the senses unfolds over exposure time. We first demonstrate that, across all emotions tested, a discriminatory decision is reached earlier with faces than with voices. Importantly, multisensory stimulation consistently reduced the amount of perceptual evidence that had to accumulate before correct discrimination was reached (the isolation point). We also observed that expressions with different emotional content provide cumulative evidence at different speeds, with fear showing the fastest isolation point across the senses. Finally, the lack of correlation between the confusion patterns in response to facial and vocal signals across time suggests distinct relations between the discriminative features extracted from the two signals. Altogether, these results provide a comprehensive view of how auditory, visual, and audiovisual information related to different emotion expressions accumulates over time, highlighting how a multisensory context can speed up the discrimination process when only minimal information is available.
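
    As an illustration of the gating logic described above, the following sketch estimates an isolation point as the shortest exposure duration from which a participant's responses are correct and stay correct at every longer gate. The gate durations, the response series, and the exact isolation-point rule are assumptions made for this example, not the published analysis.

```python
# Hedged sketch: isolation point = earliest gate duration after which all
# responses for an expression are correct. Data below are illustrative only.
import numpy as np

def isolation_point(gate_durations_ms, correct_by_gate):
    """Return the earliest duration from which every subsequent response is
    correct; np.nan if the expression is never consistently identified."""
    correct = np.asarray(correct_by_gate, dtype=bool)
    for i, duration in enumerate(gate_durations_ms):
        if correct[i:].all():
            return duration
    return np.nan

# One (assumed) response series per modality for the same emotion expression.
gates = [100, 200, 300, 400, 500]                    # exposure in ms
face_correct = [False, True, True, True, True]
voice_correct = [False, False, True, True, True]
audiovisual_correct = [True, True, True, True, True]

for label, series in [("face", face_correct),
                      ("voice", voice_correct),
                      ("audio-visual", audiovisual_correct)]:
    print(label, isolation_point(gates, series), "ms")
```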