
    Multisensory emotion perception in congenitally, early, and late deaf CI users.

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. The CI groups differed in deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset < 3 years; n = 7), and LD (deafness onset > 3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed less efficiently overall than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces, and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody, they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

    Perceived emotion intensity in the Voice and the Face task.

    Emotion intensity ratings (1 = low, 5 = high) in the CD CI users and their controls (n = 14), ED CI users and their controls (n = 14), and LD CI users and their controls (n = 25), separately for task (Voice task, Face task) and condition (unimodal, congruent, incongruent). Error bars denote standard deviations. (Marginally) significant condition differences are indicated accordingly.

    IES condition differences (Face task).

    Inverse efficiency scores (IES, ms) in each condition (unimodal, congruent, incongruent) of the Face task in the CD CI users and their controls (n = 14), ED CI users and their controls (n = 14), and LD CI users and their controls (n = 25). Error bars denote standard deviations. (Marginally) significant condition differences are indicated accordingly.
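    The caption does not spell out how inverse efficiency scores are computed. Assuming the standard definition applies (an assumption, as the computation is not stated here), IES combines speed and accuracy by dividing mean correct response time by the proportion of correct responses:

    \[ \mathrm{IES} = \frac{\overline{RT}_{\text{correct}}}{p_{\text{correct}}} \]

    Under this definition, a lower IES indicates more efficient performance, which is why the scores are reported in milliseconds.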

    IES (in)congruency effects (Voice task).

    Congruency and incongruency effects in inverse efficiency scores (IES, ms) in the CD (n = 7), ED (n = 7), and LD (n = 13) CI users and their respective controls in the Voice task. Error bars denote standard deviations. (Marginally) significant group differences are indicated accordingly.
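    The two effect measures are not defined in the caption. A plausible reading, consistent with the unimodal baseline used throughout (the sign convention here is an assumption), treats the congruency effect as the IES benefit of congruent audio-visual stimuli over unimodal ones, and the incongruency effect as the IES cost of incongruent stimuli:

    \[ \Delta_{\text{congruency}} = \mathrm{IES}_{\text{unimodal}} - \mathrm{IES}_{\text{congruent}}, \qquad \Delta_{\text{incongruency}} = \mathrm{IES}_{\text{incongruent}} - \mathrm{IES}_{\text{unimodal}} \]

    Positive values would then indicate a benefit from congruent faces and a cost from incongruent faces when judging vocal emotion, matching the group differences described in the abstract.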