11 research outputs found

    Multisensory emotion perception in congenitally, early, and late deaf CI users.

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. The CI groups differed in deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a stronger visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset < 3 years; n = 7), and LD (deafness onset > 3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced greater interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces, and the CD CI users showed an analogous trend. In the Face task, recognition efficiency did not differ between CI users and controls. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody, however, they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.
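    The congruent-face benefit and incongruent-face interference reported above are typically quantified as differences between condition-wise efficiency scores. The following is a minimal sketch of that computation, assuming per-participant inverse efficiency scores (IES, the measure used in the figures below) are already available; the column names, group labels, and sign conventions (congruency effect = unimodal − congruent, incongruency effect = incongruent − unimodal) are illustrative assumptions, not the authors' published analysis code.

```python
# Sketch: condition effects on inverse efficiency scores (IES) in the Voice task.
# Assumed input: one row per participant with an IES (ms) per condition.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class VoiceTaskIES:
    participant: str
    group: str          # hypothetical labels, e.g. "CD-CI", "ED-CI", "LD-CI", "control"
    unimodal: float     # IES, auditory-only trials (ms)
    congruent: float    # IES, voice paired with a congruent face (ms)
    incongruent: float  # IES, voice paired with an incongruent face (ms)

def congruency_effect(row: VoiceTaskIES) -> float:
    """Benefit from a congruent face: positive = more efficient than unimodal."""
    return row.unimodal - row.congruent

def incongruency_effect(row: VoiceTaskIES) -> float:
    """Interference from an incongruent face: positive = less efficient than unimodal."""
    return row.incongruent - row.unimodal

def group_summary(rows: list[VoiceTaskIES], group: str) -> dict:
    """Mean and SD of both effect scores for one group (needs at least two rows)."""
    effects = [(congruency_effect(r), incongruency_effect(r)) for r in rows if r.group == group]
    cong, incong = zip(*effects)
    return {
        "group": group,
        "congruency_mean_ms": mean(cong), "congruency_sd_ms": stdev(cong),
        "incongruency_mean_ms": mean(incong), "incongruency_sd_ms": stdev(incong),
    }
```

    Group differences in these two effect scores correspond to the Voice-task comparisons shown in the figure "IES (In)congruency effects (voice task)" further down.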

    Audio-tactile integration in congenitally and late deaf cochlear implant users.

    Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss, in particular if no sensory experience is gained within specific early developmental time windows known as sensitive periods. In this study, we investigated whether basic multisensory abilities are impaired in hearing-restored individuals with deafness acquired at different stages of development. To this aim, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that neither congenital nor acquired deafness prevents the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain compared to their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of the reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be "rewired" through auditory reafferentation.
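    The multisensory gain referred to above is commonly expressed as the speed-up of responses to redundant audio-tactile stimuli relative to the faster of the two unimodal conditions. A minimal sketch of that calculation from per-participant median reaction times follows; the relative-gain formula and variable names are illustrative assumptions rather than the exact measure used in the study.

```python
# Sketch: redundancy (multisensory) gain from median reaction times.
# Assumption: gain is expressed relative to the faster unimodal condition;
# the study may have used a different exact definition.
from statistics import median

def multisensory_gain(rt_auditory: list[float],
                      rt_tactile: list[float],
                      rt_audiotactile: list[float]) -> float:
    """Percent speed-up of redundant audio-tactile responses over the best unimodal condition."""
    best_unimodal = min(median(rt_auditory), median(rt_tactile))
    redundant = median(rt_audiotactile)
    return 100.0 * (best_unimodal - redundant) / best_unimodal

# Example with made-up reaction times (ms):
gain = multisensory_gain([420, 450, 435], [380, 395, 410], [350, 365, 372])
print(f"multisensory gain: {gain:.1f}%")  # positive = crossmodal facilitation
```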

    eLearning Project: Biological Psychology

    Non-peer-reviewed

    IES condition differences (face task).

    Inverse efficiency scores (IES, ms) in each condition (unimodal, congruent, incongruent) of the Face task in the CD CI users and their controls (n = 14), ED CI users and their controls (n = 14), and LD CI users and their controls (n = 25). Error bars denote standard deviations. (Marginally) significant condition differences are indicated accordingly.
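    Inverse efficiency scores combine response speed and accuracy into a single measure, conventionally defined as the mean correct reaction time divided by the proportion of correct responses. A minimal sketch of that computation from hypothetical trial-level data (the caption does not specify the exact trial filtering used in the study):

```python
# Sketch: inverse efficiency score (IES) = mean correct RT / proportion correct.
def inverse_efficiency(rts_ms: list[float], correct: list[bool]) -> float:
    """IES in ms; higher values mean less efficient performance."""
    correct_rts = [rt for rt, ok in zip(rts_ms, correct) if ok]
    accuracy = sum(correct) / len(correct)
    return (sum(correct_rts) / len(correct_rts)) / accuracy

# Four correct responses (mean 595 ms) at 80 % accuracy -> IES = 743.75 ms.
print(inverse_efficiency([550, 600, 650, 700, 580], [True, True, True, False, True]))
```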

    Perceived emotion intensity in the voice and the face task.

    Emotion intensity ratings (1 = low, 5 = high) in the CD CI users and their controls (n = 14), ED CI users and their controls (n = 14), and LD CI users and their controls (n = 25), separately for task (Voice task, Face task) and condition (unimodal, congruent, incongruent). Error bars denote standard deviations. (Marginally) significant condition differences are indicated accordingly.

    IES (In)congruency effects (voice task).

    Congruency and incongruency effects in inverse efficiency scores (IES, ms) in the CD (n = 7), ED (n = 7), and LD (n = 13) CI users and their respective controls in the Voice task. Error bars denote standard deviations. (Marginally) significant group differences are indicated accordingly.

    Multisensory facilitation indexed by violation of the race model.

    Cumulative distribution functions of response times to unisensory auditory, unisensory tactile, and crossmodal stimuli for (a) congenitally deaf CI recipients and their age-matched controls and (b) late deaf CI recipients and their age-matched controls. The filled black line (a+t) shows the race-model prediction (summed proportions of the unimodal responses), the at curve marks the violation of the race model, and the a and t curves show responses to single auditory and tactile stimuli, respectively.
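    The race-model comparison shown in this figure follows Miller's inequality: if redundant-signal responses arose from a race between independent unimodal processes, the cumulative distribution of audio-tactile reaction times could never exceed the sum of the two unimodal distributions, P(RT_at ≤ t) ≤ P(RT_a ≤ t) + P(RT_t ≤ t). Below is a minimal sketch of how such a violation can be checked from raw reaction times; the percentile grid and empirical-CDF estimator are assumptions for illustration, not necessarily the procedure used in the study.

```python
# Sketch: test of Miller's race-model inequality from raw reaction times.
# Violation at time t: P(RT_at <= t) > P(RT_a <= t) + P(RT_t <= t).
def ecdf(sample: list[float], t: float) -> float:
    """Empirical cumulative probability of a response by time t."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_model_violations(rt_a: list[float], rt_t: list[float], rt_at: list[float],
                          n_points: int = 10) -> list[tuple[float, float]]:
    """Return (t, excess probability) pairs where the redundant CDF exceeds the race-model bound."""
    lo, hi = min(rt_at), max(rt_at)
    grid = [lo + i * (hi - lo) / (n_points - 1) for i in range(n_points)]
    violations = []
    for t in grid:
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_t, t))  # race-model prediction, capped at 1
        excess = ecdf(rt_at, t) - bound
        if excess > 0:
            violations.append((t, excess))  # positive excess = race-model violation
    return violations
```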

    Correlation between reaction times to unisensory tactile stimuli and age at deafness onset.

    Reaction times (in ms) to unisensory tactile stimuli as a function of age at deafness onset (in years).
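    The relation plotted here is a simple bivariate correlation between tactile reaction time and age at deafness onset. A minimal sketch of computing a Pearson coefficient from such data follows; whether the study used Pearson or a rank-based coefficient is not stated in the caption, and the values below are hypothetical.

```python
# Sketch: Pearson correlation between tactile RT and age at deafness onset.
from math import sqrt

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values: tactile RT (ms) vs. age at deafness onset (years).
print(pearson_r([410, 395, 430, 455, 470], [0.5, 2.0, 6.0, 12.0, 20.0]))
```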