16 research outputs found

    Somatosensory processing in deaf and deafblind individuals: How does the brain adapt as a function of sensory and linguistic experience? A critical review

    Get PDF
    How do deaf and deafblind individuals process touch? This question offers a unique model to understand the prospects and constraints of neural plasticity. Our brain constantly receives and processes signals from the environment and combines them into the most reliable information content. The nervous system adapts its functional and structural organization according to the input, and perceptual processing develops as a function of individual experience. However, there are still many unresolved questions regarding the deciding factors for these changes in deaf and deafblind individuals, and so far, findings are not consistent. To date, most studies have not taken the sensory and linguistic experiences of the included participants into account. As a result, the impact of sensory deprivation vs. language experience on somatosensory processing remains inconclusive. Even less is known about the impact of deafblindness on brain development. The resulting neural adaptations could be even more substantial, but no clear patterns have yet been identified. How do deafblind individuals process sensory input? Studies on deafblindness have mostly focused on single cases or groups of late-blind individuals. Importantly, the language backgrounds of deafblind communities are highly variable and include the usage of tactile languages. So far, this kind of linguistic experience and its consequences have not been considered in studies on basic perceptual functions. Here, we provide a critical review of the literature, aiming to identify determinants of neuroplasticity and gaps in our current knowledge of somatosensory processing in deaf and deafblind individuals.

    EEG frequency-tagging demonstrates increased left hemispheric involvement and crossmodal plasticity for face processing in congenitally deaf signers

    Get PDF
    In humans, face processing relies on a network of brain regions located predominantly in the right occipito-temporal cortex. We tested congenitally deaf (CD) signers and matched hearing controls (HC) to investigate the experience dependence of the cortical organization of face processing. Specifically, we used EEG frequency-tagging to evaluate: (1) Face-Object Categorization, (2) Emotional Facial-Expression Discrimination, and (3) Individual Face Discrimination. The EEG was recorded while visual stimuli were presented at a rate of 6 Hz, with oddball stimuli at a rate of 1.2 Hz. In all three experiments and in both groups, significant face-discriminative responses were found. Face-Object Categorization was associated with a relatively increased involvement of the left hemisphere in CD individuals compared to HC individuals. A similar trend was observed for Emotional Facial-Expression Discrimination but not for Individual Face Discrimination. Source reconstruction suggested greater activation of the auditory cortices in the CD group for Individual Face Discrimination. These findings suggest that the experience dependence of the relative contribution of the two hemispheres, as well as crossmodal plasticity, varies with different aspects of face processing.
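    The abstract names the frequency-tagging design (6 Hz base rate, 1.2 Hz oddball) but not the analysis pipeline. As a rough sketch of how frequency-tagged responses are typically quantified in this kind of paradigm (not necessarily the procedure used in this study), the snippet below estimates the signal-to-noise ratio at the 1.2 Hz oddball frequency and its harmonics from an amplitude spectrum of the EEG, using neighbouring frequency bins as the noise estimate; the function name and the bin-window parameters are illustrative assumptions.

    ```python
    import numpy as np

    def frequency_tagging_snr(eeg, srate, target_freq=1.2, n_harmonics=4,
                              noise_bins=10, skip_bins=1):
        """Estimate SNR at a tagged frequency and its harmonics.

        eeg         : 1-D array, e.g. one channel of the trial-averaged EEG.
        srate       : sampling rate in Hz.
        target_freq : tagged frequency in Hz (1.2 Hz oddball in this design).
        Returns a list of (harmonic frequency, SNR) pairs.
        """
        n = len(eeg)
        amp = np.abs(np.fft.rfft(eeg)) / n            # amplitude spectrum
        freqs = np.fft.rfftfreq(n, d=1.0 / srate)     # frequency of each FFT bin

        results = []
        for h in range(1, n_harmonics + 1):
            idx = int(np.argmin(np.abs(freqs - h * target_freq)))
            # Noise estimate: surrounding bins, excluding those immediately
            # adjacent to the target bin.
            left = amp[max(idx - skip_bins - noise_bins, 0):idx - skip_bins]
            right = amp[idx + skip_bins + 1:idx + skip_bins + 1 + noise_bins]
            noise = np.concatenate([left, right]).mean()
            results.append((freqs[idx], amp[idx] / noise))
        return results
    ```

    In published frequency-tagging work, oddball harmonics that coincide with the base stimulation frequency (here 6 Hz, the fifth harmonic of 1.2 Hz) are usually excluded before summing responses across harmonics.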

    BUA Obj 5 Pilot study

    No full text

    Multisensory emotion perception in congenitally, early, and late deaf CI users.

    No full text
    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. The CI users differed in deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset < 3 years of age), and LD (deafness onset > 3 years of age; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces, and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody, they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.
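    The "efficiency" measure referred to here, and in the inverse efficiency score (IES) figures further down, is conventionally computed as the mean correct reaction time divided by the proportion of correct responses, so that slower and less accurate performance both inflate the score. A minimal sketch of that conventional computation follows; the data layout and column names are assumptions for illustration, not the authors' code.

    ```python
    import pandas as pd

    def inverse_efficiency_scores(trials: pd.DataFrame) -> pd.DataFrame:
        """IES (ms) per participant and condition.

        trials: trial-level data with columns
            'participant', 'condition' ('unimodal', 'congruent', 'incongruent'),
            'rt' (reaction time in ms), 'correct' (1 = correct, 0 = error).
        IES = mean RT of correct trials / proportion of correct trials.
        """
        rows = []
        for (participant, condition), grp in trials.groupby(['participant', 'condition']):
            accuracy = grp['correct'].mean()
            mean_rt = grp.loc[grp['correct'] == 1, 'rt'].mean()
            rows.append({'participant': participant,
                         'condition': condition,
                         'ies_ms': mean_rt / accuracy})
        return pd.DataFrame(rows)
    ```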

    Audio-tactile integration in congenitally and late deaf cochlear implant users.

    No full text
    Several studies conducted in mammals and humans have shown that multisensory processing may be impaired following congenital sensory loss, in particular if no sensory experience is acquired within specific early developmental time windows known as sensitive periods. In this study we investigated whether basic multisensory abilities are impaired in hearing-restored individuals whose deafness was acquired at different stages of development. To this end, we tested congenitally and late deaf cochlear implant (CI) recipients, age-matched with two groups of hearing controls, on an audio-tactile redundancy paradigm in which reaction times to unimodal and crossmodal redundant signals were measured. Our results showed that both congenitally and late deaf CI recipients were able to integrate audio-tactile stimuli, suggesting that neither congenital nor acquired deafness prevents the development and recovery of basic multisensory processing. However, we found that congenitally deaf CI recipients had a lower multisensory gain than their matched controls, which may be explained by their faster responses to tactile stimuli. We discuss this finding in the context of the reorganisation of the sensory systems following sensory loss and the possibility that these changes cannot be "rewired" through auditory reafferentation.
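    In a redundant-signals paradigm, the multisensory gain mentioned above is commonly quantified as the reaction-time advantage of the bimodal (redundant) condition over the faster of the two unimodal conditions. The sketch below implements that common definition as an assumption; it is not taken from the paper, and a fuller analysis would additionally test the bimodal RT distribution against the race-model bound (Miller, 1982), which is omitted here.

    ```python
    import numpy as np

    def multisensory_gain(rt_auditory, rt_tactile, rt_bimodal):
        """Relative redundancy gain from median reaction times (ms).

        gain = (fastest unimodal median RT - bimodal median RT)
               / fastest unimodal median RT
        Positive values mean faster responses to redundant audio-tactile
        stimuli than to the best single modality.
        """
        best_unimodal = min(np.median(rt_auditory), np.median(rt_tactile))
        return (best_unimodal - np.median(rt_bimodal)) / best_unimodal

    # Hypothetical reaction times (ms) for a single participant:
    gain = multisensory_gain(rt_auditory=[412, 398, 430, 405],
                             rt_tactile=[365, 372, 380, 358],
                             rt_bimodal=[341, 352, 336, 348])
    print(f"multisensory gain: {gain:.1%}")   # roughly 6-7% in this made-up example
    ```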

    Perceived emotion intensity in the voice and the face task.

    No full text
    Emotion intensity ratings (1 = low, 5 = high) in the CD CI users and their controls (n = 14), ED CI users and their controls (n = 14), and LD CI users and their controls (n = 25), separately for task (Voice task, Face task) and condition (unimodal, congruent, incongruent). Error bars denote standard deviations. (Marginally) significant condition differences are indicated accordingly.

    IES condition differences (face task).

    No full text
    Inverse efficiency scores (IES, ms) in each condition (unimodal, congruent, incongruent) of the Face task in the CD CI users and their controls (n = 14), ED CI users and their controls (n = 14), and LD CI users and their controls (n = 25). Error bars denote standard deviations. (Marginally) significant condition differences are indicated accordingly.