
    Visual word form processing deficits driven by severity of reading impairments in children with developmental dyslexia

    The visual word form area (VWFA) in the left ventral occipito-temporal (vOT) cortex is key to fluent reading in children and adults. Diminished VWFA activation during print processing tasks is a common finding in subjects with severe reading problems. Here, we report fMRI data from a multicentre study with 140 children in primary school (7.9–12.2 years; 55 children with dyslexia, 73 typical readers, 12 intermediate readers). All performed a semantic task on visually presented words and a matched control task on symbol strings. With this large group of children, including the entire spectrum from severely impaired to highly fluent readers, we aimed to clarify the association of reading fluency and left vOT activation during visual word processing. The results of this study confirm reduced word-sensitive activation within the left vOT in children with dyslexia. Interestingly, the association of reading skills and left vOT activation was especially strong and spatially extended in children with dyslexia. Thus, deficits in basic visual word form processing increase with the severity of reading disability but seem only weakly associated with fluency within the typical reading range, suggesting that reading scores depend linearly on VWFA activation only in the poorest readers.

    Communicative-Pragmatic Assessment Is Sensitive and Time-Effective in Measuring the Outcome of Aphasia Therapy

    A range of methods in clinical research aim to assess treatment-induced progress in aphasia therapy. Here, we used a crossover randomized controlled design to compare the suitability of utterance-centered and dialogue-sensitive outcome measures in speech-language testing. Fourteen individuals with post-stroke chronic non-fluent aphasia each received two types of intensive training in counterbalanced order: conventional confrontation naming, and communicative-pragmatic speech-language therapy (Intensive Language-Action Therapy, an expanded version of Constraint-Induced Aphasia Therapy). Motivated by linguistic-pragmatic theory and neuroscience data, our dependent variables included a newly created diagnostic instrument, the Action Communication Test (ACT). This diagnostic instrument requires patients to produce target words in two conditions: (i) utterance-centered object naming, and (ii) communicative-pragmatic social interaction based on verbal requests. In addition, we administered a standardized aphasia test battery, the Aachen Aphasia Test (AAT). Composite scores on the ACT and the AAT revealed similar patterns of changes in language performance over time, irrespective of the treatment applied. Changes in language performance were relatively consistent with the AAT results also when considering both ACT subscales separately from each other. However, only the ACT subscale evaluating verbal requests proved successful in distinguishing between different types of training in our patient sample. Critically, testing duration was substantially shorter for the entire ACT (10–20 min) than for the AAT (60–90 min). Taken together, the current findings suggest that communicative-pragmatic methods in speech-language testing provide a sensitive and time-effective measure to determine the outcome of aphasia therapy.

    fMR-adaptation indicates selectivity to audiovisual content congruency in distributed clusters in human superior temporal cortex

    Background: Efficient multisensory integration is of vital importance for adequate interaction with the environment. In addition to basic binding cues like temporal and spatial coherence, meaningful multisensory information is also bound together by content-based associations. Many functional Magnetic Resonance Imaging (fMRI) studies propose the (posterior) superior temporal cortex (STC) as the key structure for integrating meaningful multisensory information. However, a still unanswered question is how superior temporal cortex encodes content-based associations, especially in light of inconsistent results from studies comparing brain activation to semantically matching (congruent) versus nonmatching (incongruent) multisensory inputs. Here, we used fMR-adaptation (fMR-A) in order to circumvent potential problems with standard fMRI approaches, including spatial averaging and amplitude saturation confounds. We presented repetitions of audiovisual stimuli (letter-speech sound pairs) and manipulated the associative relation between the auditory and visual inputs (congruent/incongruent pairs). We predicted that if multisensory neuronal populations exist in STC and encode audiovisual content relatedness, adaptation should be affected by the manipulated audiovisual relation.

    Results: The results revealed an occipital-temporal network that adapted independently of the audiovisual relation. Interestingly, several smaller clusters distributed over superior temporal cortex within that network adapted more strongly to congruent than to incongruent audiovisual repetitions, indicating sensitivity to content congruency.

    Conclusions: These results suggest that the revealed clusters contain multisensory neuronal populations that encode content relatedness by selectively responding to congruent audiovisual inputs, since unisensory neuronal populations are assumed to be insensitive to the audiovisual relation. These findings extend our previously revealed mechanism for the integration of letters and speech sounds and demonstrate that fMR-A is sensitive to multisensory congruency effects that may not be revealed in BOLD amplitude per se.

    What infant-directed speech tells us about the development of compensation for assimilation

    In speech addressed to adults, words are seldom realized in their canonical, or citation, form. For example, the word ‘green’ in the phrase ‘green beans’ can often be realized as ‘greem’ due to English place assimilation, where word-final coronals take on the place of articulation of neighboring velars. In such a situation, adult listeners readily ‘undo’ the assimilatory process and perceive the underlying intended lexical form of ‘greem’ (i.e. they access the lexical representation ‘green’). An interesting developmental question is how children, with their limited lexical knowledge, come to cope with phonologically conditioned connected speech processes such as place assimilation. Here, we begin to address this issue by examining the occurrence of place assimilation in the input to English-learning 18-month-olds. Perceptual and acoustic analyses of elicited speech, as well as analysis of a corpus of spontaneous speech, converge on the finding that caregivers do not spoon-feed their children canonical tokens of words. Rather, infant-directed speech contains just as many non-canonical realizations of words in place assimilation contexts as adult-directed speech does. Implications for models of developmental speech perception are discussed.