15 research outputs found

    Predictors of Word and Text Reading Fluency of Deaf Children in Bilingual Deaf Education Programmes

    Published: 25 February 2022. Reading continues to be a challenging task for most deaf children. Bimodal bilingual education creates a supportive environment that stimulates deaf children’s learning through the use of sign language. However, it is still unclear how exposure to sign language might contribute to improving reading ability. Here, we investigate the relative contribution of several cognitive and linguistic variables to the development of word and text reading fluency in deaf children in bimodal bilingual education programmes. The participants of this study were 62 school-aged (8 to 10 years old at the start of the 3-year study) deaf children who took part in bilingual education (using Dutch and Sign Language of The Netherlands) and 40 age-matched hearing children. We assessed vocabulary knowledge in speech and sign, phonological awareness in speech and sign, receptive fingerspelling ability, and short-term memory (STM) at time 1 (T1). At times 2 (T2) and 3 (T3), we assessed word and text reading fluency. We found that (1) speech-based vocabulary strongly predicted word and text reading at T2 and T3, (2) fingerspelling ability was a strong predictor of word and text reading fluency at T2 and T3, (3) speech-based phonological awareness predicted word reading accuracy at T2 and T3 but did not predict text reading fluency, and (4) fingerspelling and STM predicted word reading latency at T2, while sign-based phonological awareness predicted this outcome measure at T3. These results suggest that fingerspelling may have an important function in facilitating the construction of orthographical/phonological representations of printed words for deaf children and strengthening word decoding and recognition abilities. This research received no external funding.

    Investigating language lateralization during phonological and semantic fluency tasks using functional transcranial Doppler sonography.

    Although there is consensus that the left hemisphere plays a critical role in language processing, some questions remain. Here we examine the influence of overt versus covert speech production on lateralization, the relationship between lateralization and behavioural measures of language performance, and the strength of lateralization across the subcomponents of language. The present study used functional transcranial Doppler sonography (fTCD) to investigate lateralization of phonological and semantic fluency during both overt and covert word generation in right-handed adults. The laterality index (LI) was left lateralized in all conditions, and there was no difference in the strength of LI between overt and covert speech. This supports the validity of using overt speech in fTCD studies, another benefit of which is a reliable measure of speech production.

    Evidence for shared conceptual representations for sign and speech

    Do different languages evoke different conceptual representations? If so, the greatest divergence might be expected between languages that differ most in structure, such as sign and speech. Unlike speech bilinguals, hearing sign-speech bilinguals use languages conveyed in different modalities. We used functional magnetic resonance imaging and representational similarity analysis (RSA) to quantify the similarity of semantic representations elicited by the same concepts presented in spoken British English and British Sign Language in hearing, early sign-speech bilinguals. We found shared representations for semantic categories in left posterior middle and inferior temporal cortex. Despite shared category representations, the same spoken words and signs did not elicit similar neural patterns. Thus, contrary to previous univariate activation-based analyses of speech and sign perception, we show that semantic representations evoked by speech and sign are only partially shared. This demonstrates the unique perspective that sign languages and RSA provide in understanding how language influences conceptual representation.

    Stimulus rate increases lateralisation in linguistic and non-linguistic tasks measured by functional transcranial Doppler sonography

    Studies to date that have used fTCD to examine language lateralisation have predominantly used word or sentence generation tasks. Here we sought to further assess the sensitivity of fTCD to language lateralisation by using a metalinguistic task which does not involve novel speech generation: rhyme judgement in response to written words. Line array judgement was included as a non-linguistic visuospatial task to examine the relative strength of left and right hemisphere lateralisation within the same individuals when output requirements of the tasks are matched. These externally paced tasks allowed us to manipulate the number of stimuli presented to participants and thus assess the influence of pace on the strength of lateralisation. In Experiment 1, 28 right-handed adults participated in rhyme and line array judgement tasks and showed reliable left and right lateralisation at the group level for each task, respectively. In Experiment 2 we increased the pace of the tasks, presenting more stimuli per trial. We measured laterality indices (LIs) from 18 participants who performed both linguistic and non-linguistic judgement tasks during the original 'slow' presentation rate (5 judgements per trial) and a fast presentation rate (10 judgements per trial). The increase in pace led to increased strength of lateralisation in both the rhyme and line conditions. Our results demonstrate for the first time that fTCD is sensitive to the left lateralised processes involved in metalinguistic judgements. Our data also suggest that changes in the strength of language lateralisation, as measured by fTCD, are not driven by articulatory demands alone. The current results suggest that at least one aspect of task difficulty, the pace of stimulus presentation, influences the strength of lateralisation during both linguistic and non-linguistic tasks.

    The time course of processing handwritten words: An ERP investigation

    Behavioral studies have shown that the legibility of handwritten script hinders visual word recognition. Furthermore, when compared with printed words, lexical effects (e.g., the word-frequency effect) are magnified for less intelligible (difficult) handwriting (Barnhart & Goldinger, 2010; Perea et al., 2016). This boost has been interpreted in terms of a greater influence of top-down mechanisms during visual word recognition. In the present experiment, we recorded participants' ERPs to uncover top-down processing effects on early perceptual encoding. Participants' behavioral and EEG responses were recorded to high- and low-frequency words that varied in script legibility (printed, easy handwritten, difficult handwritten) in a lexical decision experiment. Behavioral results replicated previous findings: word-frequency effects were larger in the difficult handwritten condition than in the easy handwritten or printed conditions. Critically, the ERP data showed an early effect of word frequency in the N170 that was restricted to the difficult-to-read handwritten condition. These results are interpreted in terms of increased attentional deployment when the bottom-up signal is weak (difficult handwritten stimuli). This attentional boost would enhance top-down effects (e.g., lexical effects) in the early stages of visual word processing.

    Language lateralization of hearing native signers: A functional transcranial Doppler sonography (fTCD) study of speech and sign production

    Neuroimaging studies suggest greater involvement of the left parietal lobe in sign language compared to speech production. This stronger activation might be linked to the specific demands of sign encoding and proprioceptive monitoring. In Experiment 1 we investigated hemispheric lateralization during sign and speech generation in hearing native users of English and British Sign Language (BSL). Participants exhibited stronger lateralization during BSL than English production. In Experiment 2 we investigated whether this increased lateralization index could be due exclusively to the higher motoric demands of sign production. Sign-naïve participants performed a phonological fluency task in English and a non-sign repetition task. Participants were left lateralized in the phonological fluency task, but there was no consistent pattern of lateralization for non-sign repetition in these hearing non-signers. The current data demonstrate stronger left hemisphere lateralization for producing signs than speech, which was not primarily driven by motoric articulatory demands.

    How do face masks impact communication amongst deaf/HoH people?

    Face coverings have been key in reducing the spread of COVID-19. At the same time, they have hindered interpersonal communication, particularly for those who rely on speechreading to aid communication. The available research indicated that deaf/hard of hearing (HoH) people experienced great difficulty communicating with people wearing masks, as well as negative effects on wellbeing. Here we extended these findings by exploring which factors predict deaf/HoH people’s communication difficulties, loss of information, and wellbeing. We also explored the factors predicting the perceived usefulness of transparent face coverings and alternative ways of communicating. We report the findings from an accessible survey study, released in two written and three signed languages. Responses from 395 deaf/HoH UK and Spanish residents were collected online at a time when masks were mandatory. We investigated whether onset and level of deafness, knowledge of sign language, speechreading fluency, and country of residence predicted communication difficulties, wellbeing, and the degree to which transparent face coverings were considered useful. Overall, deaf/HoH people and their relatives used masks most of the time despite greater communication difficulties. Late-onset deaf people were the group that experienced the most difficulties in communication, and they also reported lower wellbeing. However, both early- and late-onset deaf people reported missing more information and feeling more disconnected from society than HoH people. Finally, signers valued transparent face shields more positively than non-signers. The latter suggests that, while seeing the lips is beneficial to everyone, signers also appreciate seeing the whole facial expression. Importantly, our data also revealed the importance of visual communication other than speechreading in facilitating face-to-face interactions.

    Early use of phonological codes in deaf readers: An ERP study

    Previous studies suggest that deaf readers use phonological information of words when it is explicitly demanded by the task itself. However, whether phonological encoding is automatic remains controversial. The present experiment examined whether adult congenitally deaf readers show evidence of automatic use of phonological information during visual word recognition. In an ERP masked priming lexical decision experiment, deaf participants responded to target words preceded by a pseudohomophone (koral – CORAL) or an orthographic control prime (toral – CORAL). Responses were faster for the pseudohomophone than for the orthographic control condition. The N250 and N400 amplitudes were reduced for the pseudohomophone when compared to the orthographic control condition. Furthermore, the magnitude of both the behavioral and the ERP pseudohomophone effects in deaf readers was similar to that of a group of well-matched hearing controls. These findings reveal that phonological encoding is available to deaf readers from the early stages of visual word recognition. Finally, the pattern of correlations of phonological priming with reading ability suggested that the amount of sub-lexical use of phonological information could be a main contributor to reading ability for hearing but not for deaf readers.

    Talk at UCV, April 2019, subtitled

    No full text