
    Monitoring different phonological parameters of sign language engages the same cortical language network but distinctive perceptual ones

    The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine whether brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer reaction times (RTs) and stronger activations in an action observation network in all participants, and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.

    Differential activity in Heschl's gyrus between deaf and hearing individuals is due to auditory deprivation rather than language modality

    Sensory cortices undergo crossmodal reorganisation as a consequence of sensory deprivation. Congenital deafness in humans represents a particular case with respect to other types of sensory deprivation, because cortical reorganisation is not only a consequence of auditory deprivation, but also of language-driven mechanisms. Visual crossmodal plasticity has been found in secondary auditory cortices of deaf individuals, but it is still unclear whether reorganisation also takes place in primary auditory areas, and how this relates to language modality and auditory deprivation. Here, we dissociated the effects of language modality and auditory deprivation on crossmodal plasticity in Heschl's gyrus as a whole, and in cytoarchitectonic region Te1.0 (likely to contain the core auditory cortex). Using fMRI, we measured the BOLD response to viewing sign language in congenitally or early deaf individuals with and without sign language knowledge, and in hearing controls. Results show that differences between hearing and deaf individuals are due to a reduction in activation caused by visual stimulation in the hearing group, which is more significant in Te1.0 than in Heschl's gyrus as a whole. Furthermore, differences between deaf and hearing groups are due to auditory deprivation, and there is no evidence that the modality of language used by deaf individuals contributes to crossmodal plasticity in Heschl's gyrus.

    The signing brain: the neurobiology of sign language

    Most of our knowledge about the neurobiological bases of language comes from studies of spoken languages. By studying signed languages, we can determine whether what we have learnt so far is characteristic of language per se or whether it is specific to languages that are spoken and heard. Overwhelmingly, lesion and neuroimaging studies suggest that the neural systems supporting signed and spoken language are very similar: both involve a predominantly left-lateralised perisylvian network. More recent studies have also highlighted processing differences between languages in these different modalities. These studies provide rich insights into language and communication processes in deaf and hearing people.

    Preexisting semantic representation improves working memory performance in the visuospatial domain

    Working memory (WM) for spoken language improves when the to-be-remembered items correspond to preexisting representations in long-term memory. We investigated whether this effect generalizes to the visuospatial domain by administering a visual n-back WM task to deaf signers and hearing signers, as well as to hearing nonsigners. Four different kinds of stimuli were presented: British Sign Language (BSL; familiar to the signers), Swedish Sign Language (SSL; unfamiliar), nonsigns, and nonlinguistic manual actions. The hearing signers performed better with BSL than with SSL, demonstrating a facilitatory effect of preexisting semantic representation. The deaf signers also performed better with BSL than with SSL, but only when WM load was high. No effect of preexisting phonological representation was detected. The deaf signers performed better than the hearing nonsigners with all sign-based materials, but this effect did not generalize to nonlinguistic manual actions. We argue that deaf signers, who are highly reliant on visual information for communication, develop expertise in processing sign-based items, even when those items do not have preexisting semantic or phonological representations. Preexisting semantic representation, however, enhances the quality of the gesture-based representations temporarily maintained in WM by this group, thereby releasing WM resources to deal with increased load. Hearing signers, on the other hand, may make strategic use of their speech-based representations for mnemonic purposes. The overall pattern of results is in line with flexible-resource models of WM. Funding agencies: Riksbankens jubileumsfond [P2008-0481:1-E]; Economic and Social Research Council of Great Britain [RES-620-28-6001, RES-620-28-0002].
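    For readers unfamiliar with the paradigm, the logic of an n-back task can be sketched in a few lines: an item is a target when it matches the item presented n positions earlier, and increasing n is what raises WM load. The Python sketch below is a generic illustration under assumed names and parameters (the sign labels, sequence length, and target rate are hypothetical), not the materials or procedure used in the study.

    import random

    def generate_nback_sequence(items, length, n, target_rate=0.3, seed=0):
        # Build a stimulus sequence; item labels and parameters are illustrative only.
        rng = random.Random(seed)
        seq = []
        for i in range(length):
            if i >= n and rng.random() < target_rate:
                seq.append(seq[i - n])        # planned target: repeat the item from n steps back
            else:
                seq.append(rng.choice(items))
        return seq

    def score_responses(seq, responses, n):
        # A response is correct when "yes" coincides with a true n-back match.
        hits, false_alarms, targets = 0, 0, 0
        for i, said_yes in enumerate(responses):
            is_target = i >= n and seq[i] == seq[i - n]
            targets += int(is_target)
            if said_yes and is_target:
                hits += 1
            elif said_yes and not is_target:
                false_alarms += 1
        return {"hit_rate": hits / max(targets, 1), "false_alarms": false_alarms}

    # Hypothetical 2-back block over invented sign labels; raising n (e.g. to 3) raises WM load.
    signs = ["BOOK", "TREE", "MILK", "HOUSE", "CAT"]
    sequence = generate_nback_sequence(signs, length=20, n=2)

    In this framing, the "high WM load" condition referred to in the abstract corresponds to a larger n, which forces more items to be held and continuously updated at once.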

    Dissociating cognitive and sensory neural plasticity in human superior temporal cortex

    Disentangling the effects of sensory and cognitive factors on neural reorganization is fundamental for establishing the relationship between plasticity and functional specialization. Auditory deprivation in humans provides a unique insight into this problem, because the origin of the anatomical and functional changes observed in deaf individuals is not only sensory, but also cognitive, owing to the implementation of visual communication strategies such as sign language and speechreading. Here, we describe a functional magnetic resonance imaging study of individuals with different auditory deprivation and sign language experience. We find that sensory and cognitive experience cause plasticity in anatomically and functionally distinguishable substrates. This suggests that after plastic reorganization, cortical regions adapt to process a different type of input signal, but preserve the nature of the computation they perform, both at a sensory and a cognitive level.

    Brain systems mediating semantic and syntactic processing in deaf native signers: Biological invariance and modality specificity

    Studies of written and spoken language suggest that nonidentical brain networks support semantic and syntactic processing. Event-related brain potential (ERP) studies of spoken and written languages show that semantic anomalies elicit a posterior bilateral N400, whereas syntactic anomalies elicit a left anterior negativity, followed by a broadly distributed late positivity. The present study assessed whether these ERP indicators index the activity of language systems specific for the processing of aural-oral language or whether they index neural systems underlying any natural language, including sign language. The syntax of a signed language is mediated through space. Thus the question arises of whether the comprehension of a signed language requires neural systems specific for this kind of code. Deaf native users of American Sign Language (ASL) were presented with signed sentences that were either correct or that contained either a semantic or a syntactic error (one of two types of verb agreement errors). ASL sentences were presented at the natural rate of signing, while the electroencephalogram was recorded. As predicted on the basis of earlier studies, an N400 was elicited by semantic violations. In addition, signed syntactic violations elicited an early frontal negativity and a later posterior positivity. Crucially, the distribution of the anterior negativity varied as a function of the type of syntactic violation, suggesting a unique involvement of spatial processing in signed syntax. Together, these findings suggest that biological constraints and experience shape the development of neural systems important for language.

    Similar digit-based working memory in deaf signers and hearing non-signers despite digit span differences

    Similar working memory (WM) for lexical items has been demonstrated for signers and non-signers, while short-term memory (STM) is regularly poorer in deaf than in hearing individuals. In the present study, we investigated digit-based WM and STM in Swedish and British deaf signers and hearing non-signers. To maintain good experimental control we used printed stimuli throughout and held response mode constant across groups. We showed that, despite shorter digit spans, deaf signers have digit-based WM performance similar to that of well-matched hearing non-signers. We found no difference between signers and non-signers on STM span for letters chosen to minimize phonological similarity, or in the effects of recall direction. This set of findings indicates that similar WM for signers and non-signers can be generalized from lexical items to digits, and suggests that poorer STM in deaf signers compared to hearing non-signers may be due to differences in phonological similarity across the language modalities of sign and speech.

    Superior temporal activation as a function of linguistic knowledge: Insights from deaf native signers who speechread

    Studies of spoken and signed language processing reliably show involvement of the posterior superior temporal cortex. This region is also reliably activated by observation of meaningless oral and manual actions. In this study we directly compared the extent to which activation in posterior superior temporal cortex is modulated by linguistic knowledge irrespective of differences in language form. We used a novel cross-linguistic approach in two groups of volunteers who differed in their language experience. Using fMRI, we compared deaf native signers of British Sign Language (BSL), who were also proficient speechreaders of English (i.e., two languages), with hearing people who could speechread English but knew no BSL (i.e., one language). Both groups were presented with BSL signs and silently spoken English words, and were required to respond to a signed or spoken target. The interaction of group and condition revealed activation in the superior temporal cortex, bilaterally, focused in the posterior superior temporal gyri (pSTC, BA 42/22). In hearing people, these regions were activated more by speech than by sign, but in deaf respondents they showed similar levels of activation for both language forms, suggesting that posterior superior temporal regions are highly sensitive to language knowledge irrespective of the mode of delivery of the stimulus material.
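    The group-by-condition interaction reported here can be read as a difference of differences: the speech-minus-sign difference in pSTC activation is computed within each group and then compared across groups. The sketch below illustrates only that arithmetic; the activation values are invented for illustration and are not data from the study.

    # Hypothetical mean pSTC responses (arbitrary units); values are made up for illustration.
    activation = {
        ("hearing", "speech"): 1.2, ("hearing", "sign"): 0.4,
        ("deaf", "speech"): 1.1, ("deaf", "sign"): 1.0,
    }

    # Simple effects: speech minus sign within each group.
    hearing_diff = activation[("hearing", "speech")] - activation[("hearing", "sign")]
    deaf_diff = activation[("deaf", "speech")] - activation[("deaf", "sign")]

    # Interaction contrast: is the speech-vs-sign difference larger in the hearing group?
    interaction = hearing_diff - deaf_diff
    print(round(hearing_diff, 2), round(deaf_diff, 2), round(interaction, 2))  # 0.8 0.1 0.7

    A large positive interaction term corresponds to the pattern described in the abstract: a clear speech advantage in hearing participants and near-equal responses to speech and sign in deaf participants.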

    Hand and mouth: Cortical correlates of lexical processing in British Sign Language and speechreading English

    Spoken languages use a single set of articulators, the vocal tract, whereas signed languages use multiple articulators, including both manual and facial actions. How sensitive are the cortical circuits for language processing to the particular articulators that are observed? This question can only be addressed with participants who use both speech and a signed language. In this study, we used fMRI to compare speechreading and sign processing in deaf native signers of British Sign Language (BSL) who were also proficient speechreaders. The following questions were addressed: To what extent do these different language types rely on a common brain network? To what extent do the patterns of activation differ? How are these networks affected by the articulators that languages use? Common perisylvian regions were activated both for speechreading English words and for BSL signs. Distinctive activation was also observed reflecting the language form. Speechreading elicited greater activation in the left mid-superior temporal cortex than BSL, whereas BSL processing generated greater activation at the parieto-occipito-temporal junction in both hemispheres. We probed this distinction further within BSL, where manual signs can be accompanied by different sorts of mouth action. BSL signs with speech-like mouth actions showed greater superior temporal activation, while signs made with non-speech-like mouth actions showed more activation in posterior and inferior temporal regions. Distinct regions within the temporal cortex are not only differentially sensitive to perception of the distinctive articulators for speech and for sign, but also show sensitivity to the different articulators within the (signed) language.