
    Evidence for a bimodal bilingual disadvantage in letter fluency

    First published online: 27 May 2016
    Many bimodal bilinguals are immersed in a spoken language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between languages. Nonetheless, early ASL–English bilinguals retrieved fewer words in a letter fluency task in their dominant language than monolingual English speakers with comparable vocabulary levels. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency. Instead, retrieval difficulties likely reflect between-language interference. Furthermore, it suggests that the two languages of bilinguals compete for selection even when they are expressed with distinct articulators. This research was supported by Rubicon grant 446-10-022 from the Netherlands Organization for Scientific Research to Marcel Giezen and NIH grant HD047736 to Karen Emmorey and SDSU.

    Comparing Semantic Fluency in American Sign Language and English

    Published: 04 May 2018
    This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in one minute) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL–English bilinguals. Semantic fluency scores were higher in English (the dominant language) than in ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across the spoken and signed modalities, but fingerspelled responses should be included in ASL fluency scores. This work was supported by the National Institutes of Health [HD047736 to K.E. and SDSU].

    Semantic Integration and Age of Acquisition Effects in Code-Blend Comprehension

    Published: 10 December 2015
    Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when they are presented simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level, rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition. Comprehension facilitation via semantic integration of words and signs is consistent with co-speech gesture research demonstrating facilitative effects of gesture integration on language comprehension. This research was supported by the Netherlands Organization for Scientific Research (NWO Rubicon 446-10-022 to M.G.) and the National Institutes of Health (HD047736 to K.E.).

    Functional Connectivity Reveals Which Language the “Control Regions” Control during Bilingual Production

    Bilingual studies have revealed critical roles for the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (Lcaudate) in controlling language processing, but how these regions manage activation of a bilingual’s two languages remains an open question. We addressed this question by identifying the functional connectivity (FC) of these control regions during a picture-naming task by bimodal bilinguals who were fluent in both a spoken and a signed language. To quantify language control processes, we measured the FC of the dACC and Lcaudate with a region specific to each language modality: left superior temporal gyrus (LSTG) for speech and left pre/postcentral gyrus (LPCG) for sign. Picture-naming occurred in either a single- or dual-language context. The results showed that in a single-language context, the dACC exhibited increased FC with the target language region, but not with the non-target language region. In the dual-language context, in which both languages alternated as the target language, the dACC showed strong FC with the LPCG, the region specific to the less proficient (signed) language. By contrast, the Lcaudate showed strong connectivity with the LPCG in the single-language context and with the LSTG (the region specific to spoken language) in the dual-language context. Our findings suggest that the dACC monitors and supports the processing of the target language, and that the Lcaudate controls the selection of the less accessible language. The results support the hypothesis that language control processes adapt to task demands that vary due to different interactional contexts.
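    The abstract does not specify how functional connectivity was estimated; one common choice is the Pearson correlation between the BOLD time series of two regions of interest. The Python sketch below illustrates that idea only; the variable names (dacc_ts, lstg_ts, lpcg_ts), the random placeholder data, and the use of simple correlation are assumptions, not the authors' pipeline.

    import numpy as np

    def functional_connectivity(ts_a, ts_b):
        # Pearson correlation between two ROI time series (1-D arrays of equal length)
        return float(np.corrcoef(ts_a, ts_b)[0, 1])

    # Hypothetical ROI time series for one condition (e.g., single-language blocks)
    rng = np.random.default_rng(0)
    dacc_ts = rng.standard_normal(200)  # dorsal anterior cingulate cortex
    lstg_ts = rng.standard_normal(200)  # left superior temporal gyrus (speech-specific region)
    lpcg_ts = rng.standard_normal(200)  # left pre/postcentral gyrus (sign-specific region)

    print("dACC-LSTG FC:", functional_connectivity(dacc_ts, lstg_ts))
    print("dACC-LPCG FC:", functional_connectivity(dacc_ts, lpcg_ts))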

    The relation between working memory and language comprehension in signers and speakers

    Available online: 5 May 2017
    This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL–English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often-reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information, and furthermore suggest a less important role for serial encoding in signed than in spoken language comprehension. This research was supported by National Institutes of Health grants DC010997 and HD047736 to Karen Emmorey and San Diego State University.

    Language impairments in the development of sign: Do they reside in a specific modality or are they modality-independent deficits?

    Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways – for example, that the language deficits in specific language impairment (SLI) or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a sign language, provide a unique opportunity to contrast abilities across languages in two modalities (cross-modal bilingualism). The aim of this article is to examine what developmental sign language impairments can tell us about the relationship between language impairments and modality. A series of individual and small-group studies is presented here, illustrating language impairments in sign language users and cross-modal bilinguals and covering Landau-Kleffner syndrome, Williams syndrome, Down syndrome, autism, and SLI. We conclude by suggesting how studies of sign language impairments can help researchers explore how different language impairments originate from different parts of the cognitive, linguistic, and perceptual systems.

    The Perceived Mapping Between Form and Meaning in American Sign Language Depends on Linguistic Knowledge and Task: Evidence from Iconicity and Transparency Judgments

    Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), ‘perceived transparency’ (transparency ratings of the guesses), and ‘semantic potential’ (the diversity, or H index, of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers’ and non-signers’ ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent, and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
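    The ‘semantic potential’ measure above is described only as the diversity (H index) of guesses. Assuming this refers to the Shannon diversity index computed over the distribution of distinct meanings that non-signers guessed for a sign (an assumption, since the formula is not given in the abstract), a minimal Python sketch might look like the following; the example guesses are purely hypothetical.

    import math
    from collections import Counter

    def shannon_h(guesses):
        # Shannon diversity index H = -sum(p_i * ln(p_i)) over distinct guesses;
        # higher H means the guesses for a sign are more varied (less convergent).
        counts = Counter(guesses)
        total = sum(counts.values())
        return -sum((c / total) * math.log(c / total) for c in counts.values())

    # Hypothetical guesses from five non-signers for one ASL sign
    print(shannon_h(["drink", "drink", "cup", "toast", "drink"]))  # about 0.95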