Evidence for a bimodal bilingual disadvantage in letter fluency
First published online: 27 May 2016
Many bimodal bilinguals are immersed in a spoken language-dominant environment from an early age and, unlike unimodal bilinguals, do not necessarily divide their language use between languages. Nonetheless, early ASL–English bilinguals retrieved fewer words in a letter fluency task in their dominant language compared to monolingual English speakers with equal vocabulary levels. This finding demonstrates that reduced vocabulary size and/or frequency of use cannot completely account for bilingual disadvantages in verbal fluency. Instead, retrieval difficulties likely reflect between-language interference. Furthermore, it suggests that the two languages of bilinguals compete for selection even when they are expressed with distinct articulators.
This research was supported by Rubicon grant 446-10-022 from the Netherlands Organization for Scientific Research to Marcel Giezen, and by NIH grant HD047736 to Karen Emmorey and SDSU.
Comparing Semantic Fluency in American Sign Language and English
Published: 04 May 2018
This study investigated the impact of language modality and age of acquisition on semantic fluency in American Sign Language (ASL) and English. Experiment 1 compared semantic fluency performance (e.g., name as many animals as possible in one minute) for deaf native and early ASL signers and hearing monolingual English speakers. The results showed similar fluency scores in both modalities when fingerspelled responses were included for ASL. Experiment 2 compared ASL and English fluency scores in hearing native and late ASL-English bilinguals. Semantic fluency scores were higher in English (the dominant language) than in ASL (the non-dominant language), regardless of age of ASL acquisition. Fingerspelling was relatively common in all groups of signers and was used primarily for low-frequency items. We conclude that semantic fluency is sensitive to language dominance and that performance can be compared across the spoken and signed modalities, but fingerspelled responses should be included in ASL fluency scores.
This work was supported by the National Institutes of Health [HD047736 to K.E. and SDSU].
The Source of Enhanced Cognitive Control in Bilinguals: Evidence From Bimodal Bilinguals
Bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict from competing alternatives. The regular need to select a target language is argued to enhance executive control. We investigated whether this enhancement stems from a general effect of bilingualism (the representation of two languages) or from a modality constraint that forces language selection. Bimodal bilinguals can, but do not always, sign and speak at the same time. Their two languages involve distinct motor and perceptual systems, leading to weaker demands on language control. We compared the performance of 15 monolinguals, 15 bimodal bilinguals, and 15 unimodal bilinguals on a set of flanker tasks. There were no group differences in accuracy, but unimodal bilinguals were faster than the other groups; bimodal bilinguals did not differ from monolinguals. These results trace the bilingual advantage in cognitive control to the unimodal bilingual’s experience controlling two languages in the same modality.
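The abstract reports overall reaction-time differences on flanker tasks; conflict resolution in such tasks is conventionally indexed by the flanker effect, the mean incongruent-trial RT minus the mean congruent-trial RT. As a rough illustration only (the group labels and RT values below are invented, not the study's data):

```python
# Minimal sketch: computing the conventional flanker effect per group
# (mean incongruent RT minus mean congruent RT) from hypothetical trials.
from statistics import mean

# Each trial: (group, condition, reaction_time_ms) -- illustrative values only
trials = [
    ("unimodal_bilingual", "congruent", 512), ("unimodal_bilingual", "incongruent", 561),
    ("bimodal_bilingual",  "congruent", 540), ("bimodal_bilingual",  "incongruent", 602),
    ("monolingual",        "congruent", 538), ("monolingual",        "incongruent", 605),
]

def flanker_effect(group):
    """Mean incongruent RT minus mean congruent RT for one group."""
    by_condition = {
        cond: [rt for g, c, rt in trials if g == group and c == cond]
        for cond in ("congruent", "incongruent")
    }
    return mean(by_condition["incongruent"]) - mean(by_condition["congruent"])

for group in ("monolingual", "bimodal_bilingual", "unimodal_bilingual"):
    print(f"{group}: flanker effect = {flanker_effect(group):.0f} ms")
```

A smaller flanker effect (or faster overall RTs, as reported here) is the kind of measure used to argue for enhanced conflict resolution.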
The Influence of the Visual Modality on Language Structure and Conventionalization: Insights From Sign Language and Gesture
For humans, the ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and (co-speech) gestures. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced in tight semantic and temporal integration with speech and, together with it, constitute an integral part of language. The articles in this issue explore and document how gestures and sign languages are similar or different and how communicative expression in the visual modality can change from being gestural to grammatical in nature through processes of conventionalization. As such, this issue contributes to our understanding of how the visual modality shapes language and the emergence of linguistic structure in newly developing systems. Studying the relationship between signs and gestures provides a new window onto the human ability to recruit multiple levels of representation (e.g., categorical, gradient, iconic, abstract) in the service of using or creating conventionalized communicative systems.
Semantic Integration and Age of Acquisition Effects in Code-Blend Comprehension
Published: 10 December 2015
Semantic and lexical decision tasks were used to investigate the mechanisms underlying code-blend facilitation: the finding that hearing bimodal bilinguals comprehend signs in American Sign Language (ASL) and spoken English words more quickly when the two are presented simultaneously than when each is presented alone. More robust facilitation effects were observed for semantic decision than for lexical decision, suggesting that lexical integration of signs and words within a code-blend occurs primarily at the semantic level rather than at the level of form. Early bilinguals exhibited greater facilitation effects than late bilinguals for English (the dominant language) in the semantic decision task, possibly because early bilinguals are better able to process early visual cues from ASL signs and use these to constrain English word recognition. Comprehension facilitation via semantic integration of words and signs is consistent with co-speech gesture research demonstrating facilitative effects of gesture integration on language comprehension.
This work was supported by the Netherlands Organization for Scientific Research (NWO Rubicon 446-10-022 to M.G.) and the National Institutes of Health (HD047736 to K.E.).
Use of Spatial Communication in Aphasia
Background: Spatial communication consists of both verbal spatial language and gesture. There has been minimal research investigating the use of spatial communication, and even less focussing on people with aphasia.
Aims: The aims of this exploratory study were to describe the frequency and variability of spatial language and gesture use by three participants with aphasia in comparison to nine control participants. This included: 1) frequency of gestures; 2) types of gesture; 3) number of spatial descriptions conveyed by gesture alone, with no accompanying language; and 4) frequency and variety of locative prepositional, verb, and noun phrases.
Methods & Procedures: Each participant was videoed undertaking 11 spatial communication tasks: four description tasks, and seven tasks involving directing the researcher in the placement of objects or pictures. Gestures and language produced were transcribed and analysed.
Outcomes & Results: Participants with aphasia used significantly more gesture than the control participants. Participants with aphasia also used more gesture without spoken phrases when spatial vocabulary was unavailable. Finally, there were differences between the participants with regard to the types of gesture they used when they were unable to access language.
Conclusion & Implications: The results suggest that the analysis of gesture produced by people with aphasia may provide insight into their underlying language impairment. As this was an exploratory study with just three participants with aphasia, further research is needed.
Functional Connectivity Reveals Which Language the “Control Regions” Control during Bilingual Production
Bilingual studies have revealed critical roles for the dorsal anterior cingulate cortex (dACC) and the left caudate nucleus (Lcaudate) in controlling language processing, but how these regions manage activation of a bilingual’s two languages remains an open question. We addressed this question by identifying the functional connectivity (FC) of these control regions during a picture-naming task by bimodal bilinguals who were fluent in both a spoken and a signed language. To quantify language control processes, we measured the FC of the dACC and Lcaudate with a region specific to each language modality: left superior temporal gyrus (LSTG) for speech and left pre/postcentral gyrus (LPCG) for sign. Picture-naming occurred in either a single- or dual-language context. The results showed that in a single-language context, the dACC exhibited increased FC with the target language region, but not with the non-target language region. During the dual-language context, when both languages were alternately the target language, the dACC showed strong FC to the LPCG, the region specific to the less proficient (signed) language. By contrast, the Lcaudate revealed strong connectivity to the LPCG in the single-language context and to the LSTG (the region specific to spoken language) in the dual-language context. Our findings suggest that the dACC monitors and supports the processing of the target language, and that the Lcaudate controls the selection of the less accessible language. The results support the hypothesis that language control processes adapt to task demands that vary due to different interactional contexts.
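The abstract does not state how FC was estimated; in seed-based fMRI analyses it is commonly computed as the correlation between ROI-averaged BOLD time series. A minimal sketch under that assumption, with synthetic signals standing in for the extracted dACC, Lcaudate, LSTG, and LPCG time courses (the study's actual estimator and preprocessing may differ):

```python
# Hedged sketch: seed-based functional connectivity as the Pearson correlation
# between ROI-averaged time series. All signals here are synthetic noise used
# only to show the computation, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 200

# Stand-ins for preprocessed, ROI-averaged BOLD signals
roi_series = {
    "dACC":     rng.standard_normal(n_timepoints),
    "Lcaudate": rng.standard_normal(n_timepoints),
    "LSTG":     rng.standard_normal(n_timepoints),  # speech-specific region
    "LPCG":     rng.standard_normal(n_timepoints),  # sign-specific region
}

def functional_connectivity(seed, target):
    """Pearson correlation between two ROI time series."""
    return np.corrcoef(roi_series[seed], roi_series[target])[0, 1]

for seed in ("dACC", "Lcaudate"):
    for target in ("LSTG", "LPCG"):
        print(f"FC({seed}, {target}) = {functional_connectivity(seed, target):+.2f}")
```

In the study's logic, stronger FC between a control region and the modality-specific region indexes which language that control region is engaging in a given context.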
The relation between working memory and language comprehension in signers and speakers
Available online: 5 May 2017
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often-reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information, and furthermore suggest a less important role for serial encoding in signed than in spoken language comprehension.
This research was supported by National Institutes of Health grants DC010997 and HD047736 to Karen Emmorey and San Diego State University.
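The dissociations reported here (e.g., linguistic and spatial WM jointly predicting retention of spatial information in English) rest on regressing comprehension scores on WM spans. The abstract does not give the model specification; as a rough illustration only, a minimal multiple-regression sketch with invented scores and hypothetical variable names:

```python
# Hedged sketch (not the authors' analysis): regress a comprehension score on
# linguistic and spatial WM span. All values below are made up for illustration.
import numpy as np

# Hypothetical per-participant scores
linguistic_wm = np.array([3.5, 4.0, 2.5, 5.0, 3.0, 4.5, 2.0, 3.8])
spatial_wm    = np.array([4.0, 3.0, 2.0, 4.5, 3.5, 4.0, 2.5, 3.0])
spatial_comprehension = np.array([0.72, 0.65, 0.50, 0.88, 0.64, 0.80, 0.45, 0.60])

# Design matrix with an intercept column, then ordinary least squares
X = np.column_stack([np.ones_like(linguistic_wm), linguistic_wm, spatial_wm])
coefs, *_ = np.linalg.lstsq(X, spatial_comprehension, rcond=None)

intercept, b_linguistic, b_spatial = coefs
print(f"intercept = {intercept:.3f}, "
      f"linguistic WM slope = {b_linguistic:.3f}, spatial WM slope = {b_spatial:.3f}")
```

In a model of this kind, "linguistic WM predicted retention" corresponds to a reliably non-zero slope for the linguistic WM predictor.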
Language impairments in the development of sign: Do they reside in a specific modality or are they modality-independent deficits?
Various theories of developmental language impairments have sought to explain these impairments in modality-specific ways – for example, that the language deficits in specific language impairment (SLI) or Down syndrome arise from impairments in auditory processing. Studies of signers with language impairments, especially those who are bilingual in a spoken language as well as a sign language, provide a unique opportunity to contrast abilities across languages in two modalities (cross-modal bilingualism). The aim of the article is to examine what developmental sign language impairments can tell us about the relationship between language impairments and modality. A series of individual and small-group studies is presented here illustrating language impairments in sign language users and cross-modal bilinguals, comprising Landau-Kleffner syndrome, Williams syndrome, Down syndrome, autism and SLI. We conclude by suggesting how studies of sign language impairments can assist researchers to explore how different language impairments originate from different parts of the cognitive, linguistic and perceptual systems.
The Perceived Mapping Between Form and Meaning in American Sign Language Depends on Linguistic Knowledge and Task: Evidence from Iconicity and Transparency Judgments
Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), ‘perceived transparency’ (transparency ratings of the guesses), and ‘semantic potential’ (the diversity, or H index, of the guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers’ and non-signers’ ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated them for how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
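The abstract quantifies ‘semantic potential’ with an H index over non-signers' guesses. Assuming this is the Shannon-entropy diversity statistic commonly used in sign and gesture norming studies (the abstract does not give the formula), a minimal sketch of the computation, with invented guesses:

```python
# Minimal sketch, assuming the H index is the entropy-style diversity statistic:
# H = -sum(p_i * log2(p_i)), where p_i is the proportion of participants who
# produced guess i for a given sign. The guesses below are invented examples.
from collections import Counter
from math import log2

def h_index(guesses):
    """Diversity (Shannon entropy, in bits) of the guessed meanings for one sign."""
    counts = Counter(guesses)
    total = sum(counts.values())
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A sign everyone guesses the same way has H = 0; more varied guesses raise H,
# i.e. greater semantic potential.
print(h_index(["bird"] * 10))                              # 0.0
print(h_index(["bird"] * 5 + ["duck"] * 3 + ["fly"] * 2))  # ~1.49 bits
```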
