The ASL-CDI 2.0: an updated, normed adaptation of the MacArthur Bates Communicative Development Inventory for American Sign Language
Vocabulary is a critical early marker of language development. The MacArthur Bates Communicative Development Inventory has been adapted to dozens of languages, and provides a bird’s-eye view of children’s early vocabularies which can be informative for both research and clinical purposes. We present an update to the American Sign Language Communicative Development Inventory (the ASL-CDI 2.0, https://www.aslcdi.org), a normed assessment of early ASL vocabulary that can be widely administered online by individuals with no formal training in sign language linguistics. The ASL-CDI 2.0 includes receptive and expressive vocabulary, and a Gestures and Phrases section; it also introduces an online interface that presents ASL signs as videos. We validated the ASL-CDI 2.0 with expressive and receptive in-person tasks administered to a subset of participants. The norming sample presented here consists of 120 deaf children (ages 9 to 73 months) with deaf parents. We present an analysis of the measurement properties of the ASL-CDI 2.0. Vocabulary increases with age, as expected. We see an early noun bias that shifts with age, and a lag between receptive and expressive vocabulary. We present these findings with indications for how the ASL-CDI 2.0 may be used in a range of clinical and research settings.
The Source of Enhanced Cognitive Control in Bilinguals: Evidence From Bimodal Bilinguals
Bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict from competing alternatives. The regular need to select a target language is argued to enhance executive control. We investigated whether this enhancement stems from a general effect of bilingualism (the representation of two languages) or from a modality constraint that forces language selection. Bimodal bilinguals can, but do not always, sign and speak at the same time. Their two languages involve distinct motor and perceptual systems, leading to weaker demands on language control. We compared the performance of 15 monolinguals, 15 bimodal bilinguals, and 15 unimodal bilinguals on a set of flanker tasks. There were no group differences in accuracy, but unimodal bilinguals were faster than the other groups; bimodal bilinguals did not differ from monolinguals. These results trace the bilingual advantage in cognitive control to the unimodal bilingual’s experience controlling two languages in the same modality.
Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.