Monitoring different phonological parameters of sign language engages the same cortical language network but distinctive perceptual ones
The study of signed languages allows the dissociation of sensorimotor and cognitive neural components of the language signal. Here we investigated the neurocognitive processes underlying the monitoring of two phonological parameters of sign languages: handshape and location. Our goal was to determine if brain regions processing sensorimotor characteristics of different phonological parameters of sign languages were also involved in phonological processing, with their activity being modulated by the linguistic content of manual actions. We conducted an fMRI experiment using manual actions varying in phonological structure and semantics: (1) signs of a familiar sign language (British Sign Language), (2) signs of an unfamiliar sign language (Swedish Sign Language), and (3) invented nonsigns that violate the phonological rules of British Sign Language and Swedish Sign Language or consist of nonoccurring combinations of phonological parameters. Three groups of participants were tested: deaf native signers, deaf nonsigners, and hearing nonsigners. Results show that the linguistic processing of different phonological parameters of sign language is independent of the sensorimotor characteristics of the language signal. Handshape and location were processed by different perceptual and task-related brain networks but recruited the same language areas. The semantic content of the stimuli did not influence this process, but phonological structure did, with nonsigns being associated with longer reaction times (RTs) and stronger activations in an action observation network in all participants and in the supramarginal gyrus exclusively in deaf signers. These results suggest higher processing demands for stimuli that contravene the phonological rules of a signed language, independently of previous knowledge of signed languages. We suggest that the phonological characteristics of a language may arise as a consequence of more efficient neural processing for its perception and production.
Segmentation of British Sign Language (BSL): Mind the gap!
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
The acquisition of Sign Language: The impact of phonetic complexity on phonology
Research into the effect of phonetic complexity on phonological acquisition has a long history in spoken languages. This paper considers the effect of phonetics on phonological development in a signed language. We report on an experiment in which nonword-repetition methodology was adapted to examine systematically how phonetic complexity in two phonological parameters of signed languages — handshape and movement — affects the perception and articulation of signs. Ninety-one Deaf children aged 3–11 acquiring British Sign Language (BSL) and 46 hearing nonsigners aged 6–11 repeated a set of 40 nonsense signs. For Deaf children, repetition accuracy improved with age, correlated with wider BSL abilities, and was lowest for signs that were phonetically complex. Repetition accuracy was correlated with fine motor skills for the youngest children. Despite their lower repetition accuracy, the hearing group were similarly affected by phonetic complexity, suggesting that common visual and motoric factors are at play when processing linguistic information in the visuo-gestural modality.
Three event-related potential studies on phonological, morpho-syntactic, and semantic aspects
Sign languages have often been the subject of imaging studies investigating the underlying neural correlates of sign language processing. In contrast, much less research has been conducted on the time-course of sign language processing. There are only a small number of event-related potential (ERP) studies that investigate semantic or morpho-syntactic anomalies in signed sentences. Due to specific properties of the manual-visual modality, sign languages differ from spoken languages in two respects: On the one hand, they are produced in a three-dimensional signing space; on the other hand, sign languages can use several (manual and nonmanual) articulators simultaneously. Thus, sign languages have modality-specific characteristics that have an impact on the way they are processed. This thesis presents three ERP studies on different linguistic aspects processed in German Sign Language (DGS) sentences. Chapter 1 investigates the hypothesis of a forward model perspective on prediction. In a semantic expectation mismatch design, deaf native signers saw videos with DGS sentences that ended in semantically expected or unexpected signs. Since sign languages entail relatively long transition phases between one sign and the next, we tested whether a prediction error of the upcoming sign is already detectable prior to the actual sign onset. Unexpected signs engendered an N400 prior to the critical sign onset that was thus elicited by properties of the transition phase. Chapter 2 presents a priming study on cross-modal cross-language co-activation. Deaf bimodal bilingual participants saw DGS sentences that contained prime-target pairs in one of two priming conditions. In overt phonological priming, prime and target signs were phonologically minimal pairs, while in covert orthographic priming, German translations of prime and target were orthographic minimal pairs, but there was no overlap between the signs. 
Target signs with overt phonological or with covert orthographic overlap engendered a reduced negativity in the electrophysiological signal. Thus, deaf bimodal bilinguals co-activate their second language, (written) German, unconsciously while processing sentences in their native sign language. Chapter 3 presents two ERP studies investigating the morpho-syntactic aspects of agreement in DGS. One study tested DGS sentences with incorrect, i.e. unspecified, agreement verbs; the other study tested DGS sentences with plain verbs that were incorrectly inflected for 3rd person agreement. Agreement verbs that ended in an unspecified location engendered two independent ERP effects: a positive deflection on posterior electrodes (220-570 ms relative to trigger nonmanual cues) and an anterior effect on left frontal electrodes (300-600 ms relative to the sign onset). In contrast, incorrect plain verbs resulted in a broadly distributed positive deflection (420-730 ms relative to the mismatch onset). These results contradict previous findings on agreement violation in sign languages and are discussed as reflecting a violation of well-formedness or processes of context-updating. The stimulus materials of all four studies were consistently embedded in continuously signed sentences presented in non-manipulated videos. This methodological innovation enabled a distinctive perspective on the time-course of sign language processing.
Comparing Iconicity Trade-Offs in Cena and Libras during a Sign Language Production Task
Although classifier constructions generally aim for highly iconic depictions, like any other part of language they may be constrained by phonology. We compare utterances containing motion events between signers of Cena, an emerging rural sign language in Brazil, and Libras, the national sign language of Brazil, to investigate whether a difference in time-depth—a relevant factor in phonological reorganisation—influences trade-offs involving iconicity. First, we find that, contrary to what may be expected given that emerging sign languages exhibit great variation and favour highly iconic prototypes, Cena signers exhibit neither greater variation nor the use of more complex handshapes in classifier constructions. We also report a divergence from findings on Nicaraguan Sign Language (NSL) in how signers encode movement in a young language, showing that Cena signers tend to encode manner and path simultaneously, unlike NSL signers of comparable cohorts. Cena signers therefore pattern more like non-signing gesturers and signers of urban sign languages, including the Libras signers in our study. The study adds to the as-yet limited body of research on classifiers in emerging sign languages, demonstrating how different aspects of linguistic organisation, including phonology, can interact with classifier form.
Translating a Portuguese poem in LIBRAS. Linguistic considerations and form-focused tasks.
Abstract
Teachers of deaf children in primary education are expected to apply sign bilingualism in their teaching, and hence to use a sign language - such as LIBRAS - both as the first language of in-class instruction and as a school subject. This in turn means that all other subjects - among them Portuguese - need to be taught in sign language. Indeed, Portuguese is taught as the second language of deaf children. In such an educational setting, the teacher needs to develop learning materials for LIBRAS. Current research rarely documents such practices, although it is common knowledge that, unofficially, teachers translate existing school materials that were developed in Portuguese for hearing pupils in primary education. In this paper, a LIBRAS translation is presented of the poem “As abelhas” by Vinícius de Moraes, with the aim of demonstrating its linguistic use for the teaching of LIBRAS as a first language. Beyond its target vocabulary items, form-focused tasks are demonstrated, indicating how they can be implemented to develop deaf children’s receptive and productive skills. In doing so, the poem is presented following the A-level descriptors (A1, A2) of the Common European Framework of Reference for Sign Languages.