351 research outputs found

    Viewing angle matters in British Sign Language processing

    Get PDF
    The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing from various viewing angles may be more difficult for late L2 learners of a signed language, who encounter less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate at comprehending signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate the processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should maximise their exposure to diverse signed language input, both in terms of viewing angle and other difficult viewing conditions, to improve comprehension.
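    As a rough illustration of the individual-differences analysis described above, the sketch below correlates simulated per-participant response times with mental rotation scores. All data and variable names are hypothetical stand-ins; the study's actual analysis pipeline is not specified in the abstract.

```python
# Hypothetical sketch: correlating sign-comprehension speed with mental
# rotation ability, in the spirit of the analysis described above.
# The data are simulated; only the correlation logic is illustrated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 40  # hypothetical number of participants

mental_rotation = rng.normal(50, 10, n)  # mental rotation test scores
# Simulate RTs for side-view signs: better rotators respond faster.
rt_side_view_ms = 900 - 3 * mental_rotation + rng.normal(0, 40, n)

r, p = pearsonr(mental_rotation, rt_side_view_ms)
print(f"r = {r:.2f}, p = {p:.3g}")  # expect a negative correlation
```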

    The neural basis of sign language processing in deaf signers: An activation likelihood estimation meta-analysis

    Get PDF
    The neurophysiological response during processing of sign language (SL) has been studied since the advent of Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI). Nevertheless, the neural substrates of SL remain subject to debate, especially with regard to involvement and relative lateralization of SL processing without production in (left) inferior frontal gyrus (IFG; e.g., Campbell, MacSweeney, & Waters, 2007; Emmorey, 2006, 2015). Our present contribution is the first to address these questions meta-analytically, by exploring functional convergence on the whole-brain level using previous fMRI and PET studies of SL processing in deaf signers.

    We screened 163 records in PubMed and Web of Science to identify studies of SL processing in deaf signers conducted with fMRI or PET that reported foci data for one of the two whole-brain contrasts: (1) “SL processing vs. control” or (2) “SL processing vs. low-level baseline”. This resulted in a total of 21 studies reporting 23 experiments matching our selection criteria. We manually extracted foci data and performed a coordinate-based Activation Likelihood Estimation (ALE) analysis using GingerALE (Eickhoff et al., 2009). Our selection criteria and the ALE method allow us to identify regions that are consistently involved in processing SL across studies and tasks.

    Our analysis reveals that processing of SL stimuli of varying linguistic complexity engages widely distributed bilateral fronto-occipito-temporal networks in deaf signers. We find significant clusters in both hemispheres, with the largest cluster (5240 mm3) being located in left IFG, spanning Broca’s region (posterior BA 45 and the dorsal portion of BA 44). Other clusters are located in right middle and inferior temporal gyrus (BA 37), right IFG (BA 45), left middle occipital gyrus (BA 19), right superior temporal gyrus (BA 22), left precentral and middle frontal gyrus (BA 6 and 8), as well as left insula (BA 13). On these clusters, we calculated lateralization indices using hemispheric and anatomical masks: SL comprehension is slightly left-lateralized globally, and strongly left-lateralized in Broca’s region. Sub-regionally, left-lateralization is strongest in BA 44 (Table 1).

    Next, we performed a contrast analysis between SL and an independent dataset of action observation in hearing non-signers (Papitto, Friederici, & Zaccarella, 2019) to determine which regions are associated with processing of human actions and movements irrespective of the presence of linguistic information. Only studies of observation of non-linguistic manual actions were included in the final set (n = 26), for example, excluding the handling of objects. Significant clusters involved in the linguistic aspects of SL comprehension were found in left Broca’s region (centered in dorsal BA 44), right superior temporal gyrus (BA 22), and left middle frontal and precentral gyrus (BA 6 and 8; Figure 1A, B, D and E). Meta-analytic connectivity modelling for the surviving cluster in Broca’s region using the BrainMap database then revealed that it is co-activated with the classical language network and functionally primarily associated with cognition and language processing (Figure 1C and D).
    In line with studies of spoken and written language processing (Zaccarella, Schell, & Friederici, 2017; Friederici, Chomsky, Berwick, Moro, & Bolhuis, 2017), our meta-analysis points to Broca’s region, and especially left BA 44, as a hub in the language network that is involved in language processing independent of modality. Right IFG activity is not language-specific but may be specific to the visuo-gestural modality (Campbell et al., 2007).

    References
    Amunts, K., Schleicher, A., Bürgel, U., Mohlberg, H., Uylings, H. B., & Zilles, K. (1999). Broca’s region revisited: Cytoarchitecture and intersubject variability. The Journal of Comparative Neurology, 412(2), 319-341.
    Campbell, R., MacSweeney, M., & Waters, D. (2007). Sign language and the brain: A review. Journal of Deaf Studies and Deaf Education, 13(1), 3-20. doi: 10.1093/deafed/enm035
    Eickhoff, S. B., Laird, A. R., Grefkes, C., Wang, L. E., Zilles, K., & Fox, P. T. (2009). Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: A random-effects approach based on empirical estimates of spatial uncertainty. Human Brain Mapping, 30(9), 2907-2926. doi: 10.1002/hbm.20718
    Emmorey, K. (2006). The role of Broca’s area in sign language. In Y. Grodzinsky & K. Amunts (Eds.), Broca’s region (pp. 169-184). Oxford, England: Oxford University Press.
    Emmorey, K. (2015). The neurobiology of sign language. In A. W. Toga, P. Bandettini, P. Thompson, & K. Friston (Eds.), Brain mapping: An encyclopedic reference (Vol. 3, pp. 475-479). London, England: Academic Press. doi: 10.1016/B978-0-12-397025-1.00272-4
    Friederici, A. D., Chomsky, N., Berwick, R. C., Moro, A., & Bolhuis, J. J. (2017). Language, mind and brain. Nature Human Behaviour. doi: 10.1038/s41562-017-0184-4
    Matsuo, K., Chen, S.-H. A., & Tseng, W.-Y. I. (2012). AveLI: A robust lateralization index in functional magnetic resonance imaging using unbiased threshold-free computation. Journal of Neuroscience Methods, 205(1), 119-129. doi: 10.1016/j.jneumeth.2011.12.020
    Papitto, G., Friederici, A. D., & Zaccarella, E. (2019). A neuroanatomical comparison of action domains using activation likelihood estimation meta-analysis [Unpublished manuscript]. Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
    Zaccarella, E., Schell, M., & Friederici, A. D. (2017). Reviewing the functional basis of the syntactic Merge mechanism for language: A coordinate-based activation likelihood estimation meta-analysis. Neuroscience & Biobehavioral Reviews, 80, 646-656. doi: 10.1016/j.neubiorev.2017.06.011
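    The lateralization analysis in the meta-analysis above rests on a simple index contrasting left- and right-hemisphere activation, LI = (L − R) / (L + R). The sketch below applies that textbook formula to a toy activation map; the study itself may use a variant (e.g., the AveLI method of Matsuo et al., 2012), and the map and masks here are invented for illustration.

```python
# Minimal sketch of a hemispheric lateralization index (LI).
# LI = (L - R) / (L + R), where L and R summarize activation (e.g.,
# supra-threshold voxel counts or summed ALE values) within left- and
# right-hemisphere masks. Positive LI -> left-lateralized.
# Textbook formula only; the study may use a variant such as AveLI.
import numpy as np

def lateralization_index(ale_map: np.ndarray,
                         left_mask: np.ndarray,
                         right_mask: np.ndarray) -> float:
    left = ale_map[left_mask].sum()
    right = ale_map[right_mask].sum()
    return (left - right) / (left + right)

# Toy example: a 3D "ALE map" with stronger left-hemisphere values.
ale = np.zeros((10, 10, 10))
ale[:5] = 0.8   # pretend x < 5 is the left hemisphere
ale[5:] = 0.4
left = np.zeros_like(ale, dtype=bool)
left[:5] = True
print(f"LI = {lateralization_index(ale, left, ~left):.2f}")  # ~0.33
```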

    FluentSigners-50: A signer independent benchmark dataset for sign language processing

    Get PDF
    This paper presents a new large-scale signer-independent dataset for Kazakh-Russian Sign Language (KRSL) for the purposes of Sign Language Processing. We envision it serving as a new benchmark dataset for performance evaluation of Continuous Sign Language Recognition (CSLR) and Translation (CSLT) tasks. The proposed FluentSigners-50 dataset consists of 173 sentences performed by 50 KRSL signers, resulting in 43,250 video samples. Dataset contributors recorded videos in real-life settings on a wide variety of backgrounds using various devices such as smartphones and web cameras. Therefore, distance to the camera, camera angles and aspect ratio, video quality, and frame rates varied for each dataset contributor. Additionally, the proposed dataset contains a high degree of linguistic and inter-signer variability and is thus a better training set for recognizing real-life sign language. A FluentSigners-50 baseline is established using two state-of-the-art methods, Stochastic CSLR and TSPNet. To this end, we carefully prepared three benchmark train-test splits for model evaluation in terms of signer independence, age independence, and unseen sentences. FluentSigners-50 is publicly available at https://krslproject.github.io/FluentSigners-50/
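    Signer independence here means that no signer appears in both the training and test portions of a split. A minimal sketch of such a group-wise split follows; the file names and signer IDs are hypothetical, and the released benchmark ships its own official splits.

```python
# Hypothetical sketch of a signer-independent train/test split:
# all videos from a given signer end up entirely in train or in test,
# so the model is evaluated on unseen signers. This only illustrates
# the principle; the benchmark provides its own official splits.
from sklearn.model_selection import GroupShuffleSplit

samples = [f"video_{i}.mp4" for i in range(12)]
signers = [i % 4 for i in range(12)]  # 4 signers, 3 videos each

gss = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(gss.split(samples, groups=signers))

# No signer may occur on both sides of the split.
assert not {signers[i] for i in train_idx} & {signers[i] for i in test_idx}
print("held-out signers:", sorted({signers[i] for i in test_idx}))
```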

    Spoken and sign language processing using grammatically augmented ontology

    Get PDF
    A mathematical model of grammatically augmented ontology was introduced to address this issue. The model was used for grammatical analysis of Ukrainian sentences. A domain-specific language named GAODL for describing grammatically augmented ontologies was developed; the grammar of the language was defined by means of the Xtext extension for Eclipse. The developed language was used as an auxiliary part of an information technology for bidirectional Ukrainian sign language translation.
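    The abstract does not show GAODL's syntax, but the core idea of augmenting ontology concepts with grammatical attributes can be sketched as a data model. Everything below is invented for illustration; the paper defines its own DSL via Xtext rather than Python classes.

```python
# Illustrative-only data model: an ontology concept augmented with
# grammatical attributes, as the "grammatically augmented ontology"
# idea suggests. All names and attributes are invented for clarity.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GrammaticalFeatures:
    pos: str                      # part of speech, e.g. "noun"
    gender: Optional[str] = None
    number: Optional[str] = None
    case: Optional[str] = None

@dataclass
class Concept:
    name: str
    lemma: str                    # Ukrainian surface form
    grammar: GrammaticalFeatures
    relations: dict = field(default_factory=dict)

dog = Concept(
    name="Dog",
    lemma="собака",
    grammar=GrammaticalFeatures(pos="noun", gender="masc", number="sg"),
    relations={"is_a": "Animal"},
)
print(dog.lemma, dog.grammar.pos, dog.relations["is_a"])
```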

    JWSign: A Highly Multilingual Corpus of Bible Translations for more Diversity in Sign Language Processing

    Full text link
    Advancements in sign language processing have been hindered by a lack of sufficient data, impeding progress in recognition, translation, and production tasks. The absence of comprehensive sign language datasets across the world's sign languages has widened the gap in this field, resulting in a few sign languages being studied far more than others and leaving the field heavily skewed towards sign languages from high-income countries. In this work we introduce a new large and highly multilingual dataset for sign language translation: JWSign. The dataset consists of 2,530 hours of Bible translations in 98 sign languages, featuring more than 1,500 individual signers. On this dataset, we report neural machine translation experiments. Apart from bilingual baseline systems, we also train multilingual systems, including some that take into account the typological relatedness of signed or spoken languages. Our experiments highlight that multilingual systems are superior to bilingual baselines, and that in higher-resource scenarios, clustering related language pairs improves translation quality. (Findings of EMNLP 2023)
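    One common way to train a single multilingual translation model, and to let related language pairs share capacity, is to prepend a target-language tag to each source sequence. The sketch below shows that preprocessing step only; the tag convention and example pairs are illustrative, and the paper's exact setup (including its typology-based clustering) is not reproduced here.

```python
# Minimal sketch of the target-language-tag trick used in multilingual
# NMT: prepend a token telling the model which language to produce.
# Tags and example pairs are illustrative, not the paper's actual setup.
def tag_source(src_text: str, tgt_lang: str) -> str:
    return f"<2{tgt_lang}> {src_text}"

corpus = [
    ("source features for verse 1", "asl"),  # translate into ASL
    ("source features for verse 2", "bsl"),  # translate into BSL
]
tagged = [tag_source(src, tgt) for src, tgt in corpus]
print(tagged[0])  # "<2asl> source features for verse 1"

# Typology-aware clustering, as explored in the paper, would group
# related language pairs and train one multilingual model per cluster.
```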

    Considerations for meaningful sign language machine translation based on glosses

    Full text link
    Automatic sign language processing is gaining popularity in Natural Language Processing (NLP) research (Yin et al., 2021). In machine translation (MT) in particular, sign language translation based on glosses is a prominent approach. In this paper, we review recent works on neural gloss translation. We find that the limitations of glosses in general, and of specific datasets, are not discussed in a transparent manner, and that there is no common standard for evaluation. To address these issues, we put forward concrete recommendations for future research on gloss translation. Our suggestions advocate awareness of the inherent limitations of gloss-based approaches, realistic datasets, stronger baselines, and convincing evaluation.
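    A common standard for evaluation, as called for above, typically means corpus-level metrics computed with a standardized tool. A minimal sketch with sacrebleu follows; the hypotheses and references are made-up examples, and the paper's argument is about reporting practice, not about any single metric.

```python
# Minimal sketch: standardized corpus-level metrics with sacrebleu,
# the kind of common evaluation standard argued for above.
# Hypotheses and references are made-up examples.
import sacrebleu

hypotheses = ["the weather is nice today", "i go to school tomorrow"]
# One reference stream (the outer list supports multiple references).
references = [["the weather is nice today", "i am going to school tomorrow"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}, chrF = {chrf.score:.1f}")
```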

    Three event-related potential studies on phonological, morpho-syntactic, and semantic aspects

    Get PDF
    Sign languages have often been the subject of imaging studies investigating the underlying neural correlates of sign language processing. By contrast, much less research has been conducted on the time course of sign language processing. Only a small number of event-related potential (ERP) studies investigate semantic or morpho-syntactic anomalies in signed sentences. Due to specific properties of the manual-visual modality, sign languages differ from spoken languages in two respects: they are produced in a three-dimensional signing space, and they can use several (manual and nonmanual) articulators simultaneously. Thus, sign languages have modality-specific characteristics that have an impact on the way they are processed. This thesis presents three ERP studies on different linguistic aspects processed in German Sign Language (DGS) sentences.

    Chapter 1 investigates the hypothesis of a forward-model perspective on prediction. In a semantic expectation mismatch design, deaf native signers saw videos with DGS sentences that ended in semantically expected or unexpected signs. Since sign languages entail relatively long transition phases between one sign and the next, we tested whether a prediction error for the upcoming sign is already detectable prior to the actual sign onset. Unexpected signs engendered an N400 before the critical sign onset, which was thus elicited by properties of the transition phase.

    Chapter 2 presents a priming study on cross-modal cross-language co-activation. Deaf bimodal bilingual participants saw DGS sentences that contained prime-target pairs in one of two priming conditions. In overt phonological priming, prime and target signs were phonological minimal pairs, while in covert orthographic priming, the German translations of prime and target were orthographic minimal pairs, with no overlap between the signs themselves. Target signs with overt phonological or covert orthographic overlap engendered a reduced negativity in the electrophysiological signal. Thus, deaf bimodal bilinguals unconsciously co-activate their second language, written German, while processing sentences in their native sign language.

    Chapter 3 presents two ERP studies investigating morpho-syntactic aspects of agreement in DGS. One study tested DGS sentences with incorrect, i.e. unspecified, agreement verbs; the other tested DGS sentences with plain verbs that incorrectly inflected for third-person agreement. Agreement verbs that ended in an unspecified location engendered two independent ERP effects: a positive deflection at posterior electrodes (220-570 ms relative to the triggering nonmanual cues) and an anterior effect at left frontal electrodes (300-600 ms relative to the sign onset). In contrast, incorrect plain verbs resulted in a broadly distributed positive deflection (420-730 ms relative to the mismatch onset). These results contradict previous findings on agreement violations in sign languages and are discussed as reflecting a violation of well-formedness or processes of context updating.

    Across all four studies, the stimulus materials were continuously signed sentences presented in non-manipulated videos. This methodological innovation enabled a distinctive perspective on the time course of sign language processing.
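    The N400 effects described above are typically quantified as the mean amplitude difference between conditions in a time window around 400 ms. A minimal numpy sketch of that computation follows; the array shapes, sampling rate, window, and data are illustrative placeholders, not the thesis's actual preprocessing or statistics.

```python
# Minimal sketch of an N400-style analysis: average EEG epochs per
# condition and compare mean amplitude in a ~300-500 ms window.
# Shapes, sampling rate, and data are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(1)
sfreq = 250                            # Hz; samples per second
t = np.arange(-0.2, 0.8, 1 / sfreq)    # epoch from -200 to 800 ms

# Simulated single-channel epochs: (n_trials, n_samples) per condition.
expected = rng.normal(0, 1, (60, t.size))
unexpected = rng.normal(0, 1, (60, t.size))
unexpected[:, (t > 0.3) & (t < 0.5)] -= 2.0  # inject N400-like negativity

# ERPs = trial averages; difference wave = unexpected - expected.
erp_diff = unexpected.mean(axis=0) - expected.mean(axis=0)

n400_window = (t >= 0.3) & (t <= 0.5)
print(f"mean N400 effect: {erp_diff[n400_window].mean():.2f} µV")  # ~ -2
```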