35 research outputs found

    How supramodal is the language network? The view from sign language

    One of the major insights of modern linguistics has been that the human capacity for language is not bound to speech but may also be externalized and perceived in the visuo-spatial modality of sign language. Neuroimaging evidence indicates that signed, spoken, and written language is processed in a partially overlapping, primarily left-hemispheric fronto-temporal network (Trettenbrein et al., 2021, Human Brain Mapping). Against this background, this talk will review to what extent and on what grounds anatomical and functional components of the language network can or should reasonably be considered supramodal.

    Neuroscience and syntax


    The neural basis of sign language processing in deaf signers: An activation likelihood estimation meta-analysis

    The neurophysiological response during processing of sign language (SL) has been studied since the advent of Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI). Nevertheless, the neural substrates of SL remain subject to debate, especially with regard to the involvement and relative lateralization of SL processing without production in (left) inferior frontal gyrus (IFG; e.g., Campbell, MacSweeney, & Waters, 2007; Emmorey, 2006, 2015). Our present contribution is the first to address these questions meta-analytically, by exploring functional convergence at the whole-brain level across previous fMRI and PET studies of SL processing in deaf signers.

    We screened 163 records in PubMed and Web of Science to identify fMRI or PET studies of SL processing in deaf signers that reported foci data for one of two whole-brain contrasts: (1) “SL processing vs. control” or (2) “SL processing vs. low-level baseline”. This resulted in a total of 21 studies reporting 23 experiments matching our selection criteria. We manually extracted foci data and performed a coordinate-based Activation Likelihood Estimation (ALE) analysis using GingerALE (Eickhoff et al., 2009). Our selection criteria and the ALE method allow us to identify regions that are consistently involved in processing SL across studies and tasks.

    Our analysis reveals that processing of SL stimuli of varying linguistic complexity engages widely distributed bilateral fronto-occipito-temporal networks in deaf signers. We find significant clusters in both hemispheres, with the largest cluster (5240 mm³) located in left IFG, spanning Broca’s region (posterior BA 45 and the dorsal portion of BA 44). Other clusters are located in right middle and inferior temporal gyrus (BA 37), right IFG (BA 45), left middle occipital gyrus (BA 19), right superior temporal gyrus (BA 22), left precentral and middle frontal gyrus (BA 6 and 8), as well as left insula (BA 13). For these clusters, we calculated lateralization indices using hemispheric and anatomical masks: SL comprehension is slightly left-lateralized globally, and strongly left-lateralized in Broca’s region. Sub-regionally, left-lateralization is strongest in BA 44 (Table 1).

    Next, we performed a contrast analysis between SL and an independent dataset of action observation in hearing non-signers (Papitto, Friederici, & Zaccarella, 2019) to determine which regions are associated with processing of human actions and movements irrespective of the presence of linguistic information. Only studies of observation of non-linguistic manual actions were included in the final set (n = 26), excluding, for example, the handling of objects. Significant clusters involved in the linguistic aspects of SL comprehension were found in left Broca’s region (centered in dorsal BA 44), right superior temporal gyrus (BA 22), and left middle frontal and precentral gyrus (BA 6 and 8; Figure 1A, B, D and E). Meta-analytic connectivity modelling for the surviving cluster in Broca’s region using the BrainMap database then revealed that it is co-activated with the classical language network and functionally primarily associated with cognition and language processing (Figure 1C and D).
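
    The abstract does not spell out how the lateralization indices were computed (the reference list below includes the threshold-free AveLI measure of Matsuo et al., 2012, which the authors may have relied on). A common definition, assumed in the following minimal Python sketch, is LI = (L − R) / (L + R) over the ALE values inside left- and right-hemispheric (or region-specific) masks; all file and variable names here are hypothetical:

```python
import numpy as np
import nibabel as nib  # common library for reading NIfTI brain images

def lateralization_index(ale_path, left_mask_path, right_mask_path):
    """Compute LI = (L - R) / (L + R) over ALE values within two masks.

    Positive values indicate left-lateralization, negative values
    right-lateralization. This is a generic formulation, not the
    authors' documented procedure.
    """
    ale = nib.load(ale_path).get_fdata()
    left_mask = nib.load(left_mask_path).get_fdata() > 0
    right_mask = nib.load(right_mask_path).get_fdata() > 0
    left, right = ale[left_mask].sum(), ale[right_mask].sum()
    return (left - right) / (left + right)

# Hypothetical usage with made-up file names:
# li_ba44 = lateralization_index("ale_map.nii.gz",
#                                "ba44_left.nii.gz", "ba44_right.nii.gz")
```
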
    In line with studies of spoken and written language processing (Zaccarella, Schell, & Friederici, 2017; Friederici, Chomsky, Berwick, Moro, & Bolhuis, 2017), our meta-analysis points to Broca’s region, and especially left BA 44, as a hub in the language network that is involved in language processing independent of modality. Right IFG activity is not language-specific but may be specific to the visuo-gestural modality (Campbell et al., 2007).

    References

    Amunts, K., Schleicher, A., Bürgel, U., Mohlberg, H., Uylings, H. B., & Zilles, K. (1999). Broca’s region revisited: Cytoarchitecture and intersubject variability. The Journal of Comparative Neurology, 412(2), 319-341.
    Campbell, R., MacSweeney, M., & Waters, D. (2007). Sign language and the brain: A review. Journal of Deaf Studies and Deaf Education, 13(1), 3-20. doi: 10.1093/deafed/enm035
    Eickhoff, S. B., Laird, A. R., Grefkes, C., Wang, L. E., Zilles, K., & Fox, P. T. (2009). Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: A random-effects approach based on empirical estimates of spatial uncertainty. Human Brain Mapping, 30(9), 2907-2926. doi: 10.1002/hbm.20718
    Emmorey, K. (2006). The role of Broca’s area in sign language. In Y. Grodzinsky & K. Amunts (Eds.), Broca’s region (p. 169-184). Oxford, England: Oxford UP.
    Emmorey, K. (2015). The neurobiology of sign language. In A. W. Toga, P. Bandettini, P. Thompson, & K. Friston (Eds.), Brain mapping: An encyclopedic reference (Vol. 3, p. 475-479). London, England: Academic Press. doi: 10.1016/B978-0-12-397025-1.00272-4
    Friederici, A. D., Chomsky, N., Berwick, R. C., Moro, A., & Bolhuis, J. J. (2017). Language, mind and brain. Nature Human Behaviour. doi: 10.1038/s41562-017-0184-4
    Matsuo, K., Chen, S.-H. A., & Tseng, W.-Y. I. (2012). AveLI: A robust lateralization index in functional magnetic resonance imaging using unbiased threshold-free computation. Journal of Neuroscience Methods, 205(1), 119-129. doi: 10.1016/j.jneumeth.2011.12.020
    Papitto, G., Friederici, A. D., & Zaccarella, E. (2019). A neuroanatomical comparison of action domains using Activation Likelihood Estimation meta-analysis [Unpublished Manuscript, Max Planck Institute for Human Cognitive & Brain Sciences]. Leipzig, Germany.
    Zaccarella, E., Schell, M., & Friederici, A. D. (2017). Reviewing the functional basis of the syntactic Merge mechanism for language: A coordinate-based activation likelihood estimation meta-analysis. Neuroscience & Biobehavioral Reviews, 80, 646-656. doi: 10.1016/j.neubiorev.2017.06.011

    Sobre el estudio biológico del lenguaje 50 años después. Una conversación con Noam Chomsky

    On the biological study of language 50 years later: A conversation with Noam Chomsky. (Spanish translation of “50 years later: A conversation about the biological study of language with Noam Chomsky”, which originally appeared in Biolinguistics 11.SI: 487–499 in 2017.)

    Controlling video stimuli in sign language and gesture research: The OpenPoseR package for analyzing OpenPose motion tracking data in R

    Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have mostly been controlled only for some technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor’s movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a marker-based or other dedicated motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using OpenPose, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used to control for differences in the movements of an actor across stimulus video clips or, for example, between different conditions of an experiment. In addition, the package provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.
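
    The quantities OpenPoseR derives (velocity and acceleration of an actor’s movements) can be illustrated with a minimal sketch. This is not the OpenPoseR API (which is an R package); it is a hypothetical Python re-implementation of the underlying finite-difference computation, assuming OpenPose’s per-frame keypoint output has already been parsed into a NumPy array:

```python
import numpy as np

def motion_measures(keypoints, fps):
    """Velocity and acceleration from per-frame 2D keypoint positions.

    keypoints: array of shape (n_frames, n_keypoints, 2) holding the
               (x, y) coordinates fitted by a pose model such as OpenPose.
    fps:       frame rate of the clip, used to convert frame-to-frame
               differences into per-second measures.
    """
    dt = 1.0 / fps
    # Euclidean displacement of every keypoint between consecutive frames
    displacement = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1)
    velocity = displacement / dt            # shape: (n_frames - 1, n_keypoints)
    acceleration = np.diff(velocity, axis=0) / dt
    # One summary value per clip: total amount of motion across keypoints
    return velocity.sum(), np.abs(acceleration).sum()

# Hypothetical usage: compare overall motion in two stimulus clips
# v_a, a_a = motion_measures(keypoints_clip_a, fps=50)
# v_b, a_b = motion_measures(keypoints_clip_b, fps=50)
```

    Summary values like these make it possible to check that, say, two experimental conditions do not systematically differ in how much the actor moves.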

    Sprache ist mehr als Sprechen: Eine kognitionswissenschaftliche Betrachtung

    As different as the world’s more than 7,000 languages may appear at first glance, one thing unites them all: they follow grammatical rules that combine words into sentences, regardless of whether language is spoken, written, or signed. Where does this diversity of languages come from? And why is our capacity for language not bound to one particular form? A cognitive-science perspective.

    Biolinguistics end-of-year notice 2022


    Psycholinguistic norms for more than 300 lexical signs in German Sign Language (DGS)

    Sign language offers a unique perspective on the human faculty of language by illustrating that linguistic abilities are not bound to speech and writing. In studies of spoken and written language processing, lexical variables such as age of acquisition have been found to play an important role, but such information has not yet been available for German Sign Language (Deutsche Gebärdensprache, DGS). Here, we present a set of norms for frequency, age of acquisition, and iconicity for more than 300 lexical DGS signs, derived from subjective ratings by 32 deaf signers. We also provide additional norms for iconicity and transparency for the same set of signs, derived from ratings by 30 hearing non-signers. In addition to empirical norming data, the dataset includes machine-readable information about a sign’s correspondence in German and English, as well as annotations of lexico-semantic and phonological properties: one-handed vs. two-handed, place of articulation, most likely lexical class, animacy, verb type, (potential) homonymy, and potential dialectal variation. Finally, we include information about sign onset and offset for all stimulus clips, derived from automated motion-tracking data. All norms, stimulus clips, data, and code used for analysis are made available through the Open Science Framework (https://osf.io/mz8j4) in the hope that they may prove useful to other researchers.
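
    The abstract does not specify how sign onset and offset were derived from the motion-tracking data. One simple, plausible approach, sketched below in Python with a hypothetical velocity threshold and input array, is to take the first and last frames at which wrist velocity exceeds a resting level:

```python
import numpy as np

def sign_onset_offset(wrist_xy, fps, threshold=0.05):
    """Estimate sign onset/offset (in seconds) from wrist positions.

    wrist_xy:  array of shape (n_frames, 2) with tracked wrist coordinates
               (e.g., from a pose model), in normalized image units.
    threshold: velocity (units per second) above which the hand counts as
               moving -- a hypothetical value that would need to be
               calibrated against manual annotations.
    """
    velocity = np.linalg.norm(np.diff(wrist_xy, axis=0), axis=1) * fps
    moving = np.flatnonzero(velocity > threshold)
    if moving.size == 0:
        return None, None  # no movement detected in this clip
    onset = moving[0] / fps
    offset = (moving[-1] + 1) / fps
    return onset, offset
```
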

    Psycholinguistic norms for more than 300 lexical manual signs in German Sign Language (DGS)

    Sign languages provide researchers with an opportunity to ask empirical questions about the human language faculty that go beyond considerations specific to speech and writing. Whereas psycholinguists working with spoken and written language stimuli routinely control their materials for parameters such as lexical frequency and age of acquisition (AoA), no such information or normed stimulus sets are currently available to researchers working with German Sign Language (DGS). Our contribution presents the first norms for iconicity, familiarity, AoA, and transparency for DGS. The normed stimulus set consists of more than 300 clips of manual DGS signs accompanied by mouthings and non-manual components.

    Norms for the signs in the clips are derived from ratings by a total of 30 deaf signers in Leipzig, Göttingen, and Hamburg, as well as 30 hearing non-signers and native speakers of German in Leipzig. The rating procedure was implemented in a browser to ensure the same functionality and a similar procedure across locations and participants (Figure 1a), yet all participants performed the ratings on site in the presence of an experimenter. Deaf signers performed three tasks in which they rated stimulus clips for iconicity, AoA, and familiarity. Such subjective measures of AoA and familiarity have been shown to be good proxies for corpus measures in studies of other spoken and sign languages (Vinson, Cormier, Denmark, Schembri, & Vigliocco, 2008). Hearing non-signers performed two tasks: they first guessed the meaning of the signs in the clips to determine transparency and then rated iconicity given the meaning.

    In addition to empirical norming data (e.g., Figure 1b), we provide information about German and English correspondences of signs. The stimulus set has been annotated in machine-readable form with regard to lexico-semantic as well as phonological properties of signs: one-handed vs. two-handed, place of articulation, path movement, symmetry, most likely lexical class, animacy, verb type, (potential) homonymy, and potential dialectal variation. Information about sign on- and offset for all stimulus clips and a number of quantitative measures of movement are also available. These were derived from automated motion tracking by fitting a pose-estimation model (Figure 1c) to the clips using OpenPose (Wei, Ramakrishna, Kanade, & Sheikh, 2016), which allows us to quantify and automatically track movement (velocity and acceleration) beyond annotation (Figure 1d).

    In this presentation, we will focus on providing an overview of the derived norms and attempt to put them in perspective of published empirical norms for other sign languages, for example, ASL and BSL (Vinson et al., 2008; Caselli, Sehyr, Cohen-Goldberg, & Emmorey, 2017), as well as comparable information for spoken languages. This includes a comparison of our subjective frequency and AoA ratings for DGS signs with norms for other sign languages, as well as with similar measures for German and English. We also discuss the relationship of mean iconicity ratings between deaf signers and hearing non-signers, as well as the relation between iconicity and transparency. Our norms and stimulus set are intended to help control for psychologically relevant parameters in future psycho- and neurolinguistic studies of DGS beyond the work of our own labs.
    Consequently, the norms, stimulus clips, cleaned raw data, and the R scripts used for analysis will be made available for download through the Open Science Framework.

    References

    Caselli, N. K., Sehyr, Z. S., Cohen-Goldberg, A. M., & Emmorey, K. (2017). ASL-LEX: A lexical database of American Sign Language. Behavior Research Methods, 49(2), 784-801. doi: 10.3758/s13428-016-0742-0
    Vinson, D. P., Cormier, K., Denmark, T., Schembri, A., & Vigliocco, G. (2008). The British Sign Language (BSL) norms for age of acquisition, familiarity, and iconicity. Behavior Research Methods, 40(4), 1079-1087. doi: 10.3758/BRM.40.4.1079
    Wei, S.-E., Ramakrishna, V., Kanade, T., & Sheikh, Y. (2016). Convolutional pose machines. arXiv:1602.00134 [cs]
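
    The deaf vs. hearing comparison of mean iconicity ratings mentioned above amounts to a per-sign correlation between the two groups. A minimal Python sketch; the file and column names are assumptions for illustration, not the dataset’s actual layout:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical file and column names; one row per sign, with the mean
# iconicity rating of each group in its own column.
ratings = pd.read_csv("dgs_norms.csv")

# Rank-based (Spearman) correlation of mean iconicity ratings per sign:
# deaf signers vs. hearing non-signers.
rho, p = spearmanr(ratings["iconicity_deaf"], ratings["iconicity_hearing"])
print(f"Spearman's rho = {rho:.2f} (p = {p:.3f})")
```
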

    A meta-analytic perspective on data sharing and reproducibility in cognitive neuroscience of sign language

    The acquisition of most neuroimaging data, and especially magnetic resonance imaging (MRI) data, is laborious and cost-intensive. Nevertheless, the practice of data sharing is not commonplace in the field of cognitive neuroscience of sign language. We have recently completed the first-ever meta-analysis of the neuroimaging literature on sign language to identify brain regions consistently involved in processing of sign language across studies and paradigms (Trettenbrein et al., forthcoming), in the course of which we encountered a variety of obstacles relating to data sharing and reproducibility. This presentation will recapitulate the issues we faced when carrying out our meta-analysis. Against this background, we will discuss how adopting Open Science practices and infrastructure for data sharing and reproducibility that are already established in neuroimaging at large may benefit future (meta-analytic) work. We end by sketching how Open Science ideas have been implemented in our own ongoing work on sign language.