54 research outputs found

    Beyond Bilingual Programming: Interpreter Education in the U.S. Amidst Increasing Linguistic Diversity

    The purpose of this study was to determine the current state of educational opportunities for college- and university-level students who wish to incorporate Spanish into their study of ASL–English interpretation. The number of Spanish–English–ASL interpreters is growing at a rapid pace in the United States, and demand for such interpreters is notable—especially in video relay service settings (Quinto-Pozos, Alley, Casanova de Canales, & Treviño, 2015; Quinto-Pozos, Casanova de Canales, & Treviño, 2010). Unfortunately, there appear to be few educational programs that prepare students for such multilingual interpreting. The number of these programs is currently not known (in that information has not been reported in publications, on the Internet, or in social-media sources), and one goal of this research was to gather information about such programs and relevant trilingual content that interpreter educators may incorporate in their classes. This study offers a number of suggestions to interpreter education programs that enroll multilingual student interpreters.

    Pronouns in ASL-English Simultaneous Interpretation

    Pronominal systems across languages mark grammatical categories in different ways, and this can pose challenges for simultaneous interpretation. Pronouns can also be ambiguous, for example, by collapsing distinctions in some forms or by resembling demonstratives. We examine pronouns produced by a Deaf signer of American Sign Language (ASL) within a TEDx talk and how they are interpreted (simultaneously) by an ASL–English interpreter. Pronouns from both languages were coded and scrutinized for semantic correspondence across the two languages. Robust correspondences were found with some personal pronouns, especially first-person forms. However, mismatches across languages, in particular third-person forms and demonstratives, provide evidence of pitfalls for interpretation. In particular, we suggest that the ambiguous nature of some forms (e.g., third-person pronouns and singular demonstratives) can cause challenges for simultaneous interpretation across modalities.

    Lexicalisation and de-lexicalisation processes in sign languages: Comparing depicting constructions and viewpoint gestures

    In this paper, we compare so-called “classifier” constructions in signed languages (which we refer to as “depicting constructions”) with comparable iconic gestures produced by non-signers. We show clear correspondences between entity constructions and observer-viewpoint gestures on the one hand, and handling constructions and character-viewpoint gestures on the other. Such correspondences help account for both lexicalisation and de-lexicalisation processes in signed languages and how these processes are influenced by viewpoint. Understanding these processes is crucial when coding and annotating natural sign language data.

    Disability does not negatively impact linguistic visual-spatial processing for hearing adult learners of a signed language

    The majority of adult learners of a signed language are hearing and have little to no experience with a signed language. Thus, they must simultaneously learn a specific language and how to communicate within the visual-gestural modality. Past studies have examined modality-unique drivers of acquisition among first and second signed language learners. In the former group, atypically developing signers have provided a unique axis—namely, disability—for analyzing the intersection of language, modality, and cognition. Here, we extend the question of how cognitive disabilities affect signed language acquisition to a novel audience: hearing, second language (L2) learners of a signed language. We ask whether disability status influences the processing of spatial scenes (perspective taking) and short sentences (phonological contrasts), two aspects of the learning of a signed language. For the methodology, we conducted a secondary, exploratory analysis of a data set including college-level American Sign Language (ASL) students. Participants completed an ASL phonological-discrimination task as well as non-linguistic and linguistic (ASL) versions of a perspective-taking task. Accuracy and response time measures for the tests were compared between a disability group with self-reported diagnoses (e.g., ADHD, learning disability) and a neurotypical group with no self-reported diagnoses. The results revealed that the disability group collectively had lower accuracy compared to the neurotypical group only on the non-linguistic perspective-taking task. Moreover, the group of students who specifically identified as having a learning disability performed worse than students who self-reported other categories of disabilities affecting cognition. We interpret these findings as demonstrating, crucially, that the signed modality itself does not generally disadvantage disabled and/or neurodiverse learners, even those who may exhibit challenges in visuospatial processing. We recommend that signed language instructors specifically support and monitor students labeled with learning disabilities to ensure development of visual-spatial skills and processing in signed language.

    Visible Cohesion: A Comparison of Reference Tracking in Sign, Speech, and Co-Speech Gesture

    Establishing and maintaining reference is a crucial part of discourse. In spoken languages, differential linguistic devices mark referents occurring in different referential contexts, that is, introduction, maintenance, and re-introduction contexts. Speakers using gestures as well as users of sign languages have also been shown to mark referents differentially depending on the referential context. This article investigates the modality-specific contribution of the visual modality in marking referential context by providing a direct comparison between sign language (German Sign Language; DGS) and co-speech gesture with speech (German) in elicited narratives. Across all forms of expression, we find that referents in subject position are referred to with more marking material in re-introduction contexts compared to maintenance contexts. Furthermore, we find that spatial modification is used as a modality-specific strategy in both DGS and German co-speech gesture, and that the configuration of referent locations in sign space and gesture space corresponds in an iconic and consistent way to the locations of referents in the narrated event. However, we find that spatial modification is used in different ways for marking re-introduction and maintenance contexts in DGS and German co-speech gesture. The findings are discussed in relation to the unique contribution of the visual modality to reference tracking in discourse when it is used in a unimodal system with full linguistic structure (i.e., as in sign) versus in a bimodal system that is a composite of speech and gesture.

    Pantomime (Not Silent Gesture) in Multimodal Communication: Evidence From Children’s Narratives

    Pantomime has long been considered distinct from co-speech gesture. It has therefore been argued that pantomime cannot be part of gesture-speech integration. We examine pantomime as distinct from silent gesture, focusing on non-co-speech gestures that occur in the midst of children’s spoken narratives. We propose that gestures with features of pantomime are an infrequent but meaningful component of a multimodal communicative strategy. We examined spontaneous non-co-speech representational gesture production in the narratives of 30 monolingual English-speaking children between the ages of 8 and 11 years. We compared the use of co-speech and non-co-speech gestures in both autobiographical and fictional narratives and examined viewpoint and the use of non-manual articulators, as well as the length of responses and narrative quality. The use of non-co-speech gestures was associated with longer narratives of equal or higher quality than those using only co-speech gestures. Non-co-speech gestures were most likely to adopt character viewpoint and use non-manual articulators. The present study supports a deeper understanding of the term pantomime and its multimodal use by children in the integration of speech and gesture.