
    Evaluating American Sign Language Generation Through the Participation of Native ASL Signers

    We discuss important factors in the design of evaluation studies for systems that generate animations of American Sign Language (ASL) sentences. In particular, we outline how some cultural and linguistic characteristics of members of the American Deaf community must be taken into account so as to ensure the accuracy of evaluations involving these users. Finally, we describe our implementation and user-based evaluation (by native ASL signers) of a prototype ASL generator that produces sentences containing classifier predicates, which are frequent and complex spatial phenomena that previous ASL generators have not produced.

    Best practices for conducting evaluations of sign language animation

    Automatic synthesis of linguistically accurate and natural-looking American Sign Language (ASL) animations would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. Based on several years of studies, we identify best practices for conducting experimental evaluations of sign language animations with feedback from deaf and hard-of-hearing users. First, we describe our techniques for identifying and screening participants, and for controlling the experimental environment. We then discuss rigorous methodological research on how experiment design affects study outcomes when evaluating sign language animations. Our discussion focuses on stimuli design, the effect of using videos as an upper baseline, using videos for presenting comprehension questions, and eye-tracking as an alternative to recording question-responses.

    Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations

    While there are many Deaf or Hard of Hearing (DHH) individuals with excellent reading literacy, there are also some DHH individuals who have lower English literacy. American Sign Language (ASL) is not simply a method of representing English sentences. It is possible for an individual to be fluent in ASL, while having limited fluency in English. To overcome this barrier, we aim to make it easier to generate ASL animations for websites, through the use of motion-capture data recorded from human signers to build different predictive models for ASL animations; our goal is to automate this aspect of animation synthesis to create realistic animations. This dissertation consists of several parts: Part I defines key terminology for timing and speed parameters, and surveys literature on prior linguistic and computational research on ASL. Next, the motion-capture data that our lab recorded from human signers is discussed, and details are provided about how we enhanced this corpus to make it useful for speed and timing research. Finally, we present the process of adding layers of linguistic annotation and processing this data for speed and timing research. Part II presents our research on data-driven predictive models for various speed and timing parameters of ASL animations. The focus is on (1) predicting the existence of pauses after each ASL sign, (2) predicting the time duration of these pauses, and (3) predicting the change of speed for each ASL sign within a sentence. We measure the quality of the proposed models by comparing our models with state-of-the-art rule-based models. Furthermore, using these models, we synthesized ASL animation stimuli and conducted a user-based evaluation with DHH individuals to measure the usability of the resulting animation. Finally, Part III presents research on whether the timing parameters individuals prefer for animation may differ from those in recordings of human signers.
It also includes research investigating the distribution of acceleration curves in recordings of human signers, and whether utilizing a similar set of curves in ASL animations leads to measurable improvements in DHH users' perception of animation quality.
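The pause-prediction task described in Part II can be illustrated with a toy data-driven model. This is a minimal sketch only, not the dissertation's actual models: the corpus entries, the `clause_final` feature, and the millisecond values are hypothetical stand-ins for the lab's annotated motion-capture data.

```python
# Sketch of a minimal data-driven pause model: estimate the mean pause
# duration after a sign, keyed by whether the sign ends a clause.
# (Hypothetical feature and data; the real models use richer features.)
from collections import defaultdict

def train_pause_model(corpus):
    """Return mean pause duration (ms) for clause-final vs. other signs."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for sign in corpus:
        key = sign["clause_final"]
        sums[key] += sign["pause_ms"]
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

# Hypothetical annotated recordings: each entry is one sign token.
corpus = [
    {"clause_final": True,  "pause_ms": 240},
    {"clause_final": True,  "pause_ms": 200},
    {"clause_final": False, "pause_ms": 30},
    {"clause_final": False, "pause_ms": 10},
]
model = train_pause_model(corpus)  # longer pauses at clause boundaries
```

A rule-based baseline of the kind the dissertation compares against would instead assign a fixed pause duration at every clause boundary, regardless of the recorded human data.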

    American Sign Language-Sentence Reproduction Test: Development & Implications

    The deaf community is widely heterogeneous in its language background. Widespread variation in fluency exists even among users of American Sign Language (ASL), the natural gestural language used by deaf people in North America. This variability is a source of unwanted noise in many psycholinguistic and pedagogical studies. Our aim is to develop a quantitative test of ASL fluency to allow researchers to measure and make use of this variability. We present a new test paradigm for assessing ASL fluency modeled after the Speaking Grammar Subtest of the Test of Adolescent and Adult Language, 3rd Edition (TOAL-3; Hammill, Brown, Larsen, & Wiederholt, 1994). The American Sign Language-Sentence Reproduction Test (ASL-SRT) requires participants to watch computer-displayed video clips of a native signer signing sentences of increasing length and complexity. After viewing each sentence, the participant has to sign back the sentence just viewed. We review the development of appropriate test sentences, rating procedures and inter-rater reliability, and show how our preliminary version of the test already distinguishes between hearing and deaf users of ASL, as well as native and non-native users.

    Heritage signers: language profile questionnaire

    The instruction of American Sign Language historically has employed a foreign language pedagogy; however, research has shown foreign language teaching methods do not address the distinct pedagogical needs of heritage language learners. Framing deaf-parented individuals as heritage language learners capitalizes on the wealth of research on heritage speakers, particularly of Spanish. This study seeks to address three issues. First, it seeks to ascertain whether the assessment instrument developed successfully elicits pedagogically relevant data from deaf-parented individuals that frames them as heritage language learners of ASL. Second, it seeks to draw similarities between the experiences of deaf-parented individuals in the United States and heritage speakers of spoken languages such as Spanish. Third, after considering the first two, it addresses the question of whether deaf-parented individuals may therefore benefit from the pedagogical theory of heritage language learners. Using quantitative and qualitative methodologies, an assessment instrument was distributed to individuals over 18 years of age who were raised by at least one deaf parent and had used and/or understood signed language to any degree of fluency. This study seeks to test the soundness of the instrument’s design for use with the deaf-parented population. A review of participant responses and the literature highlights similarities in the experiences of heritage speakers and deaf-parented individuals, gesturing toward the strong possibility that deaf-parented individuals should be considered heritage language learners where ASL is concerned. The pedagogy used with deaf-parented individuals therefore should adapt the theories and practices used with heritage speakers.

    Joining hands: developing a sign language machine translation system with and for the deaf community

    This paper discusses the development of an automatic machine translation (MT) system for translating spoken language text into signed languages (SLs). The motivation for our work is the improvement of accessibility to airport information announcements for D/deaf and hard of hearing people. This paper demonstrates the involvement of Deaf colleagues and members of the D/deaf community in Ireland in three areas of our research: the choice of a domain for automatic translation that has a practical use for the D/deaf community; the human translation of English text into Irish Sign Language (ISL) as well as advice on ISL grammar and linguistics; and the importance of native ISL signers as manual evaluators of our translated output.

    When is a Difference Really Different? Learners' Discrimination of Linguistic Contrasts in American Sign Language

    Learners’ ability to recognize linguistic contrasts in American Sign Language (ASL) was investigated using a paired-comparison discrimination task. Minimal pairs containing contrasts in five linguistic categories (i.e., the formational parameters of movement, handshape, orientation, and location in ASL phonology, and a category comprised of contrasts in complex morphology) were presented in sentence contexts to a sample of 127 hearing learners at beginning and intermediate levels of proficiency and 10 Deaf native signers. Participants’ responses were analyzed to determine the relative difficulty of the linguistic categories and the effect of proficiency level on performance. The results indicated that movement contrasts were the most difficult and location contrasts the easiest, with the other categories of stimuli of intermediate difficulty. These findings have implications for language learning in situations in which the first language is a spoken language and the second language (L2) is a signed language. In such situations, the construct of language transfer does not apply to the acquisition of L2 phonology because of fundamental differences between the phonological systems of signed and spoken languages, which are associated with differences between the modalities of speech and sign.

    Data-driven Synthesis of Animations of Spatially Inflected American Sign Language Verbs Using Human Data

    Techniques for producing realistic and understandable animations of American Sign Language (ASL) have accessibility benefits for signers with lower levels of written language literacy. Previous research in sign language animation did not address the specific linguistic issue of space use and verb inflection, due to a lack of sufficiently detailed and linguistically annotated ASL corpora, which are necessary for modern data-driven approaches. In this dissertation, a high-quality ASL motion capture corpus with ASL-specific linguistic structures is collected, annotated, and evaluated using carefully designed protocols and well-calibrated motion capture equipment. In addition, ASL animations are modeled, synthesized, and evaluated based on samples of ASL signs collected from native-signer animators or from signers recorded using motion capture equipment. Part I of this dissertation focuses on how an ASL corpus is collected, including unscripted ASL passages and ASL inflecting verbs, signs in which the location and orientation of the hands is influenced by the arrangement of locations in 3D space that represent entities under discussion. Native signers are recorded in a studio with motion capture equipment: cyber-gloves, body suit, head tracker, hand tracker, and eye tracker. Part II describes how ASL animation is synthesized using our corpus of ASL inflecting verbs. Specifically, mathematical models of hand movement are trained on animation data of signs produced by a native signer. This dissertation work demonstrates that mathematical models can be trained and built using movement data collected from humans. The evaluation studies with deaf native signer participants show that the verb animations synthesized from our models have similar understandability in subjective-rating and comprehension-question scores to animations produced by a human animator, or to animations driven by a human’s motion capture data.
The modeling techniques in this dissertation are applicable to other types of ASL signs and to other sign languages used internationally. These models’ parameterization of sign animations can increase the repertoire of generation systems and can automate the work of humans using sign language scripting systems.
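The idea of parameterizing an inflecting verb's animation by the 3D loci assigned to discourse entities can be sketched as follows. This is an illustrative simplification under assumed inputs: the `inflect_verb_path` function and its linear blend are hypothetical stand-ins for the trained mathematical models of hand movement described above.

```python
# Sketch: compute hand positions for a directional verb (e.g., GIVE)
# moving from the subject's spatial locus toward the object's locus.
# Real models learn the path shape from human movement data; a straight
# line is used here only to show the parameterization by loci.

def inflect_verb_path(subj_locus, obj_locus, n_frames=5):
    """Return n_frames hand positions blending subject locus into object locus."""
    path = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # interpolation parameter in [0, 1]
        path.append(tuple(s + t * (o - s)
                          for s, o in zip(subj_locus, obj_locus)))
    return path

# Hypothetical loci (x, y, z) in signing space for two entities.
path = inflect_verb_path((0.2, 1.1, 0.4), (-0.3, 1.1, 0.5))
```

Because the path is a function of the loci rather than a fixed recording, the same sign can be re-synthesized for any arrangement of entities in signing space, which is what lets such models increase the repertoire of a generation system.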

    The Contribution of Phonological Knowledge, Memory, and Language Background to Reading Comprehension in Deaf Populations

    While reading is challenging for many deaf individuals, some become proficient readers. Little is known about the component processes that support reading comprehension in these individuals. Speech-based phonological knowledge is one of the strongest predictors of reading comprehension in hearing individuals, yet its role in deaf readers is controversial. This could reflect the highly varied language backgrounds among deaf readers as well as the difficulty of disentangling the relative contribution of phonological versus orthographic knowledge of the spoken language (in our case, English) in this population. Here we assessed the impact of language experience on reading comprehension in deaf readers by recruiting oral deaf individuals, who use spoken English as their primary mode of communication, and deaf native signers of American Sign Language. First, to address the contribution of spoken English phonological knowledge in deaf readers, we present novel tasks that evaluate phonological versus orthographic knowledge. Second, the impact of this knowledge, as well as memory measures that rely differentially on phonological (serial recall) and semantic (free recall) processing, on reading comprehension was evaluated. The best predictor of reading comprehension differed as a function of language experience, with free recall being a better predictor in deaf native signers than in oral deaf. In contrast, the measures of English phonological knowledge, independent of orthographic knowledge, best predicted reading comprehension in oral deaf individuals. These results suggest successful reading strategies differ across deaf readers as a function of their language experience, and highlight a possible alternative route to literacy in deaf native signers.
