
    An Avatar Based Natural Arabic Sign Language Generation System for Deaf People

    Research demonstrates that individuals who are deaf are significantly disadvantaged in education. A contributing factor to this gap is the difficulty deaf children have in acquiring learning concepts early in life. This paper presents an idea for highly interactive software that uses avatars (three-dimensional character models) to process and translate free Arabic input into ARSL (ARabic Sign Language). A prototype for teaching maths and dictation in elementary schools is discussed. This research could be valuable as a teaching tool, increasing: (1) the opportunity for deaf children to learn maths and dictation via interactive media; and (2) the effectiveness of ARSL teachers. Keywords: Avatar, ARSL, Finger Spelling, Hand Shape

    Emotional engineering of artificial representations of sign languages

    The fascination and challenge of making an appropriate digital representation of sign language for a highly specialised and culturally rich community such as the Deaf has driven the development and production of several digital representations of sign language (DRSLs). These range from pictorial depictions of sign language and filmed video recordings to animated avatars (virtual humans). However, issues relating to translating and representing sign language in the digital domain, and the effectiveness of the various approaches, have divided the opinion of the target audience. As a result, there is still no universally accepted digital representation of sign language. For systems to reach their full potential, researchers have postulated that further investigation is needed into the interaction and representational issues associated with mapping sign language into the digital domain. This dissertation contributes a novel approach that investigates the comparative effectiveness of digital representations of sign language within different information delivery contexts. The empirical studies presented support a characterisation of the properties that make a DRSL an effective communication system; properties which, when defined by the Deaf community, were often referred to as "emotion". This has led to and supports the proposed design methodology for the "Emotional Engineering of Artificial Sign Languages", which forms the main contribution of this thesis.

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might play in the understandability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community's requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken language, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying facial expressions, will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth, et al., 2008), the research compares an augmented set of avatar utterances against a baseline set in two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show very little difference between the comprehension scores of the baseline avatars and those augmented with emotional facial expressions (EFEs).
    However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation addressed sign language avatar technology more generally. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.

    Spanish Sign Language synthesis system

    This is the author’s version of a work that was accepted for publication in the Journal of Visual Languages and Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Journal of Visual Languages and Computing, 23, 3 (2012), DOI: 10.1016/j.jvlc.2012.01.003.
    This work presents a new approach to the synthesis of Spanish Sign Language (LSE). Its main contributions are the use of a centralized relational database for storing sign descriptions, the proposal of a new input notation, and a new avatar design whose skeleton structure improves the synthesis process. The relational database facilitates a highly detailed phonologic description of the signs, including parameter synchronization and timing. The centralized database approach was introduced so that the representation of each sign can be validated by the LSE National Institution, FCNSE. The input notation, designated HLSML, offers multiple levels of abstraction compared with current input notations; this redesigned notation simplifies the description and manual definition of LSE messages. Synthetic messages obtained using our approach have been evaluated by deaf users; in this evaluation a maximum recognition rate of 98.5% was obtained for isolated signs and a recognition rate of 95% was achieved for signed sentences.
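The centralized-database idea can be sketched as a toy relational schema. The table and column names below are invented for illustration only; they do not reproduce the paper's actual database design or its HLSML notation.

```python
import sqlite3

# Hypothetical, simplified schema for a relational store of phonologic
# sign descriptions: one row per sign, plus one row per timed parameter
# (handshape, movement, non-manual) so that synchronization and timing
# can be described explicitly.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sign (
    sign_id INTEGER PRIMARY KEY,
    gloss   TEXT NOT NULL              -- e.g. 'HOLA'
);
CREATE TABLE parameter (
    sign_id  INTEGER REFERENCES sign(sign_id),
    kind     TEXT,                     -- 'handshape' | 'movement' | 'nonmanual'
    value    TEXT,
    start_ms INTEGER,                  -- timing fields for synchronization
    end_ms   INTEGER
);
""")
conn.execute("INSERT INTO sign VALUES (1, 'HOLA')")
conn.executemany(
    "INSERT INTO parameter VALUES (?, ?, ?, ?, ?)",
    [(1, "handshape", "open-B", 0, 400),
     (1, "movement", "arc-right", 100, 400)],
)
# A synthesizer front end could query all timed parameters for one sign:
rows = conn.execute(
    "SELECT kind, value, start_ms, end_ms FROM parameter WHERE sign_id = 1"
).fetchall()
print(rows)
```

A centralized store like this also supports the validation workflow the abstract mentions: a single authoritative copy of each sign description can be reviewed before it is used for synthesis.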

    Synthesizing mood-affected signed messages: Modifications to the parametric synthesis

    This is the author’s version of a work that was accepted for publication in the International Journal of Human-Computer Studies. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in International Journal of Human-Computer Studies, 70, 4 (2012), DOI: 10.1016/j.ijhcs.2011.11.003.
    This paper describes a first approach to synthesizing mood-affected signed content. The research focuses on modifications to a parametric sign language synthesizer (based on phonetic descriptions of the signs) that allow different perceived frames of mind to be conveyed in synthetic signed messages. Three of the proposals modify the signs' phonologic parameters (the hand shape, the movement and the non-hand parameter). The other two address the temporal aspects of synthesis (sign speed and transition duration) and the representation of muscular tension through inverse kinematics procedures. The resulting variations were evaluated by Spanish deaf signers, who concluded that the system can generate the same signed message with three different frames of mind, correctly identified by Spanish Sign Language signers.
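As a rough illustration of the temporal proposals (sign speed and transition duration), a mood could be mapped to scale factors applied to two base durations. The moods and scale values below are invented for the sketch, not taken from the paper.

```python
# Illustrative mood table (hypothetical values): each mood scales the
# sign's playback speed and the inter-sign transition duration.
MOOD_SCALES = {
    "neutral": {"speed": 1.0, "transition": 1.0},
    "angry":   {"speed": 1.3, "transition": 0.7},  # faster signing, abrupt transitions
    "sad":     {"speed": 0.8, "transition": 1.4},  # slower signing, drawn-out transitions
}

def apply_mood(base_sign_ms, base_transition_ms, mood):
    """Return (sign duration, transition duration) in ms, adjusted for mood.

    Higher speed shortens the sign; the transition factor scales the
    pause/blend between consecutive signs directly.
    """
    s = MOOD_SCALES[mood]
    return (base_sign_ms / s["speed"], base_transition_ms * s["transition"])

print(apply_mood(500, 200, "angry"))  # shorter sign, shorter transition than neutral
```

The same pattern extends naturally to the non-temporal parameters: a mood entry could also carry offsets for hand-shape tension or movement amplitude.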

    Innovative Applications of Natural Language Processing and Digital Media in Theatre and Performing Arts

    The objective of our research is to investigate new digital techniques and tools that offer the audience innovative, attractive, enhanced and accessible experiences. The project focuses on the performing arts, particularly theatre, aiming to design, implement, test and evaluate technologies and tools that expand the semiotic code of a performance by offering new opportunities and aesthetic means in stage art and by introducing parallel accessible narrative flows. In our novel paradigm, modern technologies emphasize the stage elements, providing a multilevel, intense and immersive theatrical experience. Moreover, lighting, video projections, audio clips and digital characters are incorporated, bringing unique aesthetic features. We also attempt to remove sensory and language barriers faced by some audiences; accessibility features consist of subtitles, sign language and audio description. The project emphasises natural language processing technologies, embedded communication and multimodal interaction to automatically monitor the time flow of a performance. Based on this, pre-designed and directed stage elements are mapped to appropriate parts of the script and activated automatically using the virtual "world" and appropriate sensors, while accessibility flows are dynamically synchronized with the stage action. These tools are currently being adapted within two experimental theatrical plays for validation purposes. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

    Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language

    Translating between English and American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. In the past, these have posed a difficult challenge for signing avatars: previous systems were hampered by an inability to portray simultaneously occurring nonmanual signals on the face. This paper presents a method designed to support co-occurring nonmanual signals in ASL. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. Participants identified all of the nonmanual signals even when they co-occurred. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes move an avatar's brows in competing directions. This breakthrough brings the state of the art one step closer to the goal of an automatic English-to-ASL translator.
    Conference proceedings from the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications, Barcelona, Spain, 21-24 February 2013. Edited by Sabine Coquillart, Carlos Andújar, Robert S. Laramee, Andreas Kerren, José Braz. SciTePress, 2013, pp. 407-416.
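One plausible way to realise co-occurring nonmanual signals, sketched here under the assumption of a morph-target (blend-shape) face rig, is to combine per-channel morph weights rather than letting one channel overwrite another. The channel names and the simple clamped additive combination below are illustrative assumptions, not the paper's method.

```python
# Sketch: combine facial nonmanual channels as blend-shape weights.
def combine_morphs(*channels):
    """Sum per-morph weights from several channels, clamping to [-1, 1].

    A yes/no-question channel may raise the brows (positive weight) while
    an affect channel such as anger lowers them (negative weight);
    summing lets both signals remain partially visible instead of one
    channel simply replacing the other.
    """
    combined = {}
    for channel in channels:
        for morph, weight in channel.items():
            combined[morph] = max(-1.0, min(1.0, combined.get(morph, 0.0) + weight))
    return combined

# Hypothetical channel weights: a question raises the brows, anger
# lowers and furrows them; the combined pose keeps traces of both.
question = {"brow_raise": 0.8}
anger = {"brow_raise": -0.5, "brow_furrow": 0.6}
print(combine_morphs(question, anger))
```

Clamped summation is only one design choice; weighted blending or priority rules are equally plausible ways to resolve competing brow movements.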