
    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might play in the understandability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken languages, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages and consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying emotional facial expressions (EFEs), will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set in two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show very little difference between the comprehension scores of the baseline avatars and those augmented with EFEs. However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology more generally. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
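
    The thesis’s key technical move is borrowing morph targets (blend shapes) from humanoid animation to layer emotional facial expressions over an avatar’s signing. A minimal sketch of that idea in Python follows; the mesh data, emotion names, and weights are illustrative stand-ins, not values from the eSIGN avatar.

```python
import numpy as np

# Illustrative morph targets: per-vertex offsets from a neutral face mesh.
# In a real avatar these come from the rig; random stand-ins are used here.
NUM_VERTICES = 4

neutral_face = np.zeros((NUM_VERTICES, 3))
emotion_morphs = {
    "happiness": np.random.rand(NUM_VERTICES, 3) * 0.01,
    "sadness":   np.random.rand(NUM_VERTICES, 3) * 0.01,
    "surprise":  np.random.rand(NUM_VERTICES, 3) * 0.01,
}

def apply_emotion_morphs(base, morphs, weights):
    """Blend weighted emotion morph targets onto a base facial mesh.

    weights maps emotion name -> blend weight in [0, 1]; unlisted
    emotions contribute nothing.
    """
    face = base.copy()
    for emotion, weight in weights.items():
        face += weight * morphs[emotion]
    return face

# An underlying expression of mild happiness with a trace of surprise.
augmented = apply_emotion_morphs(neutral_face, emotion_morphs,
                                 {"happiness": 0.6, "surprise": 0.2})
```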

    Utilization of Avatar-based Technology in The Area of Sign language... A Review

    Information and communication technology (ICT) has progressed rapidly in recent years and is becoming necessary for everybody, including deaf people. This paper gives an overview of avatar-based technology in the area of sign language, the natural language of deaf people worldwide, although it differs from one country to another. The paper covers the basic concepts related to signing avatars and the efforts to apply them to different sign languages worldwide, especially Arabic Sign Language (ArSL).

    Automated Technique for Real-Time Production of Lifelike Animations of American Sign Language

    Generating sentences from a library of signs implemented through a sparse set of key frames derived from the segmental structure of a phonetic model of ASL has the advantage of flexibility and efficiency, but lacks the lifelike detail of motion capture. These difficulties are compounded by the demands of real-time generation and display. This paper describes a technique for automatically adding realism without the expense of manually animating the requisite detail. The new technique layers transparently over, and modifies, the primary motions dictated by the segmental model, and does so with very little computational cost, enabling real-time production and display. The paper also discusses avatar optimizations that can lower the rendering overhead of real-time displays.
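
    The core idea, layering an inexpensive secondary-motion pass over key-frame interpolation without touching the underlying segmental key frames, can be sketched as follows. The interpolation scheme, joint channel, and oscillation parameters are illustrative assumptions, not details from the paper.

```python
import math

# Sparse key frames for one joint angle (time in seconds, angle in radians),
# as might be derived from the segmental structure of a phonetic model.
key_frames = [(0.0, 0.0), (0.4, 1.2), (0.8, 0.9), (1.2, 0.0)]

def primary_motion(t):
    """Linear interpolation between key frames: the segmental motion."""
    for (t0, a0), (t1, a1) in zip(key_frames, key_frames[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return a0 + u * (a1 - a0)
    return key_frames[-1][1]

def with_secondary_layer(t, amplitude=0.03, frequency=9.0):
    """Layer a small, cheap oscillation over the primary motion to
    suggest lifelike detail without altering the underlying key frames."""
    return primary_motion(t) + amplitude * math.sin(2 * math.pi * frequency * t)

# Sample the layered motion at 60 fps for real-time display.
samples = [with_secondary_layer(i / 60.0) for i in range(72)]
```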

    Interactive Editing in French Sign Language Dedicated to Virtual Signers: Requirements and Challenges

    Signing avatars are increasingly used as an interface for communication with the deaf community. In recent years, an emerging approach uses captured data to edit and generate sign language (SL) gestures. Thanks to motion editing operations (e.g., concatenation, mixing), this method offers the possibility of composing new utterances, thus facilitating the enrichment of the original corpus, enhancing the natural look of the animation, and promoting the avatar’s acceptability. However, designing such an editing system raises many questions. In particular, manipulating existing movements does not guarantee the semantic consistency of the reconstructed actions. A solution is to insert the human operator in a loop for constructing new utterances and to incorporate within the utterance’s structure constraints that are derived from linguistic patterns. This article discusses the main requirements for the whole pipeline design of interactive virtual signers, including: (1) the creation of corpora, (2) the needed resources for motion recording, (3) the annotation process as the heart of the SL editing process, (4) the building, indexing, and querying of a motion database, (5) the virtual avatar animation by editing and composing motion segments, and (6) the conception of a dedicated user interface according to users’ knowledge and abilities. Each step is illustrated by the authors’ recent work and results from the Sign3D project, an editing system for French Sign Language (LSF) content.
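
    Of the editing operations the article names, concatenation is the easiest to illustrate: joining two captured motion segments with a short cross-fade at the seam so the composed utterance shows no visible jump. The sketch below assumes segments stored as frames-by-channels arrays; the blending window and data are illustrative, not the Sign3D implementation.

```python
import numpy as np

def crossfade_concatenate(seg_a, seg_b, blend_frames=10):
    """Concatenate two captured motion segments (frames x channels),
    linearly cross-fading the last frames of seg_a into the first
    frames of seg_b to smooth the seam."""
    w = np.linspace(0.0, 1.0, blend_frames)[:, None]
    seam = (1.0 - w) * seg_a[-blend_frames:] + w * seg_b[:blend_frames]
    return np.concatenate([seg_a[:-blend_frames], seam, seg_b[blend_frames:]])

# Two stand-in segments: 60 and 80 frames of 3 pose channels each.
a = np.random.rand(60, 3)
b = np.random.rand(80, 3)
utterance = crossfade_concatenate(a, b)  # 130 frames after blending
```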

    Reconstructing Signing Avatars from Video Using Linguistic Priors

    Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world. Video dictionaries of isolated signs are a core SL learning tool. Replacing these with 3D avatars can aid learning and enable AR/VR applications, improving access to technology and online media. However, little work has attempted to estimate expressive 3D avatars from SL video; occlusion, noise, and motion blur make this task difficult. We address this by introducing novel linguistic priors that are universally applicable to SL and provide constraints on 3D hand pose that help resolve ambiguities within isolated signs. Our method, SGNify, captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos. We evaluate SGNify quantitatively by using a commercial motion-capture system to compute 3D avatars synchronized with monocular video. SGNify outperforms state-of-the-art 3D body-pose- and shape-estimation methods on SL videos. A perceptual study shows that SGNify’s 3D reconstructions are significantly more comprehensible and natural than those of previous methods and are on par with the source videos. Code and data are available at sgnify.is.tue.mpg.de.
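
    SGNify’s contribution is a set of linguistic priors that constrain 3D hand pose during fitting. The paper’s actual priors are not reproduced here; as a hedged illustration, the sketch below adds a symmetry-style penalty, suited to two-handed symmetric signs, to a generic data term inside a pose optimization. All names, array shapes, and the mirroring convention are assumptions.

```python
import numpy as np

def symmetry_prior_loss(left_hand_pose, right_hand_pose, weight=1.0):
    """Penalize deviation between mirrored hand poses: a simplified
    stand-in for a linguistic prior on two-handed symmetric signs.
    Poses are (joints, 3) axis-angle arrays; mirroring across the
    sagittal plane is approximated by negating the y and z components."""
    mirrored_right = right_hand_pose * np.array([1.0, -1.0, -1.0])
    return weight * np.sum((left_hand_pose - mirrored_right) ** 2)

def total_loss(data_term, left, right, prior_weight=0.1):
    """Combine an image-evidence data term with the linguistic prior,
    as one might inside a per-frame pose optimization loop."""
    return data_term + symmetry_prior_loss(left, right, prior_weight)

# Toy usage: 15 hand joints per hand, a scalar stand-in data term.
left = np.random.rand(15, 3)
right = np.random.rand(15, 3)
loss = total_loss(data_term=2.5, left=left, right=right)
```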

    The design of a generic signing avatar animation system

    Thesis (MScIng), University of Stellenbosch, 2006. We designed a generic avatar animator for use in sign language related projects. The animator is capable of animating any given avatar that complies with the H-Anim standard for humanoid animation. The system was designed with the South African Sign Language Machine Translation (SASL-MT) project in mind, but can easily be adapted to other sign language projects thanks to its generic design. An avatar that is capable of accurately performing sign language gestures is a special kind of avatar, referred to as a signing avatar. In this thesis we investigate the special characteristics of signing avatars and address the issue of finding a generic design for the animation of such an avatar.
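
    The generic design hinges on the H-Anim standard: because H-Anim fixes the skeleton’s joint names and hierarchy, one animator can drive any compliant avatar. A minimal sketch of that idea follows, with a toy three-joint arm and illustrative rotations, not the SASL-MT system’s code.

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str
    rotation: tuple = (0.0, 0.0, 0.0)  # Euler angles in radians
    children: list = field(default_factory=list)

def build_right_arm():
    """Toy skeleton fragment using standard H-Anim joint names."""
    wrist = Joint("r_wrist")
    elbow = Joint("r_elbow", children=[wrist])
    return Joint("r_shoulder", children=[elbow])

def apply_pose(root, pose):
    """Apply a {joint_name: rotation} pose to any skeleton that uses
    the standard H-Anim joint names, regardless of the avatar model."""
    if root.name in pose:
        root.rotation = pose[root.name]
    for child in root.children:
        apply_pose(child, pose)

arm = build_right_arm()
apply_pose(arm, {"r_shoulder": (0.0, 0.4, 1.1), "r_elbow": (0.9, 0.0, 0.0)})
```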

    Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language

    Translating between English and American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. In the past, these have posed a difficult challenge for signing avatars. Previous systems were hampered by an inability to portray simultaneously occurring nonmanual signals on the face. This paper presents a method designed to support co-occurring nonmanual signals in ASL. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. Participants identified all of the nonmanual signals even when they co-occurred. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes move an avatar’s brows in competing directions. This breakthrough brings the state of the art one step closer to the goal of an automatic English-to-ASL translator. In: Proceedings of the International Conference on Computer Graphics Theory and Applications and the International Conference on Information Visualization Theory and Applications, Barcelona, Spain, 21-24 February 2013, edited by Sabine Coquillart, Carlos Andújar, Robert S. Laramee, Andreas Kerren, and José Braz. SciTePress, 2013, pp. 407-416.
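
    The competing-brows problem is concrete: a yes/no question raises the brows while, for example, a wh-question or angry affect lowers them, and both channels can be active at once. The paper’s actual resolution strategy is not detailed in this abstract; the sketch below shows one naive way to combine the two channels with a fixed grammatical priority, purely as an assumption-labeled illustration.

```python
def combine_brow_channels(question_brow, affect_brow, question_weight=0.7):
    """Resolve competing brow demands from a question nonmanual and an
    affect channel. Brow height is in [-1, +1]: yes/no questions raise
    the brows (+1) while wh-questions and anger lower them (-1). A fixed
    priority weight keeps the grammatical signal legible when the two
    channels compete."""
    return question_weight * question_brow + (1.0 - question_weight) * affect_brow

# Wh-question (lowered brows) co-occurring with happy affect (raised brows):
brow_height = combine_brow_channels(question_brow=-1.0, affect_brow=0.5)
```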