
    Synthesizing mood-affected signed messages: Modifications to the parametric synthesis

    This is the author’s version of a work that was accepted for publication in the International Journal of Human-Computer Studies; the definitive version was subsequently published in International Journal of Human-Computer Studies, 70(4) (2012), DOI: 10.1016/j.ijhcs.2011.11.003. This paper describes the first approach to synthesizing mood-affected signed content. The research focuses on the modifications applied to a parametric sign language synthesizer (based on phonetic descriptions of the signs). We propose modifications that allow the synthesis of different perceived frames of mind within synthetic signed messages. Three of these proposals concern modifications to three of the signs' phonological parameters (hand shape, movement and the non-hand parameter). The other two concern the temporal aspect of the synthesis (sign speed and transition duration) and the representation of muscular tension through inverse kinematics procedures. The resulting variations were evaluated by Spanish deaf signers, who concluded that our system can generate the same signed message with three different frames of mind, each correctly identified by Spanish Sign Language signers.
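    The sketch below is purely illustrative and is not the authors' implementation: it shows one way a mood profile could scale the timing parameters of a parametric sign description, in the spirit of the sign-speed and transition-duration proposals above. The parameter names, mood labels and multipliers are all assumptions.

        # Minimal sketch: a mood profile as a pair of timing multipliers applied
        # to a parametric sign description. All names and values are illustrative.
        from dataclasses import dataclass, replace

        @dataclass
        class SignPhonetics:
            handshape: str        # hand-shape phoneme label
            movement: str         # movement phoneme label
            non_hand: str         # non-hand (facial/body) parameter
            duration_s: float     # nominal sign duration in seconds
            transition_s: float   # transition time to the next sign

        # Hypothetical mood profiles: (sign-speed factor, transition factor).
        MOOD_PROFILES = {
            "neutral": (1.0, 1.0),
            "angry":   (0.8, 0.6),   # faster signing, abrupt transitions
            "sad":     (1.3, 1.5),   # slower signing, longer transitions
        }

        def apply_mood(sign: SignPhonetics, mood: str) -> SignPhonetics:
            """Return a copy of the sign with mood-scaled timing parameters."""
            speed, trans = MOOD_PROFILES[mood]
            return replace(sign,
                           duration_s=sign.duration_s * speed,
                           transition_s=sign.transition_s * trans)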

    Generating realistic, animated human gestures in order to model, analyse and recognize Irish Sign Language

    The aim of this thesis is to develop a gesture recognition system that can recognize several signs of Irish Sign Language (ISL). This project is divided into three parts. The first part provides background information on ISL; an overview of the ISL structure is a prerequisite to identifying and understanding the difficulties encountered in the development of a recognition system. The second part involves the generation of a data repository of synthetic and real-time video. Initially, the synthetic data is created in a 3D animation package in order to simplify the creation of motion variations of the animated signer. The animation environment in our implementation allows for the generation of different versions of the same gesture with slight variations in the parameters of the motion. Secondly, a database of ISL real-time video was created. This database contains 1400 different signs, including motion variation in each gesture. The third part details, step by step, my novel classification system and the associated prototype recognition system. The classification system is constructed as a decision tree to identify each sign uniquely. The recognition system is based on only one component of the classification system and has been implemented as a Hidden Markov Model (HMM).
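    As a purely illustrative sketch (the thesis does not specify a toolkit), the following shows the common pattern behind HMM-based sign recognition: one model is trained per sign on feature sequences, and a new sequence is classified by maximum log-likelihood. The hmmlearn library, the data layout and the state count are assumptions.

        # Illustrative only: per-sign Gaussian HMMs with likelihood-based classification.
        import numpy as np
        from hmmlearn import hmm

        def train_sign_models(training_data, n_states=5):
            """training_data: {sign_label: list of (T_i, D) feature sequences}."""
            models = {}
            for label, sequences in training_data.items():
                X = np.vstack(sequences)                   # stack all frames
                lengths = [len(seq) for seq in sequences]  # per-sequence lengths
                model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
                model.fit(X, lengths)
                models[label] = model
            return models

        def classify(models, sequence):
            """Pick the sign whose HMM assigns the observed sequence the highest log-likelihood."""
            return max(models, key=lambda label: models[label].score(sequence))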

    Building a sign language corpus for use in machine translation

    In recent years, data-driven methods of machine translation (MT) have overtaken rule-based approaches as the predominant means of automatically translating between languages. A prerequisite for such an approach is a parallel corpus of the source and target languages. Technological developments in sign language (SL) capturing, analysis and processing tools now mean that SL corpora are becoming increasingly available. With transcription and language analysis tools being mainly designed and used for linguistic purposes, we describe the process of creating a multimedia parallel corpus specifically for the purposes of English to Irish Sign Language (ISL) MT. As part of our larger project on localisation, our research is focussed on developing assistive technology for patients with limited English in the domain of healthcare. Focussing on the first point of contact a patient has with a GP’s office, the medical secretary, we sought to develop a corpus from the dialogue between the two parties when scheduling an appointment. Throughout the development process we have created one parallel corpus in six different modalities from this initial dialogue. In this paper we discuss the multi-stage process of developing this parallel corpus, treating the modalities both as individual and as interdependent entities, for our own MT purposes and for their usefulness in the wider MT and SL research domains.
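    The record layout below is hypothetical (the paper does not publish its corpus schema); it simply illustrates how one utterance of such a multi-modal parallel corpus might link an English transcription, an ISL gloss sequence and the associated video and annotation files. The field names, the example sentence and the gloss sequence are invented for illustration.

        # Hypothetical schema for one utterance in a multi-modal parallel corpus.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class CorpusEntry:
            utterance_id: str
            english_text: str        # source-language transcription
            isl_gloss: List[str]     # target-language gloss sequence
            video_path: str          # reference signing video
            annotation_path: str     # e.g. an ELAN .eaf annotation file

        entry = CorpusEntry(
            utterance_id="appt-0001",
            english_text="Can I make an appointment for Tuesday morning?",
            isl_gloss=["APPOINTMENT", "TUESDAY", "MORNING", "CAN-I"],
            video_path="video/appt-0001.mp4",
            annotation_path="elan/appt-0001.eaf",
        )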

    Hybrid paradigm for Spanish Sign Language synthesis

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10209-011-0245-9. This work presents a hybrid approach to sign language synthesis. This approach allows the hand-tuning of the phonetic description of the signs, focusing on the time aspect of the sign. The approach therefore retains the capacity for performing morpho-phonological operations, as notation-based approaches do, while improving synthetic signing performance, as hand-tuned animation approaches do. The proposed approach simplifies the description of the input message using a new high-level notation and stores sign phonetic descriptions in a relational database. This relational database allows for more flexible sign phonetic descriptions; it also allows for a description of sign timing and of the synchronization between sign phonemes. The new notation, named HLSML, is a gloss-based notation focused on message description. HLSML introduces several tags that allow for modifying the signs in the message, defining dialect and mood variations (both of which are stored in the relational database) and message timing, including transition durations and pauses. A new avatar design is also proposed that simplifies the development of the synthesizer and avoids interfering with the independence of the sign language phonemes during animation. The obtained results showed an increase in the sign recognition rate compared to other approaches. This improvement was based on the active role that the sign language experts had in the description of signs, which was a result of the flexibility of the sign storage approach. The approach will simplify the description of synthesizable signed messages, thus facilitating the creation of multimedia signed content.
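    Purely as an illustration of the storage idea (the paper's actual HLSML notation and database schema are richer and are not reproduced here), the sketch below stores one simplified sign description per gloss/mood pair in a relational table and looks it up at synthesis time; the table and column names are assumptions.

        # Minimal sketch, assuming a simplified single-table schema for sign storage.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""
            CREATE TABLE sign_phonetics (
                gloss TEXT, mood TEXT, phonetic_description TEXT,
                duration_ms INTEGER, PRIMARY KEY (gloss, mood)
            )
        """)
        conn.execute(
            "INSERT INTO sign_phonetics VALUES (?, ?, ?, ?)",
            ("HOUSE", "neutral", "handshape=flat;movement=arc;nonhand=none", 600),
        )

        def lookup_sign(gloss: str, mood: str = "neutral"):
            """Fetch the stored phonetic description for a gloss/mood variant."""
            return conn.execute(
                "SELECT phonetic_description, duration_ms FROM sign_phonetics "
                "WHERE gloss = ? AND mood = ?", (gloss, mood),
            ).fetchone()

        print(lookup_sign("HOUSE"))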

    The synthesis of LSE classifiers: From representation to evaluation

    This work presents a first approach to the synthesis of Spanish Sign Language (LSE) classifier constructions (CCs). All current attempts at the automatic synthesis of LSE simply create the animations corresponding to sequences of signs. This work, however, includes the synthesis of the LSE classification phenomena, defining elements more complex than simple signs, such as classifier predicates, inflective CCs and affixal classifiers. The intelligibility of our synthetic messages was evaluated by LSE natives, who reported a recognition rate of 93% correct answers.
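    To make the notion concrete, the hypothetical data structure below represents a classifier predicate as a classifier handshape plus a movement path for the referent, converted to animation keyframes; the fields and the conversion are illustrative assumptions, not the paper's internal representation.

        # Illustrative representation of a classifier predicate and a simple
        # keyframe conversion; all names and fields are hypothetical.
        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class ClassifierPredicate:
            classifier_handshape: str                 # e.g. a vehicle or person classifier
            path: List[Tuple[float, float, float]]    # 3D waypoints of the referent's motion
            duration_s: float

        def to_keyframes(cc: ClassifierPredicate, fps: int = 25):
            """Spread the path waypoints evenly over the predicate's duration."""
            step = cc.duration_s / max(len(cc.path) - 1, 1)
            return [(round(i * step * fps), pos) for i, pos in enumerate(cc.path)]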

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might have with regard to understandability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken language, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the seven widely accepted universal emotions identified by Ekman (1999), delivered as underlying facial expressions, will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set with regard to two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show that there was very little difference between the comprehension scores of the baseline avatars and those augmented with emotional facial expressions (EFEs). However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback allowed us an insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
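    The snippet below is only a sketch of the general morph-target idea mentioned above; it is not the eSIGN avatar's API. The channel names, emotions and weights are hypothetical, and the blend is a simple additive combination scaled by an intensity factor.

        # Illustrative blend of emotion morph offsets into a base facial configuration.
        BASE_FACE = {"brow_raise": 0.0, "mouth_corner_up": 0.0, "eye_open": 1.0}

        # Hypothetical morph offsets for two of Ekman's universal emotions.
        EMOTION_MORPHS = {
            "happiness": {"mouth_corner_up": 0.7},
            "sadness":   {"brow_raise": 0.4, "eye_open": -0.3},
        }

        def blend(base, emotion, intensity=1.0):
            """Add the emotion's morph offsets, scaled by intensity, to the base face."""
            face = dict(base)
            for channel, offset in EMOTION_MORPHS[emotion].items():
                face[channel] = face.get(channel, 0.0) + intensity * offset
            return face

        print(blend(BASE_FACE, "sadness", intensity=0.5))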

    Multimedia Dictionary and Synthesis of Sign Language

    We developed a multimedia dictionary of Slovenian Sign Language (SSL) which consists of words, illustrations and video clips. We describe the structure of the dictionary and give examples of its user interface. Based on our sign language dictionary, we developed a method of synthesizing sign language by intelligently joining video clips, which makes possible the translation of written texts, or, in connection with a speech recognition system, of spoken words, into sign language.
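    As a minimal sketch of the lookup step only (not the authors' system, and without the intelligent joining that smooths transitions between clips), the following maps the words of a written sentence to hypothetical dictionary clips and returns the playback order.

        # Illustrative lookup of dictionary video clips for a written sentence.
        SIGN_CLIPS = {                      # hypothetical dictionary entries
            "hello": "clips/hello.mp4",
            "house": "clips/house.mp4",
        }

        def clips_for_text(text: str):
            """Return the ordered list of clip files for the known words in the text."""
            return [SIGN_CLIPS[w] for w in text.lower().split() if w in SIGN_CLIPS]

        print(clips_for_text("Hello house"))   # ['clips/hello.mp4', 'clips/house.mp4']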

    Toward a Motor Theory of Sign Language Perception

    Research on signed languages still strongly dissociates linguistic issues related to phonological and phonetic aspects from gesture studies for recognition and synthesis purposes. This paper focuses on the imbrication of motion and meaning for the analysis, synthesis and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, we consider that linguistic knowledge is mapped onto sensory-motor processes, and we propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work. Comment: 12 pages. Partially funded by the ANR SignCo project.