
    COMPUTER ASSISTED COMMUNICATION FOR THE HEARING IMPAIRED FOR AN EMERGENCY ROOM SCENARIO

    While there has been research on computerized communication facilities for people with hearing impairment, issues remain. Current systems rely on an avatar-based approach that cannot adequately render facial expressions, which are an integral part of communication in American Sign Language (ASL). In addition, little research has addressed integrating a system that facilitates communication with the hearing impaired into a clinical environment, namely an emergency room admission scenario. This research aims to determine whether an alternative approach, using videos of human signers in ASL, can overcome the understandability barrier and still be usable in the communication process.

    Data-driven machine translation for sign languages

    This thesis explores the application of data-driven machine translation (MT) to sign languages (SLs). The provision of an SL MT system can facilitate communication between Deaf and hearing people by translating information into the native and preferred language of the individual. We begin with an introduction to SLs, focussing on Irish Sign Language - the native language of the Deaf in Ireland. We describe their linguistics and mechanics, including similarities to and differences from spoken languages. Given the lack of a formalised written form for these languages, we outline annotation formats and discuss the issue of data collection. We summarise previous approaches to SL MT, highlighting the pros and cons of each approach. We then discuss initial experiments in the novel area of example-based MT for SLs and give an overview of the problems that arise when automatically translating these manual-visual languages. Following this, we detail our data-driven approach, examining the MT system used and the modifications made for the treatment of SLs and their annotation. Through sets of automatically evaluated experiments in both language directions, we consider the merits of data-driven MT for SLs and outline the mainstream evaluation metrics used. To complete the translation into SLs, we discuss the addition and manual evaluation of a signing avatar for real SL output.
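    The abstract does not spell out a concrete matching strategy, so the following Python sketch illustrates only the general example-based MT idea: retrieve the closest (English sentence, sign-gloss sequence) pair from a small translation memory by word overlap and return its gloss. The memory entries and the `translate` helper are invented for illustration and are not taken from this thesis.

    ```python
    # Minimal example-based MT sketch: map English input to an ISL-style gloss
    # string by retrieving the most similar sentence from a translation memory.
    # The example pairs below are invented placeholders, not thesis data.

    MEMORY = [
        ("where does it hurt", "HURT WHERE"),
        ("do you have an appointment", "APPOINTMENT HAVE YOU"),
        ("please take a seat", "SIT PLEASE"),
        ("the doctor will see you now", "NOW DOCTOR SEE-YOU"),
    ]

    def similarity(a: str, b: str) -> float:
        """Word-overlap (Jaccard) similarity between two English sentences."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def translate(sentence: str) -> str:
        """Return the gloss of the closest stored example (naive EBMT retrieval)."""
        _, best_gloss = max(MEMORY, key=lambda ex: similarity(sentence, ex[0]))
        return best_gloss

    print(translate("does it hurt anywhere"))  # -> HURT WHERE
    ```

    A real data-driven system would segment and recombine matched fragments rather than copy a whole example, but the retrieval step above is the core of the example-based approach the abstract refers to.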

    Emotional Facial Expressions in Synthesised Sign Language Avatars: A Manual Evaluation

    This research explores and evaluates the contribution that facial expressions might make to improved comprehension and acceptability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s responsiveness to sign language avatars. The hypothesis of this research is: augmenting an existing avatar with the 7 widely accepted universal emotions identified by Ekman [1] to achieve underlying facial expressions will make that avatar more human-like and improve usability and understandability for the ISL user. Using human evaluation methods [2] we compare an augmented set of avatar utterances against a baseline set, focusing on 2 key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment and evaluation methodology. The evaluation results reveal that in a comprehension test there was little difference between the baseline avatars and those augmented with emotional facial expression; we also found that the avatars lack various linguistic attributes.

    Emotional Facial Expressions in Synthesised Sign Language Avatars: a Manual Evaluation.

    This research explores and evaluates the contribution that facial expressions might make to improved comprehension and acceptability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), the Deaf (the uppercase ‘‘D’’ in the word ‘‘Deaf’’ indicates Deaf as a culture as opposed to ‘‘deaf’’ as a medical condition) community’s responsiveness to sign language avatars is examined. The hypothesis of this research is as follows: augmenting an existing avatar with the seven widely accepted universal emotions identified by Ekman (Basic emotions: handbook of cognition and emotion. Wiley, London, 2005) to achieve underlying facial expressions will make that avatar more human-like and improve usability and understandability for the ISL user. Using human evaluation methods (Huenerfauth et al. in Trans Access Comput (ACM) 1:1, 2008), an augmented set of avatar utterances is compared against a baseline set, focusing on two key areas: comprehension and naturalness of facial configuration. The approach to the evaluation, including the choice of ISL participants, interview environment, and evaluation methodology, is then outlined. The evaluation results reveal that in a comprehension test there was little difference between the baseline avatars and those augmented with emotional facial expression. It was also found that the avatars lack various linguistic attributes.
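    Neither abstract states how the baseline and augmented comprehension scores were compared statistically; as one plausible, hedged illustration, the sketch below contrasts two score lists with a two-sided permutation test using only the Python standard library. The scores are placeholders, not data from this evaluation.

    ```python
    # Hedged sketch: compare comprehension scores for baseline vs. emotion-augmented
    # avatar utterances with a permutation test. Scores below are placeholders.
    import random
    from statistics import mean

    baseline  = [3, 4, 2, 4, 3, 3, 4, 2]   # items answered correctly per participant
    augmented = [3, 4, 3, 4, 2, 4, 4, 3]

    def permutation_test(a, b, n_iter=10_000, seed=0):
        """Two-sided permutation test on the difference of means."""
        rng = random.Random(seed)
        observed = abs(mean(a) - mean(b))
        pooled = list(a) + list(b)
        hits = 0
        for _ in range(n_iter):
            rng.shuffle(pooled)
            diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
            if diff >= observed:
                hits += 1
        return hits / n_iter

    print(f"baseline mean:  {mean(baseline):.2f}")
    print(f"augmented mean: {mean(augmented):.2f}")
    print(f"permutation p-value: {permutation_test(baseline, augmented):.3f}")
    ```

    With small samples such as these, a non-significant difference is the expected outcome, which is consistent with the reported finding that comprehension differed little between conditions.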

    Generating Co-occurring Facial Nonmanual Signals in Synthesized American Sign Language

    Translating between English and American Sign Language (ASL) requires an avatar to display synthesized ASL. Essential to the language are nonmanual signals that appear on the face. In the past, these have posed a difficult challenge for signing avatars. Previous systems were hampered by an inability to portray simultaneously-occurring nonmanual signals on the face. This paper presents a method designed for supporting co-occurring nonmanual signals in ASL. Animations produced by the new system were tested with 40 members of the Deaf community in the United States. Participants identified all of the nonmanual signals even when they co-occurred. Co-occurring question nonmanuals and affect information were distinguishable, which is particularly striking because the two processes move an avatar’s brows in a competing manner. This breakthrough brings the state of the art one step closer to the goal of an automatic English-to-ASL translator. Conference proceedings from the International Conference on Computer Graphics Theory and Applications and International Conference on Information Visualization Theory and Applications, Barcelona, Spain, 21-24 February, 2013. Edited by Sabine Coquillart, Carlos Andújar, Robert S. Laramee, Andreas Kerren, José Braz. Barcelona, Spain. SciTePress 2013. 407-416.
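    The paper's animation pipeline is not described in this listing, so the snippet below is only a rough sketch of the underlying problem it addresses: a grammatical question channel and an affect channel propose competing brow morph-target weights, and a priority-weighted average produces a single pose in which both signals remain visible. All channel names, priorities and weights are illustrative assumptions, not the system's actual values.

    ```python
    # Illustrative sketch (not the paper's method): blending competing facial
    # channels for a signing avatar. A yes/no question raises the brows while an
    # angry affect lowers them; each channel proposes morph-target weights and a
    # priority-weighted average yields the final pose. All numbers are invented.
    from dataclasses import dataclass

    @dataclass
    class Channel:
        name: str
        priority: float        # relative influence of this channel
        targets: dict          # morph-target name -> weight in [-1, 1]

    def blend(channels):
        """Priority-weighted average over every morph target any channel proposes."""
        acc = {}
        for ch in channels:
            for target, w in ch.targets.items():
                s, p = acc.get(target, (0.0, 0.0))
                acc[target] = (s + ch.priority * w, p + ch.priority)
        return {t: s / p for t, (s, p) in acc.items()}

    question = Channel("yes-no question", priority=1.0,
                       targets={"brow_raise": 0.8, "head_tilt_forward": 0.4})
    affect   = Channel("anger", priority=0.6,
                       targets={"brow_raise": -0.7, "lip_press": 0.5})

    print(blend([question, affect]))
    # brow_raise comes out partially raised: the question marking dominates, but
    # the affect channel visibly dampens it, so both signals stay readable.
    ```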

    Toward a Motor Theory of Sign Language Perception

    Research on signed languages still strongly dissociates linguistic issues related to phonological and phonetic aspects from gesture studies for recognition and synthesis purposes. This paper focuses on the imbrication of motion and meaning in the analysis, synthesis and evaluation of sign language gestures. We discuss the relevance and interest of a motor theory of perception in sign language communication. According to this theory, we consider that linguistic knowledge is mapped onto sensory-motor processes, and propose a methodology based on the principle of a synthesis-by-analysis approach, guided by an evaluation process that aims to validate some hypotheses and concepts of this theory. Examples from existing studies illustrate the different concepts and provide avenues for future work. Comment: 12 pages. Partially funded by the ANR SignCo project.

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might play in the understandability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken languages, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages and consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is: augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999) to deliver underlying facial expressions will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set with regard to 2 key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show very little difference between the comprehension scores of the baseline avatars and those augmented with emotional facial expressions (EFEs). However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
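    The thesis abstract names morph targets and Ekman's universal emotions but gives no concrete data format, so the sketch below shows one plausible way to lay an "underlying" emotional expression beneath the linguistic nonmanuals of an utterance frame, clamping the combined morph weights to [0, 1]. The emotion presets and morph names are hypothetical and are not taken from the eSIGN avatar; the particular seven-emotion set shown (the six basic emotions plus contempt) is the commonly cited one and is likewise an assumption here.

    ```python
    # Hedged sketch of underlying emotional facial expressions: a preset of
    # morph-target weights per Ekman emotion is layered beneath the linguistic
    # nonmanuals of a frame. Presets and morph names are invented, not eSIGN's rig.

    EKMAN_PRESETS = {
        "happiness": {"mouth_smile": 0.7, "cheek_raise": 0.5},
        "sadness":   {"brow_inner_up": 0.6, "mouth_corner_down": 0.5},
        "anger":     {"brow_lower": 0.8, "lip_press": 0.6},
        "fear":      {"brow_raise": 0.7, "eye_widen": 0.8},
        "surprise":  {"brow_raise": 0.9, "jaw_drop": 0.6},
        "disgust":   {"nose_wrinkle": 0.8, "upper_lip_raise": 0.5},
        "contempt":  {"mouth_corner_up_left": 0.6},
    }

    def augment(linguistic_frame, emotion, intensity=0.5):
        """Overlay an Ekman preset, scaled by intensity, under a nonmanual frame."""
        frame = dict(linguistic_frame)
        for target, weight in EKMAN_PRESETS[emotion].items():
            frame[target] = min(1.0, frame.get(target, 0.0) + intensity * weight)
        return frame

    # A wh-question brow furrow laid over an underlying sad expression:
    print(augment({"brow_lower": 0.6, "head_tilt_forward": 0.3}, "sadness"))
    ```

    In an evaluation like the one described above, the baseline condition would simply omit the `augment` step, leaving only the linguistic frame.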