2,945 research outputs found

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might have with regard to understandability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken languages, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages and consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying facial expressions, will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set in two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show that there was very little difference between the comprehension scores of the baseline avatars and those augmented with emotional facial expressions (EFEs). However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
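
    The morph-based augmentation described above can be pictured with a small sketch. The emotion set follows Ekman (1999), but the morph channel names, weights, and blending function below are illustrative assumptions rather than the eSIGN avatar's actual parameters.

```python
# Illustrative sketch only: morph channels and weights are hypothetical,
# not the eSIGN avatar's real facial rig.

# Ekman's (1999) seven universal emotions mapped to example facial morph weights.
EMOTION_MORPHS = {
    "happiness": {"mouth_corner_up": 0.8, "cheek_raise": 0.6},
    "sadness":   {"brow_inner_up": 0.7, "mouth_corner_down": 0.6},
    "anger":     {"brow_lower": 0.9, "lip_press": 0.5},
    "fear":      {"brow_raise": 0.8, "eye_widen": 0.7},
    "surprise":  {"brow_raise": 1.0, "jaw_drop": 0.6},
    "disgust":   {"nose_wrinkle": 0.8, "upper_lip_raise": 0.5},
    "contempt":  {"mouth_corner_up_left": 0.6},
}

def blend_emotion(base_face: dict, emotion: str, intensity: float = 1.0) -> dict:
    """Overlay an underlying emotional facial expression on a neutral face pose."""
    face = dict(base_face)
    for channel, weight in EMOTION_MORPHS[emotion].items():
        # Clamp so the emotional layer never exceeds the morph's full range.
        face[channel] = min(1.0, face.get(channel, 0.0) + weight * intensity)
    return face

# Example: a mild underlying happiness applied while the avatar signs an utterance.
augmented_pose = blend_emotion({}, "happiness", intensity=0.4)
print(augmented_pose)
```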

    A Systematic Mapping of Translation-Enabling Technologies for Sign Languages

    Sign languages (SL) are the first language for most deaf people. Consequently, bidirectional communication between deaf and non-deaf people has always been a challenging issue. Sign language usage has increased due to inclusion policies and general public agreement, and this must become evident in information technologies, in the many facets that comprise sign language understanding and its computational treatment. In this study, we conduct a thorough systematic mapping of translation-enabling technologies for sign languages. This mapping has considered the most recommended guidelines for systematic reviews, i.e., those pertaining to software engineering, since there is a need to account for the interdisciplinary areas of accessibility, human-computer interaction, natural language processing, and education, all of them part of the ACM (Association for Computing Machinery) computing classification system and directly related to software engineering. An ongoing development of a software tool called SYMPLE (SYstematic Mapping and Parallel Loading Engine) facilitated the querying and construction of a base set of candidate studies. A great diversity of topics has been studied over the last 25 years or so, but this systematic mapping allows for comfortable visualization of predominant areas, venues, top authors, and different measures of concentration and dispersion. The systematic review clearly shows a large number of classifications and subclassifications interspersed over time. This is an area of study in which there is much interest, with a largely steady level of scientific publications over the last decade, concentrated mainly in the European continent. The publications by country, nevertheless, usually favor their local sign language.

    The authors thank the School of Computing and the Computer Research Center of the Technological Institute of Costa Rica for the financial support, as well as CONICIT (Consejo Nacional para Investigaciones Científicas y Tecnológicas), Costa Rica, under grant 290-2006. This work was partly supported by the Spanish Ministry of Science, Innovation, and Universities through the Project ECLIPSE-UA under Grant RTI2018-094283-B-C32 and the Project INTEGER under Grant RTI2018-094649-B-I00, and partly by the Conselleria de Educación, Investigación, Cultura y Deporte of the Community of Valencia, Spain, within the Project PROMETEO/2018/089.

    Spanish Sign Language synthesis system

    This is the author’s version of a work that was accepted for publication in the Journal of Visual Languages and Computing. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Journal of Visual Languages and Computing, 23(3), 2012, DOI: 10.1016/j.jvlc.2012.01.003.

    This work presents a new approach to the synthesis of Spanish Sign Language (LSE). Its main contributions are the use of a centralized relational database for storing sign descriptions, the proposal of a new input notation, and a new avatar design whose skeleton structure improves the synthesis process. The relational database facilitates a highly detailed phonologic description of the signs, including parameter synchronization and timing. The centralized database approach has been introduced so that the representation of each sign can be validated by the LSE National Institution, FCNSE. The input notation, designated HLSML, offers multiple levels of abstraction compared with current input notations and is used to simplify the description and manual definition of LSE messages. Synthetic messages obtained using our approach have been evaluated by deaf users; in this evaluation, a maximum recognition rate of 98.5% was obtained for isolated signs and a recognition rate of 95% was achieved for signed sentences.
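
    As a rough illustration of the kind of centralized relational storage the abstract describes, the following sketch models signs and their timed phonologic parameters in SQLite. The table and column names are assumptions made for this example, not the paper's actual schema or the HLSML notation.

```python
import sqlite3

# Illustrative sketch only: a hypothetical relational layout for sign descriptions
# with per-parameter timing, in the spirit of the centralized database described.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sign (
    sign_id   INTEGER PRIMARY KEY,
    gloss     TEXT NOT NULL,          -- e.g. the LSE gloss for the sign
    validated INTEGER DEFAULT 0       -- set once approved by the validating body
);

CREATE TABLE sign_parameter (
    sign_id   INTEGER REFERENCES sign(sign_id),
    parameter TEXT,                   -- handshape, location, orientation, movement, ...
    value     TEXT,
    start_ms  INTEGER,                -- timing used for parameter synchronization
    end_ms    INTEGER
);
""")

conn.execute("INSERT INTO sign (sign_id, gloss) VALUES (1, 'HOLA')")
conn.execute("INSERT INTO sign_parameter VALUES (1, 'handshape', 'flat', 0, 400)")
print(conn.execute("SELECT gloss FROM sign").fetchall())
```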

    Emotional engineering of artificial representations of sign languages

    The fascination and challenge of making an appropriate digital representation of sign language for a highly specialised and culturally rich community such as the Deaf have brought about the development and production of several digital representations of sign language (DRSL). These range from pictorial depictions of sign language and filmed video recordings to animated avatars (virtual humans). However, issues relating to translating and representing sign language in the digital domain, and the effectiveness of various approaches, have divided the opinion of the target audience. As a result, there is still no universally accepted digital representation of sign language. For systems to reach their full potential, researchers have postulated that further investigation is needed into the interaction and representational issues associated with the mapping of sign language into the digital domain. This dissertation contributes a novel approach that investigates the comparative effectiveness of digital representations of sign language within different information delivery contexts. The empirical studies presented have supported the characterisation of the prescribed properties of DRSLs that make them an effective communication system, which, when defined by the Deaf community, were often referred to as "emotion". This has led to, and supported, the development of the proposed design methodology for the "Emotional Engineering of Artificial Sign Languages", which forms the main contribution of this thesis.

    Accessible Learning Management Systems: Students’ Experiences and Insights

    A Learning Management System (LMS) is a type of e-learning system and one of the main infrastructural requirements for improving access to higher education for persons with disabilities. The primary aim of the research study was to explore the perceptions of students with disabilities regarding the use and accessibility of learning management systems, and the benefits and/or barriers in e-learning. Students mainly have negative experiences when attempting to access university websites, libraries, and LMSs because of inadequate adaptation to the specific needs of students with disabilities. In countries that do not have a developed LMS, the prevalent means of communication with professors is e-mail; where an LMS exists, its content and services are not fully accessible to students with special needs. This research identified the need to create an accessible LMS, or to adapt an already existing LMS, with accessibility solutions such as a text-to-speech engine for blind students, a mode with sign language support for deaf students, and a mode that supports dyslexic students.
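
    A minimal sketch of how the accessibility modes identified above could be modelled as per-student preferences in an LMS; the field and option names are hypothetical and not drawn from any particular system studied.

```python
from dataclasses import dataclass

@dataclass
class AccessibilityProfile:
    """Hypothetical per-student accessibility preferences."""
    text_to_speech: bool = False        # for blind students
    sign_language_videos: bool = False  # for deaf students
    dyslexia_friendly_layout: bool = False

def render_options(profile: AccessibilityProfile) -> list[str]:
    """Return the content renderings an LMS page could offer for this student."""
    options = ["standard_html"]
    if profile.text_to_speech:
        options.append("audio_narration")
    if profile.sign_language_videos:
        options.append("sign_language_overlay")
    if profile.dyslexia_friendly_layout:
        options.append("dyslexia_friendly_css")
    return options

print(render_options(AccessibilityProfile(sign_language_videos=True)))
```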

    Accessibility requirements for human-robot interaction for socially assistive robots

    International Mention in the doctoral degree. Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: María Ángeles Malfaz Vázquez. Secretary: Diego Martín de Andrés. Panel member: Mike Wal

    Emerging issues and current trends in assistive technology use 2007-2010: practising, assisting and enabling learning for all

    Following an earlier review in 2007, a further review of the academic literature relating to the uses of assistive technology (AT) by children and young people was completed, covering the period 2007-2011. As in the earlier review, a tripartite taxonomy (technology uses to train or practise, technology uses to assist learning, and technology uses to enable learning) was used to structure the findings. The key markers for research in this field during this period were user involvement, AT on mainstream mobile devices, the visibility of AT, technology for interaction and collaboration, new and developing interfaces, and inclusive design principles. The paper concludes by locating these developments within the broader framework of the Digital Divide.

    The synthesis of LSE classifiers: From representation to evaluation

    This work presents a first approach to the synthesis of Spanish Sign Language (LSE) Classifier Constructions (CCs). All current attempts at the automatic synthesis of LSE simply create the animations corresponding to sequences of signs. This work, however, includes the synthesis of the LSE classification phenomena, defining more complex elements than simple signs, such as Classifier Predicates, Inflective CCs, and Affixal Classifiers. The intelligibility of our synthetic messages was evaluated by native LSE signers, who achieved a recognition rate of 93% correct answers.
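
    A toy sketch of the distinction drawn above between concatenating fixed signs and synthesising a classifier construction, in which a classifier handshape is combined with a movement path. All names, the classifier label, and the frame expansion are illustrative assumptions, not the paper's actual representation.

```python
from dataclasses import dataclass

@dataclass
class ClassifierPredicate:
    """A classifier handshape combined with a movement path, rather than a fixed sign."""
    classifier_handshape: str                # e.g. a vehicle classifier
    path: list[tuple[float, float, float]]   # trajectory of the hand in signing space

def synthesise(predicate: ClassifierPredicate) -> list[dict]:
    """Expand a classifier predicate into per-frame animation targets (toy version)."""
    frames = []
    for x, y, z in predicate.path:
        frames.append({
            "handshape": predicate.classifier_handshape,
            "hand_position": (x, y, z),
        })
    return frames

# Example: 'a vehicle moves from left to right' expressed as a classifier predicate.
car_moves = ClassifierPredicate(
    classifier_handshape="CL-vehicle",
    path=[(-0.3, 0.0, 0.0), (0.0, 0.0, 0.0), (0.3, 0.0, 0.0)],
)
print(len(synthesise(car_moves)), "animation frames")
```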