
    Emotional engineering of artificial representations of sign languages

    The fascination and challenge of making an appropriate digital representation of sign language for a highly specialised and culturally rich community such as the Deaf has brought about the development and production of several digital representations of sign language (DRSLs). These range from pictorial depictions and filmed video recordings to animated avatars (virtual humans). However, issues relating to translating and representing sign language in the digital domain, and the effectiveness of the various approaches, have divided the opinion of the target audience. As a result, there is still no universally accepted digital representation of sign language. For systems to reach their full potential, researchers have postulated that further investigation is needed into the interaction and representational issues associated with the mapping of sign language into the digital domain. This dissertation contributes a novel approach that investigates the comparative effectiveness of digital representations of sign language within different information delivery contexts. The empirical studies presented support the characterisation of the properties that make a DRSL an effective communication system, which, when defined by the Deaf community, was often referred to as "emotion". This has led to and supported the development of the proposed design methodology for the "Emotional Engineering of Artificial Sign Languages", which forms the main contribution of this thesis.

    Eyetracking Metrics Related to Subjective Assessments of ASL Animations

    Analysis of eyetracking data can serve as an alternative method of evaluation when assessing the quality of computer-synthesized animations of American Sign Language (ASL), a technology that can make information accessible to people who are deaf or hard-of-hearing and who may have lower levels of written-language literacy. In this work, we build descriptive models of the subjective scores that native signers assign to ASL animations, based on eyetracking metrics, and evaluate the efficacy of these models.
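
    The abstract's idea of a descriptive model of subjective scores from eyetracking metrics can be sketched as a simple least-squares regression. The metric names, data, and coefficients below are hypothetical illustrations, not values from the study.

```python
# Sketch: fit subjective scores ~ eyetracking metrics by least squares.
# All metric names and data here are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 40  # hypothetical number of animation stimuli

# Hypothetical per-stimulus metrics: proportion of fixation time on the
# signer's face, and number of gaze transitions between face and hands.
face_time = rng.uniform(0.3, 0.9, n)
transitions = rng.integers(2, 20, n).astype(float)

# Hypothetical subjective scores (assume more face fixation and fewer
# transitions correlate with higher perceived quality, plus noise).
scores = 3.0 + 6.0 * face_time - 0.1 * transitions + rng.normal(0, 0.3, n)

# Fit score ~ b0 + b1*face_time + b2*transitions.
X = np.column_stack([np.ones(n), face_time, transitions])
(b0, b1, b2), *_ = np.linalg.lstsq(X, scores, rcond=None)
print(f"intercept={b0:.2f}  face_time={b1:.2f}  transitions={b2:.2f}")
```

    The fitted signs of the coefficients (positive for face fixation, negative for transitions) are what such a descriptive model would report alongside its fit quality.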

    Adversarial Training for Multi-Channel Sign Language Production

    Sign Languages are rich multi-channel languages, requiring articulation of both manual (hands) and non-manual (face and body) features in a precise, intricate manner. Sign Language Production (SLP), the automatic translation from spoken to sign languages, must embody this full sign morphology to be truly understandable by the Deaf community. Previous work has mainly focused on manual feature production, with an under-articulated output caused by regression to the mean. In this paper, we propose an Adversarial Multi-Channel approach to SLP. We frame sign production as a minimax game between a transformer-based Generator and a conditional Discriminator. Our adversarial discriminator evaluates the realism of sign production conditioned on the source text, pushing the generator towards a realistic and articulate output. Additionally, we fully encapsulate the sign articulators with the inclusion of non-manual features, producing facial features and mouthing patterns. We evaluate on the challenging RWTH-PHOENIX-Weather-2014T (PHOENIX14T) dataset, and report state-of-the-art SLP back-translation performance for manual production. We set new benchmarks for the production of multi-channel sign to underpin future research into realistic SLP.
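
    The minimax game described above pairs a generator loss against a conditional discriminator loss. The sketch below shows only the shape of those conditional adversarial losses on toy vectors; the linear maps stand in for the paper's transformer generator and discriminator, and all names and sizes are illustrative assumptions.

```python
# Sketch: conditional adversarial losses for GAN-style sign production.
# Placeholder linear models, NOT the paper's architecture.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "source text" embedding (condition) and a real pose vector.
text_emb = rng.normal(size=4)
real_pose = rng.normal(size=8)

# Placeholder generator: maps the text condition to a produced pose.
W_g = 0.1 * rng.normal(size=(8, 4))
fake_pose = W_g @ text_emb

# Conditional discriminator: scores a (condition, pose) pair in (0, 1).
w_d = 0.1 * rng.normal(size=12)
def D(cond, pose):
    return sigmoid(w_d @ np.concatenate([cond, pose]))

# Discriminator minimizes -[log D(real|text) + log(1 - D(fake|text))];
# generator minimizes -log D(fake|text) (the non-saturating form).
d_loss = -(np.log(D(text_emb, real_pose)) + np.log(1 - D(text_emb, fake_pose)))
g_loss = -np.log(D(text_emb, fake_pose))
print(f"d_loss={d_loss:.3f}  g_loss={g_loss:.3f}")
```

    In training, the two losses are minimized alternately, so the discriminator's conditioning on the source text is what pushes the generator towards articulate, text-consistent output rather than the mean pose.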

    Best practices for conducting evaluations of sign language animation

    Automatic synthesis of linguistically accurate and natural-looking American Sign Language (ASL) animations would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf. Based on several years of studies, we identify best practices for conducting experimental evaluations of sign language animations with feedback from deaf and hard-of-hearing users. First, we describe our techniques for identifying and screening participants and for controlling the experimental environment. We then discuss rigorous methodological research on how experiment design affects study outcomes when evaluating sign language animations. Our discussion focuses on stimuli design, the effect of using videos as an upper baseline, using videos to present comprehension questions, and eye-tracking as an alternative to recording question-responses.

    Assessing a 3D digital Prototype for Teaching the Brazilian Sign Language Alphabet: an Alternative for Non-programming Designers

    This study aims to analyse users' perceptions of a prototype of 3D digital artifacts for teaching the fingerspelling alphabet of Brazilian Sign Language (LIBRAS). For this purpose, a high-fidelity prototype was developed with a non-programming method, and a usability test was conducted using a structured questionnaire with 31 participants, including Deaf and hearing people. Most users (96.7%) rated the learning experience with the tool as positive, with 67.7% rating the experience as "good", 12.9% as "very good", and 16.1% as "excellent". Comparing the evaluations of Deaf and hearing people showed that both target groups mostly rated it positively. However, most hearing people rated it "good", while the majority of the Deaf rated it as "excellent" (29%) or "outstanding" (14%), compared to 13% and 12%, respectively, among the hearing. In summary, considering the variables presented, the experience was well rated and did not meet significant obstacles or resistance.
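
    The overall percentages reported above are consistent with a simple tally over the 31 participants, which can be checked directly. The per-category counts below are reconstructed from the reported percentages for illustration; the label of the single non-positive response is not given in the abstract and is a placeholder.

```python
# Sketch: tallying Likert-style usability ratings for 31 participants.
# Counts reconstructed from the reported percentages; "other" is a
# placeholder for the one response the abstract does not break down.
from collections import Counter

ratings = ["good"] * 21 + ["very good"] * 4 + ["excellent"] * 5 + ["other"] * 1
n = len(ratings)  # 31 participants
counts = Counter(ratings)

positive = {"good", "very good", "excellent"}
pct = {k: round(100 * v / n, 1) for k, v in counts.items()}
pos_share = 100 * sum(counts[k] for k in positive) / n
print(pct, f"positive={pos_share:.1f}%")
```

    Running this reproduces the abstract's figures: 67.7% "good", 12.9% "very good", 16.1% "excellent", and 96.7% positive overall.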

    Challenges in the Development of Machine Translation Systems for Sign Language

    There are around 300 different sign languages in the world, with a total of roughly 70 million users. In Finland, Finnish or Finland-Swedish Sign Language is the native language of about 3,100 deaf people. Signed languages hold a special status as part of Deaf culture, and signers form both a linguistic and a cultural minority. The user interfaces of computer applications, search engines, and translators mostly do not support sign language, which creates barriers, for example, to the accessibility of information for sign language users. Advances in computer vision and machine learning have lowered the barriers to machine translation of sign language and created opportunities for building systems that exploit it. In this bachelor's thesis, I draw on the research literature to examine the challenges of computational processing of sign language across the different areas of automatic translation and its applications. I carried out the thesis as a literature review, selecting as material current studies, literature analyses, and material produced by the Deaf community, such as position statements. I included sources that approach the topic from different perspectives, for example those of the technology, the user, or the Deaf community. In the thesis I identify several challenges in the design and development of the technology. Significant challenges in these categories include, for example, the high technological requirements of machine-generated sign language, poor understanding of users' needs and requirements, and the limited involvement of the Deaf community in projects. Structural and modality-related differences between signed and spoken languages also complicate development work, since existing natural language processing and machine translation systems are difficult to apply because of the properties of signed languages. In many machine translation systems, sign language would have to be processed in a written form, which most sign languages do not have.
Developing applications for sign language machine translation requires involving users, especially deaf users, at the various stages of development, and carrying out development and research work in multidisciplinary teams. Only in this way can applications be produced that are usable and adopted by their users.

    TR-2015001: A Survey and Critique of Facial Expression Synthesis in Sign Language Animation

    Sign language animations can lead to better accessibility of information and services for people who are deaf and have low literacy skills in spoken/written languages. Because sign language differs from the spoken/written language in word order, syntax, and lexicon, many deaf people find it difficult to comprehend text on a computer screen or captions on a television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations, and their coordination with the manual signs is crucial for the interpretation of the signed message. Software that advances the support of facial expressions in the generation of sign language animation could make this technology more acceptable to deaf people. In this survey, we discuss the challenges in facial expression synthesis and compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. Beginning with an overview of facial expression linguistics, sign language animation technologies, and some background on animating facial expressions, we then present the search strategy and criteria used to select the five projects that are the primary focus of this survey. The survey continues by introducing the work from the five projects under consideration. Their contributions are compared in terms of support for a specific sign language, categories of facial expressions investigated, focus range in the animation generation, use of annotated corpora, input data or hypotheses for their approach, and other factors. Strengths and drawbacks of the individual projects are identified from these perspectives. The survey concludes with our current research focus in this area and future prospects.