5 research outputs found

    Data-Driven Synthesis and Evaluation of Syntactic Facial Expressions in American Sign Language Animation

    Technology to automatically synthesize linguistically accurate and natural-looking animations of American Sign Language (ASL) would make it easier to add ASL content to websites and media, thereby increasing information accessibility for many people who are deaf and have low English literacy skills. State-of-the-art sign language animation tools focus mostly on the accuracy of manual signs rather than on facial expressions. We are investigating the synthesis of syntactic ASL facial expressions, which are grammatically required and essential to the meaning of sentences. In this thesis, we propose to: (1) explore the methodological aspects of evaluating sign language animations with facial expressions, and (2) examine data-driven modeling of facial expressions from multiple recordings of ASL signers. In Part I of this thesis, we propose to conduct rigorous methodological research on how experiment design affects study outcomes when evaluating sign language animations with facial expressions. Our research questions involve: (i) stimuli design, (ii) the effect of videos as an upper baseline and for presenting comprehension questions, and (iii) eye-tracking as an alternative to recording question responses from participants. In Part II of this thesis, we propose to use generative models to automatically uncover the underlying trace of ASL syntactic facial expressions from multiple recordings of ASL signers, and apply these facial expressions to manual signs in novel animated sentences. We hypothesize that an annotated sign language corpus, including both the manual and non-manual signs, can be used to model and generate linguistically meaningful facial expressions if it is combined with facial feature extraction techniques, statistical machine learning, and an animation platform with detailed facial parameterization. To further improve sign language animation technology, we will assess, with ASL signers, the quality of the animations generated by our approach through the rigorous evaluation methodologies described in Part I.
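    The data-driven modeling proposed in Part II is described only at a high level in the abstract. As a rough illustration of the general idea, the sketch below (not the thesis's actual pipeline) time-normalizes facial-feature traces extracted from several recordings of the same syntactic expression and fits a simple mean-plus-principal-components generative model, from which a representative trace can be synthesized. The function names, the 60-frame normalization length, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch, assuming each recording has already been reduced to a
# per-frame facial-feature vector (e.g., eyebrow height, head tilt).
import numpy as np


def time_normalize(trace: np.ndarray, n_frames: int = 60) -> np.ndarray:
    """Resample a (frames, features) trace to a fixed number of frames."""
    src = np.linspace(0.0, 1.0, num=trace.shape[0])
    dst = np.linspace(0.0, 1.0, num=n_frames)
    return np.stack(
        [np.interp(dst, src, trace[:, f]) for f in range(trace.shape[1])], axis=1
    )


def fit_expression_model(recordings: list[np.ndarray], n_components: int = 2):
    """Fit a mean-plus-principal-components model across recordings."""
    aligned = np.stack([time_normalize(r) for r in recordings])   # (N, T, F)
    flat = aligned.reshape(len(recordings), -1)                   # (N, T*F)
    mean = flat.mean(axis=0)
    # Principal components capture how signers vary around the mean trace.
    _, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
    return mean, vt[:n_components]


def generate_trace(mean, components, weights, n_frames: int = 60):
    """Synthesize a new trace as the mean plus weighted component offsets."""
    flat = mean + weights @ components
    return flat.reshape(n_frames, -1)


# Example with synthetic data: 5 recordings, 50-70 frames each, 3 features.
rng = np.random.default_rng(0)
recordings = [rng.normal(size=(rng.integers(50, 70), 3)) for _ in range(5)]
mean, comps = fit_expression_model(recordings)
trace = generate_trace(mean, comps, weights=np.array([0.5, -0.2]))
print(trace.shape)  # (60, 3): one row of feature values per animation frame
```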

    Facilitating American Sign Language learning for hearing parents of deaf children via mobile devices

    In the United States, between 90 and 95% of deaf children are born to hearing parents. In most circumstances, the birth of a deaf child is the first experience these parents have with American Sign Language (ASL) and the Deaf community. Parents learn ASL as a second language to provide their children with language models and to be able to communicate with their children more effectively, but they face significant challenges. To address these challenges, I have developed a mobile learning application, SMARTSign, to help parents of deaf children learn ASL vocabulary. I hypothesize that providing a method for parents to learn and practice ASL words associated with popular children's stories on their mobile phones would help improve their ASL vocabulary and abilities more than if words were grouped by theme. I posit that parents who learn vocabulary associated with children's stories will use the application more, which will lead to more exposure to ASL and more learned vocabulary. My dissertation consists of three studies. First, I show that novices are able to reproduce signs presented on mobile devices with high accuracy regardless of source video resolution. Next, I interview hearing parents of deaf children to discover the difficulties they have with current methods for learning ASL. When asked which methods of presenting signs they preferred, participants were most interested in learning vocabulary associated with children's stories. Finally, I deploy SMARTSign to parents for four weeks. Participants learning story vocabulary used the application more often and had higher sign recognition scores than participants who learned vocabulary based on word types. The condition did not affect participants' ability to produce the signed vocabulary.
    PhD thesis. Committee Chair: Starner, Thad; Committee Member: Abowd, Gregory; Committee Member: Bruckman, Amy; Committee Member: Guzdial, Mark; Committee Member: Quinto-Pozos, David; Committee Member: Singleton, Jenn

    Modelo de referência para desenvolvimento de artefatos de apoio ao acesso dos surdos ao audiovisual [Reference model for the development of artifacts to support deaf people's access to audiovisual content]

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Knowledge Engineering and Management (Programa de Pós-graduação em Engenharia e Gestão do Conhecimento). Information and communication technologies enable individuals to participate in the knowledge society, yet the accessibility of audiovisual content in digital media for deaf people still requires study before it can be effectively and widely adopted. The objective is to identify and analyze alternatives for developing a reference model that guides the reuse of processes, methods, and techniques for producing artifacts that promote deaf people's access to audiovisual content on digital platforms. Based on a systematic literature review, recommendations are identified for presenting audiovisual content accessibly to deaf audiences, along with the requirements that must be met to support the access strategies used by different profiles of deaf users, and alternatives that can meet these demands are enumerated, such as the use of text captions and a sign language video window. The reference model covers content production based on the translation of the audiovisual material, with recommendations identified and elaborated for generating captions as sign language video or in written form. It seeks to integrate the production of these types of artifacts through manual or automatic processes, identifying the media that support or result from the processes of producing accessibility-support artifacts. The reference model is validated through consultation with experts and applied in a reference implementation of an accessibility system with delivery scenarios for interactive digital television and the web. As results, recommendations and alternatives are presented regarding the processes and media required for deaf people's access to digital audiovisual content.

    Practical, appropriate, empirically-validated guidelines for designing educational games

    There has recently been a great deal of interest in the potential of computer games to function as innovative educational tools. However, there is very little evidence of games fulfilling that potential. Indeed, the process of merging the disparate goals of education and games design appears problematic, and there are currently no practical guidelines for how to do so in a coherent manner. In this paper, we describe the successful, empirically validated teaching methods developed by behavioural psychologists and point out how they are uniquely suited to take advantage of the benefits that games offer to education. We conclude by proposing some practical steps for designing educational games, based on the techniques of Applied Behaviour Analysis. It is intended that this paper can both focus educational games designers on the features of games that are genuinely useful for education, and also introduce a successful form of teaching that this audience may not yet be familiar with.

    A web-based user survey for evaluating power saving strategies for deaf users of MobileASL

    MobileASL is a video compression project for two-way, real-time video communication on cell phones, allowing Deaf people to communicate in the language most accessible to them, American Sign Language. Unfortunately, running MobileASL depletes a full battery charge in just a few hours. Previous work on MobileASL investigated a method called variable frame rate (VFR) to increase battery duration. We expand on this previous work by creating two new power saving algorithms: variable spatial resolution (VSR), and the application of both VFR and VSR together. These algorithms extend battery life by altering the temporal and/or spatial resolution of the video transmitted by MobileASL. We found that implementing only VFR extended the battery life from 284 minutes to 307 minutes; implementing only VSR extended the battery life to 306 minutes; and implementing both VFR and VSR extended the battery life to 315 minutes. We evaluated all three algorithms by creating a linguistically accessible online survey to investigate Deaf people's perceptions of video quality when these algorithms were applied. In our survey results, we found that VFR produces perceived video choppiness and VSR produces perceived video blurriness; however, a surprising finding was that when VFR and VSR are used together, they largely ameliorate the perceived choppiness and blurriness, i.e., each offsets the degradation introduced by the other. This is a useful finding because using VFR and VSR together saves the most battery life.
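    As a rough illustration of the two power-saving ideas described above (not the MobileASL implementation, whose encoder-level details are not given here), the sketch below drops frames whose content changes little from the previously kept frame (variable frame rate) and downscales frames by block averaging (variable spatial resolution). The motion threshold, the 2x downscale factor, and the synthetic video are illustrative assumptions.

```python
# Minimal sketch of VFR and VSR applied to grayscale frames; thresholds and
# factors are illustrative, not MobileASL's actual parameters.
import numpy as np


def variable_frame_rate(frames: list[np.ndarray], threshold: float = 8.0):
    """Keep a frame only if it differs enough from the last kept frame."""
    kept = [frames[0]]
    for frame in frames[1:]:
        motion = np.mean(np.abs(frame.astype(np.int16) - kept[-1].astype(np.int16)))
        if motion > threshold:          # enough motion (e.g., active signing)
            kept.append(frame)
    return kept


def variable_spatial_resolution(frame: np.ndarray, factor: int = 2) -> np.ndarray:
    """Downscale a grayscale frame by averaging factor x factor pixel blocks."""
    h, w = (frame.shape[0] // factor) * factor, (frame.shape[1] // factor) * factor
    blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3)).astype(frame.dtype)


# Example with synthetic grayscale video: 30 frames of 96x96 pixels.
rng = np.random.default_rng(1)
video = [rng.integers(0, 256, size=(96, 96), dtype=np.uint8) for _ in range(30)]
kept = variable_frame_rate(video)                       # fewer frames to encode
small = [variable_spatial_resolution(f) for f in kept]  # fewer pixels per frame
print(len(video), len(kept), small[0].shape)            # e.g. 30, N <= 30, (48, 48)
```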