
    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications that use 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and with other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combining different animation paradigms to enhance both naturalness and control.
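
    A minimal sketch of the kind of per-body-part combination and cross-fade concatenation described above, assuming poses stored as joint-name-to-rotation dictionaries; the class, function names, and linear blend are illustrative assumptions, not the paper's implementation.

        # Hypothetical pose representation; real systems would use quaternions
        # and spherical interpolation rather than per-component lerp.
        from dataclasses import dataclass

        @dataclass
        class Pose:
            joints: dict  # joint name -> rotation as a tuple of Euler angles

        def combine(base: Pose, overlay: Pose, parts: set) -> Pose:
            # Overlay the joints named in 'parts' (e.g., a procedural arm wave)
            # onto 'base' (e.g., a motion-captured walk).
            joints = dict(base.joints)
            for name in parts & overlay.joints.keys():
                joints[name] = overlay.joints[name]
            return Pose(joints)

        def concatenate(clip_a, clip_b, blend_frames=10):
            # Join two clips, cross-fading joint rotations over 'blend_frames'
            # frames to avoid a visible discontinuity at the seam.
            out = list(clip_a[:-blend_frames])
            for i in range(blend_frames):
                t = (i + 1) / (blend_frames + 1)
                a, b = clip_a[-blend_frames + i], clip_b[i]
                out.append(Pose({n: tuple((1 - t) * av + t * bv
                                          for av, bv in zip(a.joints[n], b.joints[n]))
                                 for n in a.joints}))
            out.extend(clip_b[blend_frames:])
            return out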

    Uma linguagem expansível para descrição da Língua Brasileira de Sinais

    Deaf people communicate naturally through gestural-visual languages called sign languages. These are natural languages, composed of lexical items called signs, with their own vocabularies and grammars. In this paper, we propose an expressive and consistent language for describing signs in Brazilian Sign Language (LIBRAS). This language allows all the constituent parameters of a sign to be specified and, from them, an animation of the sign to be generated. In addition, the proposed language is flexible in the sense that new parameters (or phonemes) can be defined dynamically. To provide a case study for the proposed language, we also developed a system for the collaborative construction of a LIBRAS vocabulary based on 3D humanoid avatars. Tests with Brazilian deaf users were performed to evaluate the proposal.
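
    As an illustration of how such an extensible description might look, the sketch below models a sign as a set of named parameters with a registry that can grow at runtime; all class, method, and parameter names here are invented for the example, not taken from the paper.

        class SignDescription:
            # Registry of known parameter (phoneme) types; new ones can be
            # added dynamically, mirroring the language's extensibility.
            parameter_types = {"handshape", "location", "orientation",
                               "movement", "facial_expression"}

            @classmethod
            def register_parameter(cls, name):
                cls.parameter_types.add(name)

            def __init__(self, gloss, **parameters):
                unknown = set(parameters) - self.parameter_types
                if unknown:
                    raise ValueError(f"unregistered parameters: {unknown}")
                self.gloss = gloss
                self.parameters = parameters

        # Usage: describe a sign, then extend the language with a new parameter.
        sign = SignDescription("HOUSE", handshape="B", location="neutral_space",
                               movement="contact_then_apart")
        SignDescription.register_parameter("mouthing")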

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might play in the understandability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL that we consider fundamental to this research. Unlike spoken languages, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is: augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999) to deliver underlying emotional facial expressions (EFEs) will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set in two key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show that there was very little difference between the comprehension scores of the baseline avatars and those augmented with EFEs. However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology more generally. Significantly, participant feedback on these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
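
    A minimal sketch of layering an underlying emotional facial expression beneath the linguistic facial morphs a sign requires, assuming a blend-shape (morph-target) face rig; the morph names, weights, and priority rule are assumptions for illustration only.

        # Hypothetical emotion-to-morph tables; a real rig defines its own set.
        EMOTIONS = {
            "joy":     {"mouth_smile": 0.8, "cheek_raise": 0.5},
            "sadness": {"brow_inner_up": 0.7, "mouth_frown": 0.6},
            "anger":   {"brow_lower": 0.9, "lid_tighten": 0.5},
        }

        def apply_emotion(sign_morphs, emotion, intensity):
            # Scale the emotion layer, then let the linguistic channel win
            # wherever the two specify the same morph target.
            weights = {k: intensity * v for k, v in EMOTIONS[emotion].items()}
            weights.update(sign_morphs)
            return {k: min(1.0, max(0.0, v)) for k, v in weights.items()}

        # e.g., a question brow-raise signed over a mild underlying joy
        frame = apply_emotion({"brow_raise": 1.0}, "joy", intensity=0.4)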

    High-level Specification and Animation of Communicative Gestures

    This paper describes a complete system for the specification and generation of visual communicative gestures. A high-level language for the specification of hand–arm communication gestures was developed. This language is based on both a discrete description of space and a movement decomposition inspired by sign language gestures. Communication gestures are represented by symbolic commands that can be described by qualitative data and translated into spatio-temporal targets that drive the generation system. This approach is well suited to a class of generation models controlled by key-point information at the trajectory level. The animation model used in our approach is composed of a set of sensory-motor control loops. Each of these loops computes, in real time, updated angular coordinates of the articulatory structure by minimizing the distance between the target and current locations, while satisfying psycho-motor laws of biological movement. The whole control system is applied to the synthesis of communication and sign language gestures. A synthetic character is animated and some results are presented. © 2001 Academic Press
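
    The sketch below shows one such control loop in miniature for a planar two-link arm: each frame, the joint angles are nudged down the gradient of the squared distance between the hand and its spatial target. The gradient-descent update, gain, and link lengths are illustrative assumptions, not the paper's sensory-motor model.

        import math

        UPPER_ARM, FOREARM = 0.3, 0.25   # link lengths in metres (assumed)

        def hand_position(q1, q2):
            # Forward kinematics of a 2-link planar arm.
            x = UPPER_ARM * math.cos(q1) + FOREARM * math.cos(q1 + q2)
            y = UPPER_ARM * math.sin(q1) + FOREARM * math.sin(q1 + q2)
            return x, y

        def control_step(q1, q2, target, gain=2.0, dt=1 / 60):
            # One real-time update: reduce the hand-target distance by moving
            # each joint angle along the negative gradient of 0.5 * |error|^2.
            x, y = hand_position(q1, q2)
            ex, ey = x - target[0], y - target[1]
            dx1 = -UPPER_ARM * math.sin(q1) - FOREARM * math.sin(q1 + q2)
            dy1 =  UPPER_ARM * math.cos(q1) + FOREARM * math.cos(q1 + q2)
            dx2 = -FOREARM * math.sin(q1 + q2)
            dy2 =  FOREARM * math.cos(q1 + q2)
            q1 -= gain * dt * (ex * dx1 + ey * dy1)
            q2 -= gain * dt * (ex * dx2 + ey * dy2)
            return q1, q2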