132 research outputs found

    Final Report to NSF of the Standards for Facial Animation Workshop

    The human face is an important and complex communication channel, and a very familiar and sensitive object of human perception. The field of facial animation has grown greatly in the past few years as fast computer graphics workstations have made the modeling and real-time animation of hundreds of thousands of polygons affordable and almost commonplace. Many applications have been developed, such as teleconferencing, surgery, information assistance systems, games, and entertainment. To solve these different problems, different approaches to both animation control and modeling have been developed.

    Framework for proximal personified interfaces


    Analysis domain model for shared virtual environments

    The field of shared virtual environments, which also encompasses online games and social 3D environments, has a system landscape consisting of multiple solutions that share great functional overlap. However, there is little interoperability between the different solutions. A shared virtual environment has an associated problem domain that is highly complex, raising difficult challenges for the development process, starting with the architectural design of the underlying system. This paper makes two main contributions. The first is a broad domain analysis of shared virtual environments, which enables developers to understand the whole rather than only the part(s). The second is a reference domain model for discussing and describing solutions: the Analysis Domain Model.
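    To make the idea of a domain model concrete, the toy fragment below sketches a few entities a shared-virtual-environment model typically covers (worlds, avatars, replicated state). The class names and fields are illustrative assumptions, not the actual Analysis Domain Model from the paper, which is far broader.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    entity_id: str
    position: tuple  # (x, y, z) in world coordinates

@dataclass
class Avatar(Entity):
    user_name: str = "anonymous"  # the human user this avatar represents

@dataclass
class VirtualWorld:
    name: str
    entities: dict = field(default_factory=dict)

    def spawn(self, entity):
        """Register an entity in this world."""
        self.entities[entity.entity_id] = entity

    def replicate_state(self):
        """State snapshot a server might broadcast to connected clients."""
        return {eid: e.position for eid, e in self.entities.items()}

world = VirtualWorld("lobby")
world.spawn(Avatar("a1", (0.0, 0.0, 0.0), user_name="alice"))
snapshot = world.replicate_state()
```

    A real model would also cover sessions, access control, and event distribution; the point here is only that a shared domain vocabulary of this kind is what enables interoperability discussions between solutions.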

    Gesture generation by imitation: from human behavior to computer character animation

    This dissertation shows how to generate conversational gestures for an animated agent from annotated text input. The central idea is to imitate the gestural behavior of human individuals. Using TV show recordings as empirical data, key gestural parameters are extracted for the generation of natural, individual gestures. A software tool was developed for each of the three tasks in the generation pipeline. The generic ANVIL annotation tool supports the transcription of gesture and speech in the empirical data. The NOVALIS module uses the annotations to compute individual gesture profiles with statistical methods. The NOVA generator creates gestures based on these profiles and heuristic rules, and outputs them in a linear script. In all, this work presents a complete pipeline from collecting empirical data to obtaining an executable script, and provides the necessary software as well.
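    The profile-then-generate idea can be sketched in miniature: estimate a speaker's gesture-type distribution from annotations, then sample gestures from it. The record format and function names below are hypothetical stand-ins, not the actual ANVIL/NOVALIS/NOVA interfaces.

```python
import random
from collections import Counter

# Hypothetical annotation records: (speaker, gesture_type) pairs, roughly
# what an ANVIL-style transcription of a TV show might yield.
annotations = [
    ("host", "beat"), ("host", "beat"), ("host", "deictic"),
    ("guest", "iconic"), ("guest", "beat"), ("guest", "iconic"),
]

def gesture_profile(records, speaker):
    """Relative frequency of each gesture type for one speaker."""
    counts = Counter(g for s, g in records if s == speaker)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def generate(profile, rng, n=5):
    """Sample a linear gesture script according to the individual profile."""
    types, weights = zip(*profile.items())
    return rng.choices(types, weights=weights, k=n)

profile = gesture_profile(annotations, "host")
script = generate(profile, random.Random(0))
```

    The dissertation's NOVALIS module computes richer profiles than a single frequency table, and NOVA combines them with heuristic rules; this sketch only illustrates the imitation principle.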

    Northern Sparks

    An “episode of light” in Canada sparked by Expo 67 when new art forms, innovative technologies, and novel institutional and policy frameworks emerged together. Understanding how experimental art catalyzes technological innovation is often prized yet typically reduced to the magic formula of “creativity.” In Northern Sparks, Michael Century emphasizes the role of policy and institutions by showing how novel art forms and media technologies in Canada emerged during a period of political and social reinvention, starting in the 1960s with the energies unleashed by Expo 67. Debunking conventional wisdom, Century reclaims innovation from both its present-day devotees and detractors by revealing how experimental artists critically challenge as well as discover and extend the capacities of new technologies. Century offers a series of detailed cross-media case studies that illustrate the cross-fertilization of art, technology, and policy. These cases span animation, music, sound art and acoustic ecology, cybernetic cinema, interactive installation art, virtual reality, telecommunications art, software applications, and the emergent metadiscipline of human-computer interaction. They include Norman McLaren's “proto-computational” film animations; projects in which the computer itself became an agent, as in computer-aided musical composition and choreography; an ill-fated government foray into interactive networking, the videotext system Telidon; and the beginnings of virtual reality at the Banff Centre. Century shows how Canadian artists approached new media technologies as malleable creative materials, while Canada undertook a political reinvention alongside its centennial celebrations. Northern Sparks offers a uniquely nuanced account of innovation in art and technology illuminated by critical policy analysis

    Video annotation wiki for South African sign language

    Masters of Science. The SASL project at the University of the Western Cape aims to develop a fully automated translation system between English and South African Sign Language (SASL). Three important aspects of this system require SASL documentation and knowledge: recognition of SASL from a video sequence, linguistic translation between SASL and English, and the rendering of SASL. Unfortunately, SASL documentation is a scarce resource, and no official or complete documentation exists. This research focuses on creating an online collaborative video annotation knowledge management system for SASL, to which members of the community can upload SASL videos and annotate them in any of the sign language notation systems SignWriting, HamNoSys, and/or Stokoe. In this way, knowledge about SASL structure is pooled into a central and freely accessible knowledge base that can be used as required. The usability and performance of the system were evaluated. Usability was graded by users on a rating scale from one to five for a specific set of tasks; the system achieved an overall usability score of 3.1, slightly better than average. The performance evaluation included load and stress tests, which measured the system's response time for a number of users performing a specific set of tasks. The system was found to be stable and able to scale to a growing user base by improving the underlying hardware.
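    The usability score described above can be reproduced as a mean of per-task mean ratings on the 1-to-5 scale. The task names and individual ratings below are illustrative assumptions; the thesis reports only the aggregate figure of 3.1.

```python
# Hypothetical per-task ratings (1-5 scale) from several users.
ratings = {
    "upload video": [4, 3, 3],
    "annotate in SignWriting": [3, 2, 4],
    "search annotations": [3, 4, 3],
}

def overall_usability(task_ratings):
    """Mean of per-task mean ratings, one simple way to aggregate a study."""
    per_task = [sum(r) / len(r) for r in task_ratings.values()]
    return sum(per_task) / len(per_task)

score = overall_usability(ratings)
```

    Averaging per task first (rather than pooling all ratings) keeps a heavily-rated task from dominating the overall score; either convention is defensible as long as it is stated.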

    Interactions in Virtual Worlds: Proceedings Twente Workshop on Language Technology 15


    Facial modelling and animation trends in the new millennium: a survey

    M.Sc (Computer Science). Facial modelling and animation is considered one of the most challenging areas in the animation world. Since Parke and Waters's (1996) comprehensive book, no major work encompassing the entire field of facial animation has been published. This thesis covers Parke and Waters's work while also surveying developments in the field since 1996. It describes, analyses, and compares (where applicable) the existing techniques and practices used to produce facial animation. Where applicable, related techniques are grouped in the same chapter and described chronologically, outlining their differences as well as their advantages and disadvantages. The thesis concludes with exploratory work towards a talking head for Northern Sotho: facial animation and lip synchronisation of a Northern Sotho fragment are produced using software tools primarily designed for English.

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions might play in the understandability of sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community's requirement for a visual-gestural language as well as some linguistic attributes of ISL that we consider fundamental to this research. Unlike spoken languages, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages and consider which, if any, is the most suitable transcription method for the medical-receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from methods currently used in the field of humanoid animation, more specifically the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999), delivered as underlying facial expressions, will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set in 2 key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation, including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show very little difference between the comprehension scores of the baseline avatars and those augmented with EFEs.
    However, after comparing the comprehension results for the synthetic human avatar "Anna" against the caricature-type avatar "Luna", the synthetic human avatar Anna was the clear winner. The qualitative feedback gave us insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions in the evaluation addressed sign language avatar technology more generally. Significantly, participant feedback on these questions indicates a rise in the level of literacy among Deaf adults as a result of mobile technology.
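    The baseline-versus-augmented comparison amounts to comparing mean comprehension scores across the two conditions. The scores below are made up for illustration (the thesis reports only that the difference was small), and `score_gap` is a hypothetical helper, not the evaluation code used in the study.

```python
from statistics import mean

# Illustrative comprehension scores (fraction of questions answered correctly)
# per utterance, for baseline vs. EFE-augmented avatar conditions.
baseline = [0.72, 0.65, 0.70, 0.68]
augmented = [0.70, 0.69, 0.71, 0.67]

def score_gap(condition_a, condition_b):
    """Mean comprehension difference between two evaluation conditions."""
    return mean(condition_a) - mean(condition_b)

gap = score_gap(augmented, baseline)
```

    A near-zero gap, as found in the thesis, is exactly why the qualitative feedback matters: the mean difference alone cannot explain why scores stayed similar.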