
    Agents for educational games and simulations

    This book consists mainly of revised papers that were presented at the Agents for Educational Games and Simulation (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from various submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    The Role of Emotional and Facial Expression in Synthesised Sign Language Avatars

    This thesis explores the role that underlying emotional facial expressions (EFEs) might have with regard to understandability in sign language avatars. Focusing specifically on Irish Sign Language (ISL), we examine the Deaf community’s requirement for a visual-gestural language as well as some linguistic attributes of ISL which we consider fundamental to this research. Unlike spoken language, visual-gestural languages such as ISL have no standard written representation. Given this, we compare current methods of written representation for signed languages as we consider which, if any, is the most suitable transcription method for the medical receptionist dialogue corpus. A growing body of work is emerging from the field of sign language avatar synthesis. These works are now at a point where they can benefit greatly from introducing methods currently used in the field of humanoid animation and, more specifically, the application of morphs to represent facial expression. The hypothesis underpinning this research is that augmenting an existing avatar (eSIGN) with various combinations of the 7 widely accepted universal emotions identified by Ekman (1999) to deliver underlying facial expressions will make that avatar more human-like. This research accepts as true that this is a factor in improving usability and understandability for ISL users. Using human evaluation methods (Huenerfauth et al., 2008), the research compares an augmented set of avatar utterances against a baseline set with regard to 2 key areas: comprehension and naturalness of facial configuration. We outline our approach to the evaluation including our choice of ISL participants, interview environment, and evaluation methodology. Remarkably, the results of this manual evaluation show that there was very little difference between the comprehension scores of the baseline avatars and those augmented with EFEs. However, after comparing the comprehension results for the synthetic human avatar “Anna” against the caricature-type avatar “Luna”, the synthetic human avatar Anna was the clear winner. The qualitative feedback allowed us an insight into why comprehension scores were not higher for each avatar, and we feel that this feedback will be invaluable to the research community in the future development of sign language avatars. Other questions asked in the evaluation focused on sign language avatar technology in a more general manner. Significantly, participant feedback in regard to these questions indicates a rise in the level of literacy amongst Deaf adults as a result of mobile technology.
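    To make the augmentation idea concrete, here is a minimal, hypothetical sketch of layering an underlying emotional facial expression beneath a sign's own (grammatical) facial morphs. The morph-target names, weights, and blending rule are illustrative assumptions and are not taken from the eSIGN avatar or this thesis.

```python
# Hypothetical sketch: blend an underlying Ekman-style emotion into an
# avatar's facial configuration without overriding the morphs that carry
# grammatical meaning for a sign. All names and weights are invented.

# Each emotion maps to a set of morph-target weights in [0.0, 1.0].
EMOTION_MORPHS = {
    "happiness": {"mouth_corner_up": 0.7, "cheek_raise": 0.5},
    "sadness":   {"brow_inner_up": 0.6, "mouth_corner_down": 0.5},
    "surprise":  {"brow_raise": 0.8, "jaw_open": 0.3},
}

def blend_expression(emotion, intensity, sign_morphs):
    """Combine an underlying emotion with the sign's own facial morphs.

    Sign-specific (grammatical) morphs take priority; the emotional layer
    only contributes channels the sign does not already use.
    """
    blended = {k: v * intensity for k, v in EMOTION_MORPHS.get(emotion, {}).items()}
    blended.update(sign_morphs)  # grammatical facial markers override the emotion
    return blended

if __name__ == "__main__":
    # Mild underlying happiness beneath a raised-brow (e.g. yes/no question) marker.
    print(blend_expression("happiness", 0.4, {"brow_raise": 0.9}))
```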

    Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter

    Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.
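    As a rough illustration of such a scripted, multi-channel design, the sketch below represents one presentation step that keeps speech, posture, pointing, and the 2D display content together; the field names and values are assumptions, not the authors' actual scripting format.

```python
# Hypothetical sketch of a presenter script: each step synchronises the speech
# channel with posture, pointing, and the slide region shown on the 2D display.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PresentationStep:
    speech: str                         # text spoken by the presenter
    slide: str                          # id of the 2D display content shown
    point_at: Optional[str] = None      # named region on the slide to point at
    posture: str = "neutral"            # gross body posture for this step
    gestures: List[str] = field(default_factory=list)  # involuntary/beat motions

script = [
    PresentationStep("Welcome to the gallery.", slide="intro"),
    PresentationStep("This painting dates from 1642.", slide="painting_01",
                     point_at="painting_region", posture="lean_forward",
                     gestures=["head_nod"]),
]

for step in script:
    # A real presenter would dispatch each channel to its own renderer and keep
    # them time-aligned; here we simply trace the schedule.
    print(f"[{step.slide}] say {step.speech!r}, point at {step.point_at}, "
          f"posture={step.posture}, gestures={step.gestures}")
```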

    Reusable, Interactive, Multilingual Online Avatars

    This paper details a system for delivering reusable, interactive, multilingual avatars in online children’s games. The development of these avatars is based on the concept of an intelligent media object that can be repurposed across different productions. The system is both language and character independent, allowing content to be reused in a variety of contexts and locales. In the current implementation, the user is provided with an interactive animated robot character that can be dressed with a range of body parts chosen by the user in real time. The robot character reacts to each selection of a new part in a different manner, relative to simple narrative constructs that define a number of scripted responses. Once configured, the robot character subsequently appears as a help avatar throughout the rest of the game. At the time of writing, the system is in beta testing on the My Tiny Planets website to fully assess its effectiveness.
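    One possible reading of the language- and character-independent design is a response table keyed by abstract events, with localized text resolved per locale at playback time; the sketch below uses invented event names, locales, and strings purely for illustration.

```python
# Hypothetical sketch: scripted avatar responses keyed by an abstract event,
# with the localized line chosen at runtime. Events, locales, and strings are
# placeholders, not content from the described system.

RESPONSES = {
    "part_selected:head": {
        "en": "Nice choice! That head really suits me.",
        "fr": "Bon choix ! Cette tête me va très bien.",
    },
    "part_selected:legs": {
        "en": "Now I can walk around!",
        "fr": "Maintenant je peux me promener !",
    },
}

def react(event, locale, fallback="en"):
    """Return the avatar's scripted response for an event in the user's locale."""
    lines = RESPONSES.get(event, {})
    return lines.get(locale, lines.get(fallback, ""))

print(react("part_selected:head", "fr"))  # localized reaction to dressing the robot
```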

    An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games

    Game developers increasingly consider the degree to which character animation emulates facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in the disciplines of Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow toward creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from which to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow proceeds from developing and annotating a fictional scenario with actors, to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion annotation of the corpus and the network's output, and finally to assessing how closely its autonomous animation control of a 3D character facial mesh resembles the behavior developed by the actor. The resulting workflow includes a method for developing a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resembling the character behavior developed by a human actor.
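    As a simplified, purely illustrative sketch of the kind of controller such a workflow targets, the snippet below maps an annotated emotion vector for a frame to facial blendshape weights; the layer sizes, emotion labels, and blendshape names are assumptions, not the architecture validated in this work.

```python
# Hypothetical sketch: a small neural network mapping annotated emotion
# intensities to blendshape weights for a 3D character's facial mesh.
# (Untrained here; training would fit it to the video emotion corpus.)

import torch
import torch.nn as nn

EMOTIONS = ["anger", "fear", "joy", "sadness", "surprise"]           # input labels
BLENDSHAPES = ["brow_raise", "brow_furrow", "jaw_open",
               "mouth_corner_up", "mouth_corner_down", "eye_widen"]  # outputs

controller = nn.Sequential(
    nn.Linear(len(EMOTIONS), 32),
    nn.ReLU(),
    nn.Linear(32, len(BLENDSHAPES)),
    nn.Sigmoid(),                      # keep blendshape weights in [0, 1]
)

# One annotated frame: strong surprise with mild joy.
frame = torch.tensor([[0.0, 0.0, 0.3, 0.0, 0.9]])
weights = controller(frame)
print(dict(zip(BLENDSHAPES, weights.squeeze(0).tolist())))
```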

    Modeling the Speed and Timing of American Sign Language to Generate Realistic Animations

    While there are many Deaf or Hard of Hearing (DHH) individuals with excellent reading literacy, there are also some DHH individuals who have lower English literacy. American Sign Language (ASL) is not simply a method of representing English sentences; it is possible for an individual to be fluent in ASL while having limited fluency in English. To overcome this barrier, we aim to make it easier to generate ASL animations for websites by using motion-capture data recorded from human signers to build predictive models for ASL animations; our goal is to automate this aspect of animation synthesis to create realistic animations. This dissertation consists of several parts. Part I defines key terminology for timing and speed parameters and surveys prior linguistic and computational research on ASL. Next, the motion-capture data that our lab recorded from human signers is discussed, and details are provided about how we enhanced this corpus to make it useful for speed and timing research. Finally, we present the process of adding layers of linguistic annotation and processing this data for speed and timing research. Part II presents our research on data-driven predictive models for various speed and timing parameters of ASL animations. The focus is on predicting (1) the existence of a pause after each ASL sign, (2) the time duration of these pauses, and (3) the change of speed for each ASL sign within a sentence. We measure the quality of the proposed models by comparing them with state-of-the-art rule-based models. Furthermore, using these models, we synthesized ASL animation stimuli and conducted a user-based evaluation with DHH individuals to measure the usability of the resulting animations. Finally, Part III presents research on whether the timing parameters individuals prefer for animation may differ from those in recordings of human signers. It also investigates the distribution of acceleration curves in recordings of human signers and whether utilizing a similar set of curves in ASL animations leads to measurable improvements in DHH users' perception of animation quality.
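    The three prediction tasks can be pictured as one classifier and two regressors over per-sign features; the sketch below only illustrates that framing, with invented features and toy data rather than the dissertation's actual models or corpus.

```python
# Illustrative sketch of the three tasks: (1) does a pause follow the sign,
# (2) how long is that pause, (3) how does the sign's speed change.
# Features and training values are invented toy data.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Per-sign features: [relative position in sentence, sentence length in signs,
#                     syntactic boundary immediately after the sign (0/1)]
X = np.array([[0.2, 8, 0], [0.5, 8, 1], [0.9, 8, 0], [0.5, 12, 1]])

pause_exists   = np.array([0, 1, 0, 1])              # task 1 labels
pause_duration = np.array([0.0, 0.35, 0.0, 0.42])    # task 2 targets (seconds)
speed_change   = np.array([1.0, 0.80, 1.1, 0.75])    # task 3 targets (relative speed)

pause_clf    = LogisticRegression().fit(X, pause_exists)
duration_reg = LinearRegression().fit(X, pause_duration)
speed_reg    = LinearRegression().fit(X, speed_change)

new_sign = np.array([[0.5, 10, 1]])
print(pause_clf.predict(new_sign),
      duration_reg.predict(new_sign),
      speed_reg.predict(new_sign))
```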

    I am here - are you there? Sense of presence and implications for virtual world design

    We use the language of presence and place when we interact online: in our instant text messaging windows we often post, "Are you there?" Research indicates the importance of the sense of presence for computer-supported collaborative virtual learning. To realize the potential of virtual worlds such as Second Life, which may have advantages over conventional text-based environments, we need an understanding of design and the emergence of the sense of presence. A construct was created for the sense of presence as a collaborative, action-based process (Spagnolli, Varotto, & Mantovani, 2003) with four dimensions (sense of place, social presence, individual agency, and mediated collaborative actions). Nine design principles were mapped against the four dimensions. The guiding question for the study's exploration of the sense of presence was: In the virtual world Second Life, what is the effect on the sense of presence in collaborative learning spaces designed according to the sense of presence construct proposed, using two of the nine design principles, wayfinding and annotation? Another question of interest was: What are the relationships, if any, among the four dimensions of presence? The research utilized both quantitative and qualitative measures. Twenty learners recruited from the Graduate School of Education and Psychology at Pepperdine University carried out three assigned collaborative activities in Second Life under design conditions foregrounding each of the two design principles, and a combination of the two. Analyses of surveys, Second Life interactions, interviews, and a focus group were conducted to investigate how various designed learning environments based in the virtual world contributed to the sense of presence and to learners' ability to carry out collaborative learning. The major research findings were: (a) the construct appears robust, and future research in its application to other virtual worlds may be fruitful; (b) the experience of wayfinding (finding a path through a virtual space) resulted overall in an observed pattern of a slightly stronger sense of place; (c) the experience of annotation (building) resulted overall in an observed pattern of a slightly stronger sense of agency; and (d) there is a positive association between sense of place and sense of agency.

    Dynamic Scene Creation from Text

    Visual information is an integral part of our daily life and typically conveys more information than text alone. A visual depiction of a textual story, as an animation or video, provides a more engaging and realistic experience and can be used in a range of applications, including education, advertisement, crime scene investigation, forensic analysis of a crime, and the treatment of different types of mental and psychological disorders. Manual 3D scene creation is a time-consuming process that requires the expertise of individuals familiar with the content creation environment. Automatic scene generation from a textual description and a library of developed components offers a quick and easy alternative to manual scene creation and supports proof-of-concept ideas. In this thesis, we propose a scheme for extracting objects of interest and their spatial relationships from a user-provided textual description to create a 3D dynamic scene and animation, making it more realistic.
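    As a toy illustration of the extraction step, the sketch below pulls (object, relation, object) triples from a description so they could later be mapped onto assets from a 3D library; the pattern list and example sentence are placeholders, not the thesis's actual pipeline.

```python
# Toy sketch: extract objects of interest and their spatial relationships from
# a textual description. Each triple would drive placement of library assets.

import re

SPATIAL_RELATIONS = ["on", "under", "next to", "behind", "in front of"]

def extract_relations(text):
    """Return (object_a, relation, object_b) triples found in the text."""
    pattern = re.compile(
        r"(?:a|an|the)\s+(\w+)\s+(?:is|sits|stands|lies)\s+"
        r"(" + "|".join(re.escape(r) for r in SPATIAL_RELATIONS) + r")\s+"
        r"(?:a|an|the)\s+(\w+)",
        re.IGNORECASE,
    )
    return [(a.lower(), rel.lower(), b.lower())
            for a, rel, b in pattern.findall(text)]

description = "A lamp is on the table. The cat sits under the table."
for triple in extract_relations(description):
    print(triple)
```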