
    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers generating artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interaction. This State of the Art Report provides an overview of the efforts made to tackle this challenging task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand how the eye transmits information to an observer. We discuss the movement of the eyeballs, eyelids, and head from a physiological perspective and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how they are synthesised in computer graphics and robotics.

    Fully generated scripted dialogue for embodied agents

    This paper presents the NECA approach to the generation of dialogues between Embodied Conversational Agents (ECAs). This approach consists of the automated construction of an abstract script for an entire dialogue (cast in terms of dialogue acts), which is incrementally enhanced by a series of modules and finally "performed" by means of text, speech, and body language by a cast of ECAs. The approach makes it possible to automatically produce a large variety of highly expressive dialogues, some of whose essential properties are under the control of a user. The paper discusses the advantages and disadvantages of NECA's approach to Fully Generated Scripted Dialogue (FGSD) and explains the main techniques used in the two demonstrators that were built. The paper can be read as a survey of issues and techniques in the construction of ECAs, focusing on the generation of behaviour (i.e., on information presentation) rather than on interpretation.

    Semi-Autonomous Avatars: A New Direction for Expressive User Embodiment

    Computer animated characters are rapidly becoming a regular part of our lives. They are starting to take the place of actors in films and television and are now an integral part of most computer games. Perhaps most interestingly, in online games and chat rooms they represent the user visually in the form of avatars, becoming our online identities, our embodiments in a virtual world. Currently, online environments such as "Second Life" are being taken up by people who would not traditionally have considered playing games, largely due to a greater emphasis on social interaction. These environments require avatars that are more expressive and that can make online social interactions feel more like face-to-face conversations. Computer animated characters come in many different forms. Film characters require a substantial amount of offline animator effort to achieve high levels of quality; these techniques are not suitable for real-time applications and are not the focus of this chapter. Non-player characters (typically the bad guys) in games use limited artificial intelligence to react autonomously to events in real time. Avatars, however, are completely controlled by their users, reacting to events solely through user commands. This chapter discusses the distinction between fully autonomous characters and completely controlled avatars, and why this differentiation may no longer be useful, given that avatar technology may need to include more autonomy to live up to the demands of mass appeal. We first discuss the two categories and present reasons to combine them. We then describe previous work in this area and finally present our own framework for semi-autonomous avatars.

    Audio-driven Robot Upper-body Motion Synthesis

    Body language is an important aspect of human communication, which an effective human-robot interaction interface should mimic well. Currently available robotic platforms are limited in their ability to automatically generate behaviours that align with their speech. In this paper, we developed a neural-network-based system that takes audio from a user as input and generates the user's upper-body gestures, including head, hand, and hip movements, on a humanoid robot, namely Softbank Robotics' Pepper. The developed system was evaluated quantitatively as well as qualitatively, using web surveys, when driven by natural speech and by synthetic speech. In particular, we compared the impact of generic and person-specific neural network models on the quality of the synthesised movements. We further investigated the relationships between the quantitative and qualitative evaluations and examined how the speaker's personality traits affect the synthesised movements.

    Comparing and Evaluating Real Time Character Engines for Virtual Environments

    As animated characters increasingly become vital parts of virtual environments, the engines that drive these characters become equally vital parts of virtual environment software. This paper gives an overview of the state of the art in character engines and proposes a taxonomy of the features commonly found in them. This taxonomy can be used as a tool for comparing and evaluating different engines. To demonstrate this, we use it to compare three engines. The first is Cal3D, the most commonly used open-source engine. We also introduce two engines created by the authors, Piavca and HALCA. The paper ends with a brief discussion of some other popular engines.

    Emotional avatars


    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers presented were carefully reviewed and selected from the submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.