
    Affective interactions between expressive characters

    When people meet in virtual worlds they are represented by computer-animated characters that lack variety of expression and can seem stiff and robotic. By comparison, human bodies are highly expressive; casual observation of a group of people will reveal a large diversity of behaviour: different postures, gestures and complex patterns of eye gaze. In order to make computer-mediated communication between people more like real face-to-face communication, it is necessary to add an affective dimension. This paper presents Demeanour, an affective semi-autonomous system for the generation of realistic body language in avatars. Users control their avatars, which in turn interact autonomously with other avatars to produce expressive behaviour. This allows people to have affectively rich interactions via their avatars.
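    The semi-autonomous control split described above (the user steers the avatar at a high level while the avatar reacts to other avatars on its own) can be illustrated with a minimal sketch. The class, parameter and behaviour names below are hypothetical and are not taken from Demeanour itself.

```python
import random

class SemiAutonomousAvatar:
    """Illustrative only: the user sets a high-level affective stance,
    and the avatar autonomously picks concrete body language in response
    to whichever avatar it is interacting with."""

    def __init__(self, name: str, friendliness: float = 0.5):
        self.name = name
        self.friendliness = friendliness   # user-controlled affective setting

    def set_affect(self, friendliness: float) -> None:
        # Direct user control: only the high-level parameter is exposed.
        self.friendliness = friendliness

    def react_to(self, other: "SemiAutonomousAvatar") -> str:
        # Autonomous layer: choose body language from both avatars' settings.
        warmth = (self.friendliness + other.friendliness) / 2
        if warmth > 0.7:
            return random.choice(["lean_in", "smile", "mutual_gaze"])
        if warmth > 0.4:
            return random.choice(["neutral_posture", "brief_glance"])
        return random.choice(["turn_away", "cross_arms", "avert_gaze"])

alice = SemiAutonomousAvatar("alice", friendliness=0.9)
bob = SemiAutonomousAvatar("bob", friendliness=0.6)
print(alice.react_to(bob))
```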

    An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans

    Designing highly believable characters remains a major concern within digital games. Matching a chosen personality and other dramatic qualities to displayed behaviour is an important part of improving overall believability. Gaze is a critical component of social exchanges and serves to make characters engaging or aloof, as well as to establish a character's role in a conversation. In this paper, we investigate the communication of status-related social signals by means of a virtual human's eye gaze. We constructed a cross-domain verbal-conceptual computational model of gaze for virtual humans to facilitate the display of social status. We describe the validation of the model's parameters, including the length of eye contact and gazes, movement velocity, equilibrium response, and head and body posture. In a first set of studies, conducted on Amazon Mechanical Turk using prerecorded video clips of animated characters, we found statistically significant differences in how the characters' status was rated based on the variation in social status. In a second step, building on these empirical findings, we designed an interactive system that incorporates dynamic eye tracking and spoken dialogue, along with real-time control of a virtual character. We evaluated the model using a presential, interactive scenario of a simulated hiring interview. Corroborating our previous findings, the interactive study yielded significant differences in the perception of status (p = .046). We therefore believe status is an important aspect of dramatic believability, and this paper accordingly presents our social eye gaze model for realistic, procedurally animated characters and shows its efficacy. Index Terms—procedural animation, believable characters, virtual human, gaze, social interaction, nonverbal behaviour, video game
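    As a rough illustration of how status could be mapped onto the gaze parameters listed above (length of eye contact, movement velocity, equilibrium response, head and body posture), the sketch below interpolates between a low-status and a high-status parameter set. The field names and numeric values are assumptions made for illustration, not the calibrated values from the study.

```python
from dataclasses import dataclass, astuple

@dataclass
class GazeParams:
    """Hypothetical parameter bundle; names and values are illustrative."""
    eye_contact_s: float      # mean duration of mutual gaze, seconds
    gaze_aversion_s: float    # mean duration of looking away, seconds
    movement_velocity: float  # normalised eye/head movement speed (0..1)
    equilibrium_gain: float   # how strongly the agent restores mutual gaze
    head_pitch_deg: float     # posture offset (negative = head lowered)

def params_for_status(status: float) -> GazeParams:
    """Interpolate gaze behaviour between low (0.0) and high (1.0) status.
    High-status agents are sketched as holding gaze longer and averting
    less, purely to show the shape of such a mapping."""
    low = GazeParams(1.0, 3.0, 0.4, 0.3, -10.0)
    high = GazeParams(3.5, 1.0, 0.7, 0.8, 5.0)
    return GazeParams(*(a + (b - a) * status
                        for a, b in zip(astuple(low), astuple(high))))

print(params_for_status(0.8))
```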

    Customisation and Context for Expressive Behaviour in the Broadband World

    The introduction of consumer broadband makes it possible to have an emotionally much richer experience of the internet. One way of achieving this is the use of animated characters endowed with emotionally expressive behaviour. This paper describes Demeanour, a framework for generating expressive behaviour, developed collaboratively by University College London and BT plc. The focus of this paper is on two important aspects: the customisation of expressive behaviour and how expressive behaviour can be made context dependent. Customisation is a very popular feature of internet software, particularly as it allows users to present a specific identity to other users; the ability to customise behaviour will increase this sense of identity. Demeanour supports a number of user-friendly methods for customising behaviour, all of which use a character profile that ultimately controls the behaviour of the character. What counts as appropriate behaviour is highly dependent on context: where you are, who you are talking to, and whether you have a particular job or role. It is therefore very important that characters are able to exhibit different behaviours in different contexts. Demeanour allows characters to load different profiles in different contexts and therefore produce different behaviour.
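    A minimal sketch of the profile idea described above: a character holds one behaviour profile per context and switches profiles when the context changes. The profile fields and context names are hypothetical and do not reflect Demeanour's actual schema.

```python
# Hypothetical per-context behaviour profiles; each profile ultimately
# drives the character's generated body language.
PROFILES = {
    "default": {"posture": "neutral", "gesture_rate": 0.3, "gaze_contact": 0.5},
    "office":  {"posture": "upright", "gesture_rate": 0.2, "gaze_contact": 0.7},
    "social":  {"posture": "relaxed", "gesture_rate": 0.6, "gaze_contact": 0.5},
}

class Character:
    def __init__(self, name: str, profiles: dict):
        self.name = name
        self.profiles = profiles
        self.active = profiles["default"]

    def enter_context(self, context: str) -> None:
        # Load the profile associated with the new context, if one exists.
        self.active = self.profiles.get(context, self.profiles["default"])

    def behaviour(self) -> dict:
        # The active profile is what the behaviour generator would consume.
        return dict(self.active)

avatar = Character("alex", PROFILES)
avatar.enter_context("office")
print(avatar.behaviour())
```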

    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made in tackling this challenging task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how they are synthesised in computer graphics and robotics.

    E-Drama: Facilitating Online Role-play using an AI Actor and Emotionally Expressive Characters.

    This paper describes a multi-user role-playing environment, e-drama, which enables groups of people to converse online in scenario-driven virtual environments. The starting point of this research – e-drama – is a 2D graphical environment in which users are represented by static cartoon figures. An application has been developed to integrate the existing e-drama tool with several new components that support avatars with emotionally expressive behaviours, rendered in a 3D environment. The functionality includes the extraction of affect from open-ended improvisational text. The results of the affective analysis are then used to: (a) control an automated improvisational AI actor – EMMA (emotion, metaphor and affect) – that plays a bit-part character in the improvisation; and (b) drive the animations of avatars in the user interface using the Demeanour framework, so that they react bodily in ways that are consistent with the affect they are expressing. Finally, we describe user trials demonstrating that these changes improve the quality of social interaction and users' sense of presence. Moreover, the system has the potential to evolve normal classroom education for young people with or without learning disabilities by providing efficient, personalised social-skill, language and career training via role-play 24/7, and by offering automatic monitoring.
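    The affect-to-animation path described above can be sketched as a small pipeline: detect an affect label in a user's utterance, then select an avatar animation for it. The real system's affect extraction from open-ended improvisational text is far richer; the keyword lists, labels and animation names here are illustrative assumptions.

```python
# Illustrative sketch only: a keyword-based affect detector feeding an
# avatar animation choice.
AFFECT_KEYWORDS = {
    "angry": ["furious", "hate", "annoyed"],
    "happy": ["great", "love", "wonderful"],
    "sad":   ["miserable", "sorry", "alone"],
}

AFFECT_TO_ANIMATION = {
    "angry":   "clenched_posture",
    "happy":   "open_gesture",
    "sad":     "slumped_posture",
    "neutral": "idle",
}

def detect_affect(utterance: str) -> str:
    # Return the first affect whose cue words appear in the utterance.
    text = utterance.lower()
    for affect, cues in AFFECT_KEYWORDS.items():
        if any(cue in text for cue in cues):
            return affect
    return "neutral"

def animate_avatar(utterance: str) -> str:
    # Map the detected affect onto a body-language animation.
    return AFFECT_TO_ANIMATION[detect_affect(utterance)]

print(animate_avatar("I feel so alone in this scene"))  # -> slumped_posture
```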

    Lip syncing method for realistic expressive 3D face model

    Lip synchronization of 3D face models is now used in a multitude of important fields. It brings a more human, social and dramatic reality to computer games, films and interactive multimedia, and is growing in use and importance. A high level of realism is demanded in applications such as computer games and cinema, yet authoring lip syncing with complex and subtle expressions remains difficult and fraught with problems in terms of realism. This research proposes a lip syncing method for a realistic, expressive 3D face model. Animating lips requires a 3D face model capable of representing the myriad shapes the human face assumes during speech, together with a method to produce the correct lip shape at the correct time. The paper presents a 3D face model designed to support lip syncing aligned with an input audio file. It deforms using a Raised Cosine Deformation (RCD) function that is grafted onto the input facial geometry. The face model is based on the MPEG-4 Facial Animation (FA) standard. The paper proposes a method to animate the 3D face model over time, creating lip syncing from a canonical set of visemes covering all pairwise combinations of a reduced phoneme set called ProPhone. The proposed research integrates emotions, based on the Ekman model and Plutchik's wheel, with emotive eye movements by implementing the Emotional Eye Movements Markup Language (EEMML) to produce a realistic 3D face model. © 2017 Springer Science+Business Media New York
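    The raised-cosine deformation idea (a smooth, cosine-shaped falloff grafted onto the facial geometry around a deformation centre) can be sketched as below, with a tiny viseme table driving the lip shape per frame. This is a generic reconstruction under stated assumptions, not the paper's exact formulation; the viseme names, directions and amplitudes are hypothetical.

```python
import numpy as np

def raised_cosine_weight(dist: np.ndarray, radius: float) -> np.ndarray:
    """Raised-cosine falloff: 1 at the centre, 0 at and beyond the radius.
    A generic reconstruction of the RCD idea, not the paper's exact form."""
    return 0.5 * (1.0 + np.cos(np.pi * np.clip(dist / radius, 0.0, 1.0)))

def deform(vertices: np.ndarray, centre: np.ndarray,
           direction: np.ndarray, amplitude: float, radius: float) -> np.ndarray:
    """Displace vertices near `centre` along `direction`, scaled by the
    raised-cosine weight, to form the mouth shape for the current viseme."""
    dist = np.linalg.norm(vertices - centre, axis=1)
    w = raised_cosine_weight(dist, radius)
    return vertices + amplitude * w[:, None] * direction

# Hypothetical viseme table: amplitude and direction of the lip deformation.
VISEMES = {
    "AA": (0.8, np.array([0.0, -1.0, 0.0])),   # open jaw
    "OO": (0.5, np.array([0.0, -0.4, 0.6])),   # rounded, protruded lips
    "MM": (0.1, np.array([0.0,  0.0, 0.0])),   # closed lips
}

mouth = np.random.rand(200, 3)                 # stand-in for mouth-region vertices
amp, direction = VISEMES["AA"]
frame = deform(mouth, centre=np.array([0.5, 0.4, 0.5]),
               direction=direction, amplitude=amp, radius=0.3)
```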