896 research outputs found
RRL: A Rich Representation Language for the Description of Agent Behaviour in NECA
In this paper, we describe the Rich Representation Language (RRL) which is used in the NECA system. The NECA system generates interactions between two or more animated characters. The RRL is a formal framework for representing the information that is exchanged at the interfaces between the various NECA system modules
Considerations for believable emotional facial expression animation
Facial expressions can be used to communicate emotional states through the use of universal signifiers within key regions of the face. Psychology research has identified what these signifiers are and how different combinations and variations can be interpreted. Research into expressions has informed animation practice, but as yet very little is known about the movement within and between emotional expressions. A better understanding of sequence, timing, and duration could better inform the production of believable animation. This paper introduces the idea of expression choreography, and how tests of observer perception might enhance our understanding of moving emotional expressions
Artimate: an articulatory animation framework for audiovisual speech synthesis
We present a modular framework for articulatory animation synthesis using
speech motion capture data obtained with electromagnetic articulography (EMA).
Adapting a skeletal animation approach, the articulatory motion data is applied
to a three-dimensional (3D) model of the vocal tract, creating a portable
resource that can be integrated in an audiovisual (AV) speech synthesis
platform to provide realistic animation of the tongue and teeth for a virtual
character. The framework also provides an interface to articulatory animation
synthesis, as well as an example application to illustrate its use with a 3D
game engine. We rely on cross-platform, open-source software and open standards
to provide a lightweight, accessible, and portable workflow.
Comment: Workshop on Innovation and Applications in Speech Technology (2012)
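The abstract above says the EMA motion data is applied to a 3D vocal-tract model by "adapting a skeletal animation approach". The framework's actual code is not shown; as a minimal sketch of the underlying technique, here is linear blend skinning in pure Python (2-D for brevity), where a vertex follows a weighted mix of rigid bone transforms. All names and numbers are illustrative.

```python
import math

def skin_vertex(v, bones, weights):
    """Linear blend skinning for one 2-D vertex.

    bones   -- list of (angle_radians, tx, ty) rigid transforms
    weights -- one blend weight per bone, summing to 1
    """
    x = y = 0.0
    for (a, tx, ty), w in zip(bones, weights):
        # Rotate the rest-pose vertex by the bone, then translate.
        rx = math.cos(a) * v[0] - math.sin(a) * v[1] + tx
        ry = math.sin(a) * v[0] + math.cos(a) * v[1] + ty
        x += w * rx
        y += w * ry
    return (x, y)

# A hypothetical tongue vertex pulled 70/30 between two articulator bones:
# one translated by (0.5, 0), one rotated 90 degrees about the origin.
frame = skin_vertex((1.0, 0.0),
                    bones=[(0.0, 0.5, 0.0), (math.pi / 2, 0.0, 0.0)],
                    weights=[0.7, 0.3])
```

In a setup like Artimate's, the per-frame bone transforms would be driven by the EMA coil trajectories rather than hand-written constants.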
3D performance capture for facial animation
This work describes how a photogrammetry-based 3D capture system can be used as an input device for animation. The 3D Dynamic Capture System is used to capture the motion of a human face, which is extracted from a sequence of 3D models captured at TV frame rate. Initially, the positions of a set of landmarks on the face are extracted. These landmarks are then used to provide motion data in two different ways. First, a high-level description of the movements is extracted, which can be used as input to a procedural animation package (i.e. CreaToon). Second, the landmarks can be used as registration points for a conformation process in which the model to be animated is modified to match the captured model. This approach gives a new sequence of models, which have the structure of the drawn model but the movement of the captured sequence
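The conformation step above moves model vertices so that they match captured landmark positions. The paper's own algorithm is not given in the abstract; one simple way to realise such a warp is inverse-distance weighting of the landmark displacements, sketched below in 2-D. Function and variable names are illustrative.

```python
def conform(vertices, src_landmarks, dst_landmarks, power=2.0):
    """Warp model vertices so src landmarks move toward dst landmarks.

    Each vertex is displaced by an inverse-distance-weighted blend of
    the landmark displacements (dst - src).
    """
    warped = []
    for vx, vy in vertices:
        num_x = num_y = den = 0.0
        for (sx, sy), (dx, dy) in zip(src_landmarks, dst_landmarks):
            d2 = (vx - sx) ** 2 + (vy - sy) ** 2
            if d2 == 0.0:                 # vertex sits exactly on a landmark
                num_x, num_y, den = dx - sx, dy - sy, 1.0
                break
            w = 1.0 / d2 ** (power / 2.0)
            num_x += w * (dx - sx)
            num_y += w * (dy - sy)
            den += w
        warped.append((vx + num_x / den, vy + num_y / den))
    return warped

# A vertex lying on a landmark follows that landmark's motion exactly.
out = conform([(0.0, 0.0)], [(0.0, 0.0)], [(0.2, 0.0)])
```

In practice, production conformation tools use smoother interpolants (e.g. radial basis functions), but the registration-points idea is the same.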
Socially communicative characters for interactive applications
Interactive Face Animation - Comprehensive Environment (iFACE) is a general-purpose software framework
that encapsulates the functionality of “face multimedia object” for a variety of interactive applications such as
games and online services. iFACE exposes programming interfaces and provides authoring and scripting tools to
design a face object, define its behaviours, and animate it through static or interactive situations. The framework
is based on four parameterized spaces of Geometry, Mood, Personality, and Knowledge that together form the
appearance and behaviour of the face object. iFACE can function as a common “face engine” for design and runtime
environments to simplify the work of content and software developers
Multispace behavioral model for face-based affective social agents
This paper describes a behavioral model for affective social agents based on three independent but interacting parameter spaces:
knowledge, personality, and mood. These spaces control a lower-level geometry space that provides parameters at the facial feature
level. Personality and mood use findings in behavioral psychology to relate the perception of personality types and emotional
states to the facial actions and expressions through two-dimensional models for personality and emotion. Knowledge encapsulates
the tasks to be performed and the decision-making process using a specially designed XML-based language. While the geometry
space provides an MPEG-4 compatible set of parameters for low-level control, the behavioral extensions available through the
triple spaces provide flexible means of designing complicated personality types, facial expression, and dynamic interactive scenarios
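The abstract describes mood as a two-dimensional space (in behavioral psychology these axes are commonly valence and arousal) that drives facial expressions. As a hedged sketch of how such a mapping could work, the snippet below places a few basis expressions on the 2-D plane and blends them by distance; the expression coordinates and the Gaussian falloff are illustrative assumptions, not the paper's model.

```python
import math

# Hypothetical basis expressions positioned in (valence, arousal) space.
EXPRESSIONS = {
    "joy":     (0.8,  0.5),
    "anger":   (-0.6, 0.7),
    "sadness": (-0.7, -0.5),
}

def mood_to_weights(valence, arousal):
    """Blend weights fall off smoothly with distance from each basis point."""
    raw = {}
    for name, (v, a) in EXPRESSIONS.items():
        d = math.hypot(valence - v, arousal - a)
        raw[name] = math.exp(-d * d)          # Gaussian falloff
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

# A mood point sitting exactly on "joy" should weight it most heavily.
weights = mood_to_weights(0.8, 0.5)
```

The resulting weights could then scale MPEG-4-style facial feature parameters in the lower-level geometry space.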
Face modeling and animation language for MPEG-4 XMT framework
This paper proposes FML, an XML-based face modeling and animation language. FML provides a structured content description method for multimedia presentations based on face animation. The language can be used as direct input to compatible players, or be compiled within the MPEG-4 XMT framework to create MPEG-4 presentations. The language allows parallel and sequential action description, decision-making and dynamic event-based scenarios, model configuration, and behavioral template definition. Facial actions include talking, expressions, head movements, and low-level MPEG-4 FAPs. The ShowFace and iFACE animation frameworks are also reviewed as example FML-based animation systems
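To make the "parallel and sequential action description" concrete, here is a small Python sketch that assembles an FML-like document with the standard library. The element names (`seq`, `par`, `talk`, `expr`, `hdmv`) are guesses based on the capabilities listed in the abstract; consult the FML specification for the real vocabulary.

```python
import xml.etree.ElementTree as ET

# Illustrative FML-like document: a talk action and a smile run in
# parallel, followed sequentially by a head nod.
fml = ET.Element("fml")
seq = ET.SubElement(fml, "seq")          # children run one after another
par = ET.SubElement(seq, "par")          # children overlap in time
ET.SubElement(par, "talk").text = "Hello!"
ET.SubElement(par, "expr", {"type": "smile", "value": "60"})
ET.SubElement(seq, "hdmv", {"dir": "nod"})

doc = ET.tostring(fml, encoding="unicode")
```

A compatible player (or an XMT compiler stage) would walk this tree and schedule the actions accordingly.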
Lip syncing method for realistic expressive 3D face model
Lip synchronization of 3D face models is now used in a multitude of important fields. It brings a more human, social and dramatic reality to computer games, films and interactive multimedia, and is growing in use and importance. Demanding applications such as computer games and cinema call for a high level of realism. Authoring lip syncing with complex and subtle expressions is still difficult and fraught with problems in terms of realism. This research proposed a lip syncing method for a realistic expressive 3D face model. Animating lips requires a 3D face model capable of representing the myriad shapes the human face assumes during speech, and a method to produce the correct lip shape at the correct time. The paper presented a 3D face model designed to support lip syncing that aligns with an input audio file. It deforms using a Raised Cosine Deformation (RCD) function that is grafted onto the input facial geometry. The face model was based on the MPEG-4 Facial Animation (FA) standard. This paper proposed a method to animate the 3D face model over time to create animated lip syncing, using a canonical set of visemes for all pairwise combinations of a reduced phoneme set called ProPhone. The proposed research integrated emotions by considering the Ekman model and Plutchik’s wheel, with emotive eye movements implemented via the Emotional Eye Movements Markup Language (EEMML), to produce a realistic 3D face model. © 2017 Springer Science+Business Media New York
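The abstract names a Raised Cosine Deformation (RCD) function but does not give its formula. One plausible reading, sketched below as an assumption rather than the paper's exact definition, is a deformation weight that falls smoothly from 1 at the deformation centre to 0 at an influence radius, following a raised-cosine profile.

```python
import math

def rcd_weight(d, radius):
    """Raised-cosine falloff: 1 at d=0, 0 at d>=radius, smooth in between.

    A vertex at distance d from the deformation centre would move by
    its displacement target scaled by this weight.
    """
    if d >= radius:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * d / radius))

w_centre   = rcd_weight(0.0, 1.0)   # full deformation at the centre
w_boundary = rcd_weight(1.0, 1.0)   # no deformation at the boundary
```

Grafting such a function onto arbitrary input geometry only requires per-vertex distances to the deformation centre, which may be why the method is portable across face models.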
Socially expressive communication agents: A face-centric approach
Interactive Face Animation - Comprehensive Environment (iFACE) is a general purpose
software framework that encapsulates the functionality of “face multimedia object”.
iFACE exposes programming interfaces and provides authoring and scripting tools to design a
face object, define its behaviors, and animate it through static or interactive situations. The
framework is based on four parameterized spaces of Geometry, Mood, Personality, and
Knowledge that together form the appearance and behavior of the face object. iFACE
capabilities are demonstrated within the context of some artistic and educational projects
- …