
    An Animation Framework for Continuous Interaction with Reactive Virtual Humans

    We present a complete framework for the animation of Reactive Virtual Humans that offers a mixed animation paradigm: control of different body parts switches between keyframe animation, procedural animation, and physical simulation, depending on the requirements of the moment. The framework implements novel techniques to support real-time continuous interaction, and is demonstrated on our interactive Virtual Conductor.
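
    As a rough illustration of such a mixed paradigm, the sketch below switches control of a single body part between the three animation modes per frame. All class and function names here are hypothetical and not taken from the paper:

```python
from enum import Enum, auto

class Mode(Enum):
    KEYFRAME = auto()     # pre-authored animation clips
    PROCEDURAL = auto()   # poses computed on the fly (e.g. conducting gestures)
    PHYSICS = auto()      # poses produced by physical simulation

class BodyPartController:
    """Drives one body part; the owning paradigm may change at any moment."""

    def __init__(self, name, mode=Mode.KEYFRAME):
        self.name = name
        self.mode = mode

    def update(self, t, clip=None, generator=None, simulator=None):
        # Dispatch to whichever paradigm currently owns this body part.
        if self.mode is Mode.KEYFRAME:
            return clip.sample(t)               # look up the authored pose
        if self.mode is Mode.PROCEDURAL:
            return generator(t)                 # synthesise a pose
        return simulator.step(self.name, t)     # let physics respond

# Example: the arm gestures procedurally while other parts could play clips.
arm = BodyPartController("arm", Mode.PROCEDURAL)
pose = arm.update(0.5, generator=lambda t: {"elbow": 30.0 * t})
print(pose)
```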

    Speech-driven Animation with Meaningful Behaviors

    Conversational agents (CAs) play an important role in human-computer interaction. Creating believable movements for CAs is challenging, since the movements have to be meaningful and natural, reflecting the coupling between gestures and speech. Past studies have mainly relied on rule-based or data-driven approaches. Rule-based methods focus on creating meaningful behaviors that convey the underlying message, but the gestures cannot be easily synchronized with speech. Data-driven approaches, especially speech-driven models, can capture the relationship between speech and gestures, but they create behaviors that disregard the meaning of the message. This study proposes to bridge the gap between these two approaches, overcoming their limitations. The approach builds a dynamic Bayesian network (DBN) in which a discrete variable is added to condition the behaviors on an underlying constraint. The study implements and evaluates the approach with two constraints: discourse functions and prototypical behaviors. By conditioning on discourse functions (e.g., questions), the model learns the characteristic behaviors associated with a given discourse class, deriving the rules from the data. By conditioning on prototypical behaviors (e.g., head nods), the approach can be embedded in a rule-based system as a behavior realizer, creating trajectories that are tightly synchronized with speech. The study proposes a DBN structure and a training approach that (1) model the cause-effect relationship between the constraint and the gestures, (2) initialize the state configuration models, increasing the range of the generated behaviors, and (3) capture the differences in behaviors across constraints by enforcing sparse transitions between shared and exclusive states per constraint. Objective and subjective evaluations demonstrate the benefits of the proposed approach over an unconstrained model.
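
    As a toy illustration of the sparse-transition idea (not the paper's trained DBN; the state counts and constraint labels below are invented), one can forbid transitions into another constraint's exclusive hidden states while keeping the shared states open to all:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: six hidden gesture states; states 0-3 are shared across
# constraints, state 4 is exclusive to "question", state 5 to "head_nod".
N_STATES = 6
EXCLUSIVE = {"question": 4, "head_nod": 5}

def sparse_transitions(constraint):
    """Transition matrix that keeps shared states open to every constraint
    but blocks the other constraints' exclusive states."""
    A = rng.random((N_STATES, N_STATES))
    for c, s in EXCLUSIVE.items():
        if c != constraint:
            A[:, s] = 0.0                      # never enter a foreign exclusive state
    return A / A.sum(axis=1, keepdims=True)    # rows become proper distributions

def sample_states(constraint, T=20):
    """Sample a hidden-state trajectory conditioned on the discrete constraint."""
    A = sparse_transitions(constraint)
    states = [0]                               # start in a shared state
    for _ in range(T - 1):
        states.append(int(rng.choice(N_STATES, p=A[states[-1]])))
    return states

print(sample_states("question"))   # may visit state 4, never state 5
```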

    Artificial Companion: building an impacting relation

    In this paper we show that we are witnessing an evolution from traditional human-computer interaction to a kind of intense exchange between the human user and a new generation of virtual or real systems, Embodied Conversational Agents (ECAs) or affective robots, that brings the interaction to another level, the "relation level". We call these systems "companions", that is to say, systems with which the user wants to build a kind of life-long relationship. We thus argue that we need to go beyond the concepts of acceptability and believability of a system, get closer to the human, and look toward the concept of "impact". We show that this problem is shared between the research communities working on Embodied Conversational Agents (ECAs) and on affective robotics. We put forward a definition of an "impacting relation" that will enable believable interactive ECAs or robots to become believable impacting companions.

    A Cognitive and Affective Architecture for Social Human-Robot Interaction

    Robots appear frequently in new applications in our daily lives, where they interact ever more closely with the human user. Despite a long history of research, existing cognitive architectures are still too generic, and hence not tailored enough to meet the specific needs of social HRI. In particular, interaction-oriented architectures must handle emotions, language, social norms, and more. In this paper, we present an overview of CAIO, a Cognitive and Affective Interaction-Oriented Architecture for social human-robot interaction. The architecture parallels the BDI (Belief, Desire, Intention) model, which originates in Bratman's philosophy of action, and integrates complex emotions and planning techniques. It aims to contribute to cognitive architectures for HRI by enabling the robot to reason on the mental states (including emotions) of its interlocutors, and to act physically, emotionally, and verbally.
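
    The abstract does not spell out CAIO's internals, but a generic BDI loop extended with an appraisal step, which is the general shape such architectures take, can be sketched as follows (all rules and names here are placeholders, not CAIO's actual components):

```python
from dataclasses import dataclass, field

@dataclass
class MentalState:
    beliefs: set = field(default_factory=set)
    desires: set = field(default_factory=set)
    intentions: list = field(default_factory=list)
    emotions: dict = field(default_factory=dict)   # e.g. {"joy": 0.7}

def appraise(state, event):
    """Toy appraisal rule: an event that satisfies a desire raises joy."""
    if event in state.desires:
        state.emotions["joy"] = min(1.0, state.emotions.get("joy", 0.0) + 0.5)

def deliberate(state):
    """BDI step: adopt as intentions the desires believed to be achievable."""
    for d in state.desires:
        if ("achievable", d) in state.beliefs and d not in state.intentions:
            state.intentions.append(d)

state = MentalState(beliefs={("achievable", "greet_user")},
                    desires={"greet_user"})
appraise(state, "greet_user")    # perception/appraisal stage
deliberate(state)                # deliberation stage
print(state.intentions, state.emotions)
```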

    From Thalamus to Skene: High-level behaviour planning and managing for mixed-reality characters


    Look me in the eyes: A survey of eye and gaze animation for virtual agents and artificial systems

    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: "The face is the portrait of the mind; the eyes, its informers." This presents a huge challenge for computer graphics researchers in the generation of artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human-human interactions. This State of the Art Report provides an overview of the efforts made in tackling this challenging task. As with many topics in computer graphics, a cross-disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We discuss the movement of the eyeballs, eyelids, and head from a physiological perspective, and how these movements can be modelled, rendered, and animated in computer graphics applications. Further, we present recent research from psychology and sociology that seeks to understand higher-level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation, and how these behaviours are synthesised in computer graphics and robotics.
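
    One concrete building block covered by such surveys is the saccadic "main sequence", the empirical relationship between a saccade's amplitude, duration, and peak velocity. A minimal sketch, using typical textbook constants rather than values taken from this particular report:

```python
import math

def saccade_duration_ms(amplitude_deg):
    """Linear 'main sequence' fit: roughly 2.2 ms per degree plus a ~21 ms
    intercept (typical values; individual studies vary)."""
    return 2.2 * amplitude_deg + 21.0

def saccade_peak_velocity(amplitude_deg, v_max=500.0, c=14.0):
    """Peak velocity (deg/s) saturates softly as amplitude grows."""
    return v_max * (1.0 - math.exp(-amplitude_deg / c))

for a in (2.0, 10.0, 30.0):
    print(f"{a:5.1f} deg -> {saccade_duration_ms(a):5.1f} ms, "
          f"{saccade_peak_velocity(a):6.1f} deg/s")
```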

    Fully generated scripted dialogue for embodied agents

    This paper presents the NECA approach to the generation of dialogues between Embodied Conversational Agents (ECAs). This approach consists of the automated construction of an abstract script for an entire dialogue (cast in terms of dialogue acts), which is incrementally enhanced by a series of modules and finally "performed" by means of text, speech, and body language, by a cast of ECAs. The approach makes it possible to automatically produce a large variety of highly expressive dialogues, some of whose essential properties are under the control of a user. The paper discusses the advantages and disadvantages of NECA's approach to Fully Generated Scripted Dialogue (FGSD), and explains the main techniques used in the two demonstrators that were built. The paper can be read as a survey of issues and techniques in the construction of ECAs, focusing on the generation of behaviour (i.e., on information presentation) rather than on interpretation.
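
    A schematic of the FGSD pipeline idea: an abstract script of dialogue acts flows through enrichment modules that each add a layer before performance. The module names and dialogue-act types below are invented for illustration and are not NECA's actual modules:

```python
# Each module enriches a dialogue act in place; further modules
# (prosody, gesture timing, rendering) would follow in the same way.

def add_text(act):
    canned = {"greet": "Hello there!", "inform": "This model has airbags."}
    act["text"] = canned.get(act["type"], "...")

def add_gesture(act):
    act["gesture"] = "wave" if act["type"] == "greet" else "beat"

PIPELINE = [add_text, add_gesture]

# Abstract script cast in terms of dialogue acts, as in the FGSD idea.
script = [{"speaker": "agent_a", "type": "greet"},
          {"speaker": "agent_b", "type": "inform"}]

for act in script:
    for module in PIPELINE:
        module(act)

print(script)   # acts now carry text and body-language layers
```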

    Agents United: An open platform for multi-agent conversational systems

    The development of applications with intelligent virtual agents (IVAs) often requires the integration of multiple complex components. In this article we present the Agents United Platform: an open-source platform that researchers and developers can use as a starting point for setting up their own multi-IVA applications. The platform provides developers with a set of integrated components in a sense-remember-think-act architecture: a sensor framework, a memory component, a Topic Selection Engine, an interaction manager (Flipper), two dialogue execution engines, and two behaviour realisers (ASAP and GRETA) whose agents can seamlessly interact with each other. This article discusses the platform and its individual components, highlights some of the novelties that arise from the integration of the components, and elaborates on directions for future work.
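
    A schematic sense-remember-think-act loop in the spirit of this architecture; the classes below are stand-ins for illustration, not the platform's actual API:

```python
class Sensors:                 # stand-in for the sensor framework
    def read(self):
        return {"user_said": "hello"}

class Memory:                  # stand-in for the memory component
    def __init__(self):
        self.events = []
    def remember(self, obs):
        self.events.append(obs)

class InteractionManager:      # stand-in for a dialogue manager (cf. Flipper)
    def decide(self, obs, memory):
        return {"say": "Hi!", "gesture": "nod"}

class Realiser:                # stand-in for a behaviour realiser (cf. ASAP/GRETA)
    def perform(self, behaviour):
        print("performing:", behaviour)

sensors, memory = Sensors(), Memory()
manager, realiser = InteractionManager(), Realiser()

obs = sensors.read()                       # sense
memory.remember(obs)                       # remember
behaviour = manager.decide(obs, memory)    # think
realiser.perform(behaviour)                # act
```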