
    Implementing Expressive Gesture Synthesis for Embodied Conversational Agents

    We aim to create an expressive Embodied Conversational Agent (ECA) and address the problem of synthesizing expressive agent gestures. In our previous work, we described the gesture selection process. In this paper, we present a computational model of gesture quality: once a certain gesture has been chosen for execution, how can we modify it to carry a desired expressive content while retaining its original semantics? We characterize bodily expressivity with a small set of dimensions derived from a review of the psychology literature. We provide a detailed description of the implementation of these dimensions in our animation system, including our gesture modeling language. We also demonstrate animations with different expressivity settings in our existing ECA system. Finally, we describe two user studies we undertook to evaluate the appropriateness of our implementation for each dimension of expressivity, as well as the potential of combining these dimensions to create expressive gestures that reflect communicative intent.
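    The abstract does not specify how expressivity values map to motion, so the following is a minimal sketch, assuming two commonly cited dimensions (spatial extent and temporal extent) and simple linear scaling of gesture keyframes; the parameter names and scaling rules are illustrative assumptions, not the paper's formulas.

    ```python
    # Sketch: modify a gesture's quality while keeping its shape (semantics).
    # Scaling rules below are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Expressivity:
        spatial_extent: float = 0.0   # -1 (contracted) .. +1 (expanded)
        temporal_extent: float = 0.0  # -1 (slow) .. +1 (fast)

    @dataclass
    class Keyframe:
        t: float       # time in seconds
        wrist: tuple   # (x, y, z) offset from a rest position

    def apply_expressivity(frames, e):
        """Rescale keyframes: same trajectory shape, different quality."""
        amp = 1.0 + 0.5 * e.spatial_extent     # widen or narrow the stroke
        speed = 1.0 + 0.5 * e.temporal_extent  # speed up or slow down
        return [Keyframe(t=f.t / speed,
                         wrist=tuple(c * amp for c in f.wrist))
                for f in frames]

    frames = [Keyframe(0.0, (0.0, 0.0, 0.0)), Keyframe(0.5, (0.2, 0.3, 0.1))]
    wide_and_fast = apply_expressivity(frames, Expressivity(0.8, 0.5))
    ```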

    Implementing distinctive behavior for conversational agents

    We aim to define conversational agents that exhibit distinctive behavior. To this end, we provide a small set of parameters with which one can define behavior profiles, and then leave to the system the task of animating the agents. Our approach is to manipulate the behavior tendency of the agents depending on their communicative intention and emotional state. In this paper we define the concepts of Baseline and Dynamicline. The Baseline of an agent is a set of fixed parameters that represent the agent's personalized behavior, while the Dynamicline is a set of parameters derived both from the Baseline and from the current communicative intention and emotional state. © 2009 Springer Berlin Heidelberg.
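    As a rough illustration of the Baseline/Dynamicline split, here is a minimal sketch in which the Dynamicline is derived by additively modulating the Baseline and clamping the result; the parameter names and the additive rule are assumptions, since the paper defines the actual combination.

    ```python
    # Sketch: fixed behavior profile (Baseline) + momentary modulation
    # from intention/emotion -> momentary parameters (Dynamicline).
    def compute_dynamicline(baseline: dict, modulation: dict) -> dict:
        """Combine the agent's fixed profile with the modulation implied by
        its current communicative intention and emotional state."""
        return {name: max(-1.0, min(1.0, base + modulation.get(name, 0.0)))
                for name, base in baseline.items()}

    baseline = {"spatial_extent": 0.2, "temporal_extent": -0.1, "power": 0.0}
    # e.g., an angry emphasis might raise power and speed:
    dynamicline = compute_dynamicline(baseline,
                                      {"power": 0.6, "temporal_extent": 0.4})
    ```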

    Perception of Emotions from Static Postures


    Improving the Believability of Virtual Characters Using Qualitative Gesture Analysis

    This paper describes preliminary results of research performed within the framework of the Enactive project (EU IST NoE Enactive). The aim of this research is to improve the believability of a virtual character using qualitative analysis of gesture. Using techniques developed for human gesture analysis, we show that it is possible to extract high-level motion features from reconstructed motion and to compare them with the same features extracted from the corresponding real motions. Moreover, this method allows us to evaluate whether the virtual character conveys the same high-level expressive content as the real motion does, and makes it possible to compare different rendering techniques in order to assess which one better preserves such information.
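    A minimal sketch of the compare-features idea, assuming quantity of motion as the high-level feature and a relative error as the similarity measure; neither choice is taken from the paper, both are stand-ins for whatever the project's actual pipeline uses.

    ```python
    # Sketch: extract one expressive feature from real and reconstructed
    # motion, then measure how much of it the rendering preserves.
    import math

    def quantity_of_motion(traj, dt):
        """Per-step sum of displacements of all tracked points."""
        return [sum(math.dist(p, q) for p, q in zip(prev, cur)) / dt
                for prev, cur in zip(traj, traj[1:])]

    def relative_error(real, virtual):
        """Fraction of the real feature profile lost in the virtual one."""
        num = sum(abs(r - v) for r, v in zip(real, virtual))
        den = sum(abs(r) for r in real) or 1.0
        return num / den

    # traj_*: lists of frames, each frame a list of (x, y) marker positions.
    traj_real = [[(0.0, 0.0), (1.0, 0.0)], [(0.1, 0.0), (1.0, 0.2)]]
    traj_virtual = [[(0.0, 0.0), (1.0, 0.0)], [(0.05, 0.0), (1.0, 0.1)]]
    print(relative_error(quantity_of_motion(traj_real, 0.04),
                         quantity_of_motion(traj_virtual, 0.04)))
    ```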

    Multimodal sensing, interpretation and copying of movements by a virtual agent

    We present a scenario whereby an agent senses, interprets and copies a range of facial and gesture expressions from a person in the real world. Input is obtained via a video camera and processed initially using computer vision techniques. It is then processed further in a framework for agent perception, planning and behaviour generation in order to perceive, interpret and copy a number of gestures and facial expressions corresponding to those made by the human. By perceive, we mean that the copied behaviour may not be an exact duplicate of the behaviour made by the human and sensed by the agent, but may rather be based on some level of interpretation of the behaviour. Thus, the copied behaviour may be altered and need not share all of the characteristics of the original made by the human.
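    The following sketch illustrates interpretation-based copying in the spirit of the abstract: raw sensing output is reduced to a symbolic description, and the agent regenerates the behaviour from its own repertoire rather than replaying the raw motion. All labels, thresholds, and the repertoire interface are hypothetical.

    ```python
    # Sketch: sense -> interpret (symbolic label) -> regenerate, so the copy
    # reflects an interpretation, not an exact duplicate of the input.
    def interpret(sensed):
        """Reduce raw vision measurements to a symbolic description."""
        if sensed.get("mouth_corner_raise", 0.0) > 0.5:
            return {"behaviour": "smile",
                    "intensity": sensed["mouth_corner_raise"]}
        if sensed.get("hand_speed", 0.0) > 0.8:
            return {"behaviour": "wave", "intensity": 1.0}
        return {"behaviour": "idle", "intensity": 0.0}

    def copy_behaviour(description, repertoire):
        """Play the agent's own animation for the interpreted behaviour."""
        return repertoire[description["behaviour"]](description["intensity"])

    # Hypothetical repertoire: behaviour label -> animation factory.
    repertoire = {"smile": lambda i: f"smile(intensity={i:.2f})",
                  "wave":  lambda i: f"wave(intensity={i:.2f})",
                  "idle":  lambda i: "idle()"}
    print(copy_behaviour(interpret({"mouth_corner_raise": 0.7}), repertoire))
    ```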

    A listening agent exhibiting variable behaviour

    Within the Sensitive Artificial Listening Agent project, we propose a system that computes the behaviour of a listening agent. Such an agent must exhibit behaviour variations depending not only on its mental state towards the interaction (e.g., whether or not it agrees with the speaker) but also on the agent's characteristics, such as its emotional traits and its behaviour style. Our system computes the behaviour of the listening agent in real time. © 2008 Springer-Verlag Berlin Heidelberg.
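    As a rough sketch of behaviour variation in a listener, the function below picks a backchannel signal from the agent's mental state (agreement) and a style trait (expressivity); the states, signals, and probabilities are invented for illustration and are not the system's actual model.

    ```python
    # Sketch: listener behaviour varies with mental state and behaviour style.
    import random

    def select_backchannel(agrees, expressivity):
        """Pick a listener signal; expressivity in [0, 1] is a style trait
        that scales how much behaviour the agent shows."""
        if random.random() > 0.3 + 0.5 * expressivity:
            return None  # stay still at this opportunity
        if agrees:
            return {"signal": "head_nod",
                    "amplitude": 0.4 + 0.6 * expressivity}
        return {"signal": "frown", "amplitude": 0.3 + 0.7 * expressivity}

    # Would be called whenever the speech analysis flags a backchannel
    # opportunity, e.g., a pause or a pitch drop in the speaker's voice.
    print(select_backchannel(agrees=True, expressivity=0.8))
    ```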