
    ANGELICA: choice of output modality in an embodied agent

    The ANGELICA project addresses the problem of modality choice in information presentation by embodied, humanlike agents. The output modalities available to such agents include both language and various nonverbal signals such as pointing and gesturing. For each piece of information to be presented by the agent, it must be decided whether it should be expressed using language, a nonverbal signal, or both. In the ANGELICA project a model of the different factors influencing this choice will be developed and integrated into a natural language generation system. The application domain is the presentation of route descriptions by an embodied agent in a 3D environment. Evaluation and testing form an integral part of the project. In particular, we will investigate the effect of different modality choices on the effectiveness and naturalness of the generated presentations and on the user's perception of the agent's personality.
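
    The following is a minimal Python sketch of the modality-choice decision described in this abstract, not the ANGELICA model itself; the factors (spatial content, referent visibility, ambiguity) and the threshold are illustrative assumptions.

```python
# Minimal sketch of a modality-choice rule for route descriptions.
# Factor names and the 0.5 threshold are illustrative assumptions,
# not the ANGELICA model.
from dataclasses import dataclass
from enum import Enum


class Modality(Enum):
    LANGUAGE = "language"
    GESTURE = "gesture"
    BOTH = "both"


@dataclass
class InfoItem:
    content: str            # e.g. "turn left at the fountain"
    is_spatial: bool        # spatial information often benefits from pointing
    referent_visible: bool  # pointing only helps if the referent is in view
    ambiguity: float        # 0.0 (unambiguous) .. 1.0 (highly ambiguous)


def choose_modality(item: InfoItem) -> Modality:
    """Pick an output modality for one piece of route information."""
    if item.is_spatial and item.referent_visible:
        # Present ambiguous spatial references redundantly (speech + gesture).
        return Modality.BOTH if item.ambiguity > 0.5 else Modality.GESTURE
    return Modality.LANGUAGE


print(choose_modality(InfoItem("turn left at the fountain", True, True, 0.7)))
# -> Modality.BOTH
```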

    Design of a virtual human presenter

    We have created a virtual human presenter who accepts speech texts with embedded commands as input. The presenter acts in real-time 3D animation synchronized with speech. The system was developed on the Jack animated-agent system. Jack provides a 3D graphical environment for controlling articulated figures, including a detailed human model.

    Design of a Virtual Human Presenter

    We created a virtual human presenter based on extensions to the Jack™ animated agent system. Inputs to the presenter system are in the form of speech texts with embedded commands, most of which relate to the virtual presenter's body language. The system then has him act as a presenter with presentation skills, in real-time 3D animation synchronized with speech output. He can give presentations with virtual visual aids, in virtual 3D environments, or even on the WWW.
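
    As a rough illustration of the input format this abstract describes (speech text with embedded body-language commands), the sketch below separates such markup into plain speech and position-anchored commands. The {command} tag syntax and the command names are assumptions for demonstration, not the system's actual markup.

```python
# Illustrative sketch: split a speech text with embedded commands into the
# text to speak and a list of commands anchored to character offsets, so
# gestures can later be synchronized with the speech output.
# The {command} syntax is an assumed stand-in for the paper's markup.
import re
from typing import List, Tuple

TAG = re.compile(r"\{(\w+)\}")


def parse_presentation_text(text: str) -> Tuple[str, List[Tuple[int, str]]]:
    """Return (plain speech text, [(offset in plain text, command), ...])."""
    commands: List[Tuple[int, str]] = []
    plain_parts: List[str] = []
    pos = 0      # position in the original marked-up text
    out_len = 0  # length of the plain speech text built so far
    for match in TAG.finditer(text):
        plain_parts.append(text[pos:match.start()])
        out_len += match.start() - pos
        commands.append((out_len, match.group(1)))
        pos = match.end()
    plain_parts.append(text[pos:])
    return "".join(plain_parts), commands


speech, cmds = parse_presentation_text(
    "Welcome. {point_screen} This chart shows our results. {nod}"
)
print(speech)  # "Welcome.  This chart shows our results. "
print(cmds)    # [(9, 'point_screen'), (40, 'nod')]
```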

    Gaze Behavior, Believability, Likability and the iCat

    The iCat is a user-interface robot with the ability to express a range of emotions through its facial features. This paper summarizes our research into whether we can increase the believability and likability of the iCat for its human partners through the application of gaze behaviour. Gaze behaviour serves several functions during social interaction, such as mediating conversation flow, communicating emotional information, and avoiding distraction by restricting visual input. Several types of eye and head movements are necessary for realizing these functions. We designed and evaluated a gaze behaviour system for the iCat robot that implements realistic models of the major types of eye and head movements found in living beings: vergence, the vestibulo-ocular reflex, smooth pursuit, and gaze shifts. We discuss how these models are integrated into the software environment of the iCat and can be used to create complex interaction scenarios. We report on user tests and draw conclusions for future evaluation scenarios.
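
    As a rough illustration of the eye/head coordination behind the gaze shifts mentioned in this abstract, the sketch below couples a fast eye movement with a slower head movement and a simplified vestibulo-ocular reflex. The gains, time step, and angles are illustrative assumptions, not parameters of the iCat implementation.

```python
# Minimal sketch of eye/head coordination during a gaze shift:
# the eyes acquire the target quickly, the head follows slowly, and a
# simplified vestibulo-ocular reflex (VOR) counter-rotates the eyes so the
# gaze direction stays on target while the head catches up.
from dataclasses import dataclass


@dataclass
class GazeState:
    eye: float = 0.0   # eye angle relative to the head (degrees)
    head: float = 0.0  # head angle in world coordinates (degrees)

    @property
    def gaze(self) -> float:
        # Gaze direction in world coordinates.
        return self.head + self.eye


def gaze_shift_step(state: GazeState, target: float, dt: float = 0.02,
                    eye_gain: float = 12.0, head_gain: float = 2.0) -> GazeState:
    """One control step of a gaze shift toward `target` (illustrative gains)."""
    gaze_error = target - state.gaze
    head_error = target - state.head
    eye = state.eye + eye_gain * gaze_error * dt
    head = state.head + head_gain * head_error * dt
    eye -= head - state.head  # VOR: compensate for the head movement
    return GazeState(eye=eye, head=head)


state = GazeState()
for _ in range(50):  # ~1 second of simulated control at 50 Hz
    state = gaze_shift_step(state, target=30.0)
print(round(state.gaze, 1), round(state.head, 1))
# gaze on target (~30.0), head still catching up (~26)
```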