
    INGREDIBLE: A platform for full body interaction between human and virtual agent that improves co-presence

    This paper presents a platform dedicated to full-body interaction between a virtual agent and a human, or between two virtual agents. It is based on the notion of coupling and the metaphor of alive communication, both of which come from studies in psychology. The platform has a modular architecture composed of modules that communicate through messages. Four modules have been implemented, for human tracking, motion analysis, decision computation, and rendering; the paper describes all of them. Part of the decision module is generic, meaning it can be reused across different sensorimotor-based interactions, while the rest is strictly dependent on the type of scenario one wants to obtain. An application example for a fitness exergame scenario is also presented.
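The message-passing pipeline the abstract describes (tracking, motion analysis, decision computation, rendering) can be sketched as follows. This is an illustrative toy, not the INGREDIBLE code: the bus, topic names, and feature computation are all hypothetical stand-ins for the platform's actual messaging layer.

```python
# Minimal sketch of modules communicating through messages, assuming a
# simple in-process publish/subscribe bus (hypothetical, for illustration).
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

def tracking(bus, frame):
    # Human-tracking module: emit raw skeleton data.
    bus.publish("skeleton", {"joints": frame})

def make_analysis(bus):
    def analysis(msg):
        # Motion-analysis module: derive a simple motion feature.
        bus.publish("features", {"energy": sum(msg["joints"])})
    return analysis

def make_decision(bus):
    def decision(msg):
        # Decision module: choose the agent's response from the features.
        action = "mirror" if msg["energy"] > 1.0 else "idle"
        bus.publish("action", {"action": action})
    return decision

log = []  # rendering stand-in: record the actions it would display
bus = MessageBus()
bus.subscribe("skeleton", make_analysis(bus))
bus.subscribe("features", make_decision(bus))
bus.subscribe("action", log.append)

tracking(bus, [0.4, 0.9])  # energy = 1.3 > 1.0, so the agent mirrors
print(log)  # [{'action': 'mirror'}]
```

A design like this keeps the generic part of the decision module swappable: a new scenario only replaces the handler subscribed to the feature topic.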

    TranSTYLer: Multimodal Behavioral Style Transfer for Facial and Body Gestures Generation

    This paper addresses the challenge of transferring the behavior expressivity style of one virtual agent to another while preserving behavior shape, as it carries communicative meaning. Behavior expressivity style is viewed here as the qualitative properties of behaviors. We propose TranSTYLer, a multimodal transformer-based model that synthesizes the multimodal behaviors of a source speaker with the style of a target speaker. We assume that behavior expressivity style is encoded across various modalities of communication, including text, speech, body gestures, and facial expressions. The model employs a style and content disentanglement schema to ensure that the transferred style does not interfere with the meaning conveyed by the source behaviors. Our approach eliminates the need for style labels and generalizes to styles that have not been seen during the training phase. We train our model on the PATS corpus, which we extended to include dialog acts and 2D facial landmarks. Objective and subjective evaluations show that our model outperforms state-of-the-art models in style transfer for both seen and unseen styles. To tackle the issues of style and content leakage that may arise, we propose a methodology to assess the degree to which behaviors and gestures associated with the target style are successfully transferred, while ensuring the preservation of those related to the source content.
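The style/content disentanglement idea in the abstract can be made concrete with a deliberately simplified toy, which is not TranSTYLer itself: represent a behavior as a feature vector, split it into a content part and a style part with hypothetical "encoders", and recombine source content with target style.

```python
# Toy disentanglement sketch (hypothetical): assume the first half of a
# feature vector carries content and the second half carries style. In the
# real model both parts are learned latent codes, not fixed slices.

def encode(features):
    # Split a behavior vector into (content, style) parts.
    mid = len(features) // 2
    return features[:mid], features[mid:]

def decode(content, style):
    # Recombine a content code with a style code into one behavior vector.
    return content + style

source = [1.0, 2.0, 0.1, 0.1]  # source speaker: its content, neutral style
target = [9.0, 9.0, 0.8, 0.9]  # target speaker: other content, expressive style

content_src, _ = encode(source)
_, style_tgt = encode(target)

transferred = decode(content_src, style_tgt)
print(transferred)  # [1.0, 2.0, 0.8, 0.9]: source content, target style
```

The leakage problem the paper mentions corresponds here to content features bleeding into the style slice (or vice versa), which is why the learned version needs an explicit disentanglement objective.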