
    Fourteenth Biennial Status Report: March 2017 - February 2019


    Sharing emotions and space - empathy as a basis for cooperative spatial interaction

    Boukricha H, Nguyen N, Wachsmuth I. Sharing emotions and space - empathy as a basis for cooperative spatial interaction. In: Kopp S, Marsella S, Thorisson K, Vilhjalmsson HH, eds. Proceedings of the 11th International Conference on Intelligent Virtual Agents (IVA 2011). LNAI. Vol 6895. Berlin, Heidelberg: Springer; 2011: 350-362.
    Empathy is believed to play a major role as a basis for humans' cooperative behavior. Recent research shows that humans empathize with each other to different degrees depending on several modulation factors including, among others, their social relationships, their mood, and the situational context. In human spatial interaction, partners share and sustain a space that is equally and exclusively reachable to them, the so-called interaction space. In a cooperative interaction scenario of relocating objects in interaction space, we introduce an approach for triggering and modulating a virtual human's cooperative spatial behavior by its degree of empathy with its interaction partner. That is, spatial distances, such as object distances as well as distances of arm and body movements while relocating objects in interaction space, are modulated by the virtual human's degree of empathy. In this scenario, the virtual human's empathic emotion is generated as a hypothesis about the partner's emotional state as related to the physical effort needed to perform a goal-directed spatial behavior.
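    The modulation idea in this abstract — spatial distances scaled by the virtual human's degree of empathy — could be sketched as follows. The function name, the linear-interpolation rule, and the clamping are assumptions for illustration, not details taken from the paper.

    ```python
    # Hypothetical sketch: scale how far the virtual human reaches into
    # interaction space by its degree of empathy with the partner.
    # The linear rule and parameter names are assumptions, not the paper's model.

    def modulated_distance(base_distance: float, empathy: float,
                           max_extra: float = 0.5) -> float:
        """Return a reach distance modulated by empathy.

        empathy in [0, 1]: 0 = no empathy (the virtual human takes on no
        extra physical effort), 1 = full empathy (it takes on up to
        `max_extra` additional reach to spare the partner effort).
        """
        empathy = min(max(empathy, 0.0), 1.0)  # clamp to the valid range
        return base_distance + empathy * max_extra
    ```

    Under this reading, a higher empathic response toward the partner translates directly into longer arm and body movements by the virtual human, reducing the effort left to the partner.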

    INSPIRE Newsletter Spring 2022


    Extending semantic long-term knowledge on the basis of episodic short-term knowledge

    Voss I, Wachsmuth I. Extending semantic long-term knowledge on the basis of episodic short-term knowledge. In: Schmalhofer F, Young RM, Katz G, eds. Proceedings of the EuroCogSci03. Mahwah, NJ, USA: Lawrence Erlbaum Associates; 2003: 445-445.

    Thirteenth Biennial Status Report: April 2015 - February 2017


    Social behavior modeling based on Incremental Discrete Hidden Markov Models

    Modeling multimodal face-to-face interaction is a crucial step in the process of building social robots or user-aware Embodied Conversational Agents (ECAs). In this context, we present a novel approach for human behavior analysis and generation based on what we call an "Incremental Discrete Hidden Markov Model" (IDHMM). Joint multimodal activities of interlocutors are first modeled by a set of DHMMs that are specific to supposed joint cognitive states of the interlocutors. Respecting a task-specific syntax, the IDHMM is then built from these DHMMs and split into (i) a recognition model that determines the most likely sequence of cognitive states given the multimodal activity of the interlocutor, and (ii) a generative model that computes the most likely activity of the speaker given this estimated sequence of cognitive states. Short-Term Viterbi (STV) decoding is used to incrementally recognize and generate behavior. The proposed model is applied to parallel speech and gaze data of interacting dyads.
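    The two-stage idea in this abstract — decode cognitive states from observed activity, then emit the most likely action per state — can be illustrated with a standard discrete-HMM Viterbi decoder. This is a minimal sketch under assumed matrix layouts, not the authors' IDHMM implementation (in particular, it decodes a whole sequence at once rather than incrementally with Short-Term Viterbi).

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely state sequence for discrete observations `obs`.

        pi: initial state probabilities, shape (S,)
        A:  state transition probabilities, shape (S, S)
        B:  emission probabilities, shape (S, O)
        """
        S, T = len(pi), len(obs)
        delta = np.zeros((T, S))          # best log-prob ending in each state
        psi = np.zeros((T, S), dtype=int) # backpointers
        delta[0] = np.log(pi) + np.log(B[:, obs[0]])
        for t in range(1, T):
            scores = delta[t - 1][:, None] + np.log(A)
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
        states = np.zeros(T, dtype=int)
        states[-1] = delta[-1].argmax()
        for t in range(T - 2, -1, -1):    # backtrack
            states[t] = psi[t + 1, states[t + 1]]
        return states

    def generate(states, C):
        """Generation stage: most likely action per decoded state.
        C: per-state action probabilities, shape (S, K) (hypothetical)."""
        return [int(C[s].argmax()) for s in states]
    ```

    In the paper's terms, the recognition model plays the role of `viterbi` over the interlocutor's multimodal activity, and the generative model plays the role of `generate` for the speaker's behavior.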

    Modeling Perception-Action Loops: Comparing Sequential Models with Frame-Based Classifiers

    Modeling multimodal perception-action loops in face-to-face interactions is a crucial step in the process of building sensory-motor behaviors for social robots or user-aware Embodied Conversational Agents (ECAs). In this paper, we compare trainable behavioral models based on sequential models (HMMs) and on classifiers (SVMs and Decision Trees), which are inherently ill-suited to modeling sequential structure. These models aim at giving pertinent perception/action skills to robots in order to generate optimal actions given the perceived actions of others and joint goals. We applied these models to parallel speech and gaze data collected from interacting dyads. The challenge was to predict the gaze of one subject given the gaze of the interlocutor and the voice activity of both. We show that the Incremental Discrete HMM (IDHMM) generally outperforms the classifiers and that injecting input context into the modeling process significantly improves the performance of all algorithms.
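    "Injecting input context" for a frame-based classifier typically means stacking a window of past frames into each feature vector, so that a model with no sequential memory still sees recent history. The sketch below shows one common way to do this; the function name, window convention, and zero-padding are assumptions for illustration, not taken from the paper.

    ```python
    import numpy as np

    def add_context(X, window=3):
        """Stack each frame with its `window - 1` predecessors (zero-padded).

        X: (T, F) array of per-frame features (e.g. gaze target and voice
        activity of both interlocutors) -> (T, F * window) array, where the
        last F columns of each row hold the current frame.
        """
        T, F = X.shape
        padded = np.vstack([np.zeros((window - 1, F)), X])  # pad the past
        return np.hstack([padded[i:i + T] for i in range(window)])
    ```

    The contextualized matrix can then be fed to any frame-based classifier (an SVM or Decision Tree, as compared in the paper), which is one plausible reading of why added context narrows the gap to the sequential models.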

    Kommunikation und Körper (Embodied Communication)
