6,975 research outputs found

    Socially aware conversational agents

    “You, Move There!”: Investigating the Impact of Feedback on Voice Control in Virtual Environments

    Current virtual environment (VE) input techniques often overlook speech as a useful control modality. Speech could improve interaction in multimodal VEs by enabling users to address objects, locations, and agents, yet research on how to design effective speech interaction for VEs is limited. Our paper investigates the effect of agent feedback on speech-based VE experiences. In a lab study, users commanded agents to navigate a VE, receiving auditory, visual, or behavioural feedback. Based on post-interaction semi-structured interviews, we find that the type of feedback given by agents is critical to user experience. Specifically, auditory feedback was preferred, as it allowed users to engage with other modalities seamlessly during interaction. Although command-like utterances were frequently used, they were perceived as contextually appropriate, ensuring users were understood. Many users also found it difficult to discover speech-based functionality. Drawing on these findings, we discuss key challenges for designing speech input for VEs.
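    As a minimal illustration of the three feedback conditions compared in the study, the sketch below dispatches a recognised voice command to an auditory, visual, or behavioural acknowledgement. All names and messages are hypothetical; the paper does not publish an implementation.

        from dataclasses import dataclass
        from enum import Enum, auto

        class Feedback(Enum):
            AUDITORY = auto()     # spoken confirmation by the agent
            VISUAL = auto()       # marker rendered at the target location
            BEHAVIOURAL = auto()  # the agent simply starts moving

        @dataclass
        class Command:
            agent_id: str
            target: tuple  # world coordinates, e.g. (x, y, z)

        def acknowledge(cmd: Command, mode: Feedback) -> str:
            """Describe how the agent confirms a 'move there' command."""
            if mode is Feedback.AUDITORY:
                return f"agent {cmd.agent_id} says: 'Moving to {cmd.target}'"
            if mode is Feedback.VISUAL:
                return f"highlight rendered at {cmd.target}"
            return f"agent {cmd.agent_id} walks toward {cmd.target}"

        print(acknowledge(Command("a1", (3, 0, 7)), Feedback.AUDITORY))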

    Meetings and Meeting Modeling in Smart Environments

    In this paper we survey our research on smart meeting rooms and its relevance for augmented-reality meeting support and for real-time or off-line virtual-reality generation of meetings. The research reported here forms part of the European 5th and 6th Framework Programme projects Multi-Modal Meeting Manager (M4) and Augmented Multi-party Interaction (AMI). Both projects aim at building a smart meeting environment able to collect multimodal captures of the activities and discussions in a meeting room, and to use this information as input to tools for real-time support, browsing, retrieval, and summarization of meetings. Our aim is to research (semantic) representations of what takes place during meetings in order to allow generation, e.g. in virtual reality, of meeting activities (discussions, presentations, voting, etc.). Being able to do so also allows us to look at tools that provide support during a meeting and at tools that let those who cannot be physically present take part in a virtual way. This may lead to situations where the differences between real meeting participants, human-controlled virtual participants, and (semi-)autonomous virtual participants disappear.
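    To make the idea of a (semantic) meeting representation concrete, here is a minimal sketch of a timestamped meeting-event record and the kind of overlap query a meeting browser or summarizer would issue. The schema is illustrative only; M4/AMI define much richer annotation layers.

        from dataclasses import dataclass, field

        @dataclass
        class MeetingEvent:
            start: float   # seconds from meeting start
            end: float
            kind: str      # "discussion", "presentation", "vote", ...
            participants: list[str] = field(default_factory=list)
            transcript: str = ""

        def events_between(events: list[MeetingEvent], t0: float, t1: float) -> list[MeetingEvent]:
            """Return events overlapping [t0, t1], for browsing/retrieval."""
            return [e for e in events if e.start < t1 and e.end > t0]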

    Computational models of social and emotional turn-taking for embodied conversational agents: a review

    The emotional involvement of participants in a conversation shows not only in the words they speak and in the way they speak and gesture, but also in their turn-taking behavior. This paper reviews research into computational models of embodied conversational agents. We focus on models for turn-taking management and (social) emotions. We are particularly interested in how, in these models, the emotions of the agent and of its interlocutors influence the agent's turn-taking behavior, and conversely how the agent perceives the turn-taking behavior of its partner. The system of turn-taking rules presented by Sacks, Schegloff and Jefferson (1974) is often the starting point for computational turn-taking models of conversational agents, but emotions follow rules of their own beyond the "one-at-a-time" paradigm of the SSJ system. It turns out that, almost without exception, computational models of turn-taking behavior that allow "continuous interaction" and "natural turn-taking" do not model the underlying psychological, affective, attentional, and cognitive processes; they are restricted to rules stated in terms of superficially observable cues. On the other hand, computational models for virtual humans based on a functional theory of social emotion do not contain explicit rules on how social emotions affect turn-taking behavior, or on how the agent's emotional state is affected by the turn-taking behavior of its interlocutors. We conclude with some preliminary ideas on what an architecture for emotional turn-taking should look like, and we discuss the challenges in building believable emotional turn-taking agents.
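    As a toy illustration of how an emotional state could modulate SSJ-style turn allocation, the sketch below uses a single hypothetical "arousal" value to shorten the pause an agent waits for before claiming the turn. The models reviewed in the paper are far richer; this is not any one of them.

        def take_turn(pause_ms: float, addressed_to_me: bool, arousal: float) -> bool:
            """Decide whether the agent claims the next turn.

            SSJ rule 1: the selected next speaker takes the turn at a
            transition-relevance place (approximated here by a pause).
            Emotion hook: high arousal lowers the pause threshold, so an
            agitated agent may break the one-at-a-time paradigm.
            """
            threshold = 400.0 * (1.0 - 0.8 * arousal)  # calmer -> waits longer
            if addressed_to_me and pause_ms >= threshold:
                return True                      # selected speaker self-starts
            return pause_ms >= 2.0 * threshold   # otherwise self-select, but later

        assert take_turn(pause_ms=500, addressed_to_me=True, arousal=0.0)
        assert take_turn(pause_ms=150, addressed_to_me=True, arousal=0.9)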

    Collaborative Virtual Training with Physical and Communicative Autonomous Agents

    Virtual agents are a real asset in collaborative virtual environments for training (CVETs), as they can replace missing team members. Collaboration between such agents and users, however, is generally limited. We present an integrated CVET model that abstracts over the real or virtual nature of an actor to define a homogeneous collaboration model. First, we define a new collaborative model of interaction; this model notably abstracts over whether a teammate is real or virtual. Moreover, we propose a new role-exchange approach so that actors can swap roles during training. The model also permits physically based object and character animation, increasing the realism of the world. Second, we design a new communicative agent model that aims to improve collaboration with other actors, using dialogue to coordinate actions and share knowledge. Finally, we evaluate the proposed model to estimate the resulting benefits for users, and we show that it integrates into existing CVET applications.
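    The abstraction over an actor's real or virtual nature can be pictured as a common interface behind which both human-controlled and autonomous teammates sit, with role exchange as a swap across that interface. The sketch below is illustrative only; the names are not the paper's API.

        from abc import ABC, abstractmethod

        class Actor(ABC):
            """A CVET teammate, human-controlled or autonomous."""
            def __init__(self, role: str):
                self.role = role

            @abstractmethod
            def act(self, task: str) -> None: ...

        class HumanActor(Actor):
            def act(self, task: str) -> None:
                print(f"[user as {self.role}] performs: {task}")

        class AgentActor(Actor):
            def act(self, task: str) -> None:
                print(f"[agent as {self.role}] plans and performs: {task}")

        def swap_roles(a: Actor, b: Actor) -> None:
            """Role exchange during training: the rest of the system only
            sees Actor, so it is unaffected by which side is human."""
            a.role, b.role = b.role, a.role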

    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and MultiAgent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers were carefully reviewed and selected from the submissions received. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.