266 research outputs found

    Agents for educational games and simulations

    This book consists mainly of revised papers presented at the Agents for Educational Games and Simulations (AEGS) workshop held on May 2, 2011, as part of the Autonomous Agents and Multiagent Systems (AAMAS) conference in Taipei, Taiwan. The 12 full papers were carefully reviewed and selected from the submissions. The papers are organized in topical sections on middleware applications, dialogues and learning, adaptation and convergence, and agent applications.

    Managing agent's impression based on user's engagement detection

    When interacting with others, we form impressions that can be described along the two psychological dimensions of warmth and competence. By managing these impressions, a high level of engagement in an interaction can be maintained and reinforced. Our aim is to develop a virtual agent that can form and maintain a positive impression on the user, which can help improve the quality of the interaction and the user's experience. In this paper, we present an interactive system in which a virtual agent adopts a dynamic communication strategy during the interaction with a user, aiming to form and maintain a positive impression of warmth and competence. The agent continuously analyzes the user's non-verbal signals to determine the user's engagement level and adapts its communication strategy accordingly. We present a study in which we manipulate the communication strategy of the agent and measure the user's experience and the user's perception of the agent's warmth and competence.
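    The sense-and-adapt loop described in the abstract can be sketched as follows. The cue names, equal weighting, threshold, and alternation rule are illustrative assumptions for this sketch, not the paper's actual detection model:

```python
def engagement_score(signals):
    # Toy fusion of non-verbal cues into a single engagement estimate in [0, 1].
    # The cue names and equal weighting are assumptions for illustration.
    cues = ("gaze_on_agent", "smiling", "leaning_forward")
    return sum(signals.get(c, 0.0) for c in cues) / len(cues)

def adapt_strategy(current, score, threshold=0.5):
    # If detected engagement drops below the threshold, switch between the
    # two impression dimensions; otherwise keep the current strategy.
    if score < threshold:
        return "competence" if current == "warmth" else "warmth"
    return current

score = engagement_score({"gaze_on_agent": 1.0, "smiling": 0.2, "leaning_forward": 0.3})
strategy = adapt_strategy("warmth", score)
```

    In the actual system, the detected signals would come from sensors rather than a hand-built dictionary, and the adaptation policy would be richer than a single threshold.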

    Managing an agent's self-presentational strategies during an interaction

    In this paper we present a computational model for managing the impressions of warmth and competence (the two fundamental dimensions of social cognition) conveyed by an Embodied Conversational Agent (ECA) while it interacts with a human. The ECA can choose among four different self-presentational strategies that elicit different impressions of warmth and/or competence in the user through its verbal and non-verbal behavior. The choice of the non-verbal behaviors displayed by the ECA relies on our previous studies. In the first study, we annotated videos of natural human-human interactions in which an expert on a given topic talks to a novice, in order to find associations between the warmth and competence elicited by the expert and the expert's non-verbal behaviors (such as type of gestures, arm rest poses, and smiling). In a second study, we investigated whether the most relevant non-verbal cues found in the first study were perceived in the same way when displayed by an ECA. The computational learning model presented in this paper aims to learn, in real time, the best strategy for the ECA (i.e., the degree of warmth and/or competence to display), that is, the one that maximizes the user's engagement during the interaction. We also present an evaluation study investigating our model in a real context. In the experimental scenario, the ECA plays the role of a museum guide introducing an exhibition about video games. We collected data from 75 visitors of a science museum. The ECA was displayed at human scale on a large screen in front of the participant, with a Kinect sensor on top. During the interaction, the ECA could adopt one of the four self-presentational strategies for the whole interaction, select one strategy randomly for each speaking turn, or use a reinforcement learning algorithm to choose the strategy with the highest reward (i.e., the user's engagement) after each speaking turn.
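    The turn-level strategy selection described above can be sketched as a multi-armed bandit, where each arm is a self-presentational strategy and the reward is the engagement measured after the speaking turn. The epsilon-greedy update rule and the strategy names below are assumptions for illustration; the abstract says only that a reinforcement learning algorithm is used:

```python
import random

class StrategyBandit:
    """Epsilon-greedy bandit over self-presentational strategies.

    Engagement-as-reward per speaking turn follows the abstract; the
    epsilon-greedy rule and strategy names are illustrative assumptions.
    """

    def __init__(self, strategies, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in strategies}    # times each strategy was tried
        self.values = {s: 0.0 for s in strategies}  # running mean engagement

    def select(self):
        # Explore with probability epsilon; otherwise exploit the best mean.
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, strategy, engagement):
        # Incremental mean update with the engagement measured after the turn.
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (engagement - self.values[strategy]) / n

bandit = StrategyBandit(["warmth", "competence", "warmth+competence", "neutral"])
choice = bandit.select()                 # strategy for the next speaking turn
bandit.update(choice, engagement=0.8)    # reward observed after the turn
```

    Each speaking turn thus alternates a `select` call (pick a strategy) with an `update` call (feed back the measured engagement), mirroring the per-turn adaptation the evaluation study compares against the fixed and random conditions.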

    Emotions, behaviour and belief regulation in an intelligent guide with attitude

    Abstract unavailable; please refer to the PDF.

    Designing a Chatbot Social Cue Configuration System

    Social cues (e.g., gender, age) are important design features of chatbots. However, choosing a social cue design is challenging. Although much research has empirically investigated social cues, chatbot engineers have difficulty accessing this knowledge: descriptive knowledge is usually embedded in research articles and is difficult to apply as prescriptive knowledge. To address this challenge, we propose a chatbot social cue configuration system that helps chatbot engineers access descriptive knowledge in order to make justified social cue design decisions (i.e., decisions grounded in empirical research). We derive two design principles that describe how to extract and transform descriptive knowledge into a prescriptive and machine-executable representation. In addition, we evaluate prototypical instantiations in an exploratory focus group and at two practitioner symposia. Our research addresses a contemporary problem and contributes a generalizable concept that supports researchers as well as practitioners in leveraging existing descriptive knowledge in the design of artifacts.

    Game-inspired Pedagogical Conversational Agents: A Systematic Literature Review

    Pedagogical conversational agents (PCAs) are an innovative way to help learners improve their academic performance via intelligent dialog systems. However, PCAs have not yet reached their full potential. They often fail because users perceive conversations with them as not engaging. Enriching them with game-based approaches could help mitigate this issue. One could enrich a PCA with game-based approaches by gamifying it to foster positive effects, such as fun and motivation, or by integrating it into a game-based learning (GBL) environment to promote effects such as social presence and enable individual learning support. We summarize PCAs that are combined with game-based approaches under the novel term "game-inspired PCAs". We conducted a systematic literature review on this topic, as previous literature reviews on PCAs either have not combined the topics of PCAs and GBL or have done so only to a limited extent. We analyzed the literature regarding the existing design knowledge base, the game elements used, the thematic areas and target groups, the PCA roles and types, the extent of artificial intelligence (AI) usage, and opportunities for adaptation. We reduced the initial 3,034 records to 50 fully coded papers, from which we derived a morphological box and identified current research streams and future research recommendations. Overall, our results show that the topic offers promising application potential but that scholars and practitioners have not yet considered it holistically. For instance, we found that researchers have rarely provided prescriptive design knowledge, have not sufficiently combined game elements, and have seldom used AI algorithms or intelligent user-adaptation capabilities in PCA development. Furthermore, researchers have scarcely considered certain target groups, thematic areas, and PCA roles. Consequently, our paper contributes to research and practice by addressing these research gaps and structuring the existing knowledge base.

    Enhancing computer-human interaction with animated facial expressions

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Architecture, 1991, by Brent Cabot James Britton. Includes bibliographical references (leaves 87-93).

    From Verbs to Tasks: An Integrated Account of Learning Tasks from Situated Interactive Instruction.

    Intelligent collaborative agents are becoming common in human society. From virtual assistants such as Siri and Google Now to assistive robots, they contribute to human activities in a variety of ways. As they become more pervasive, the challenge of customizing them to a variety of environments and tasks becomes critical. It is infeasible for engineers to program them for each individual use. Our research aims to build interactive robots and agents that adapt to new environments autonomously by interacting with human users through natural modalities. This dissertation studies the problem of learning novel tasks from human-agent dialog. We propose a novel approach to interactive task learning, situated interactive instruction (SII), and investigate approaches to three computational challenges that arise in designing SII agents: situated comprehension, mixed-initiative interaction, and interactive task learning. We propose a novel mixed-modality grounded representation for task verbs that encompasses their lexical, semantic, and task-oriented aspects. This representation is useful in situated comprehension and can be learned through human-agent interactions. We introduce the Indexical Model of comprehension, which exploits extra-linguistic context to resolve semantic ambiguities in the situated comprehension of task commands. The Indexical Model is integrated with a mixed-initiative interaction model that facilitates a flexible task-oriented human-agent dialog. This dialog serves as the basis of interactive task learning. We propose an interactive variation of explanation-based learning that can acquire the proposed representation. We demonstrate that our learning paradigm is efficient, can transfer knowledge between structurally similar tasks, integrates agent-driven exploration with instructional learning, and can acquire several tasks.
    The methods proposed in this thesis are integrated in Rosie, a generally instructable agent developed in the Soar cognitive architecture and embodied on a table-top robot. Ph.D. thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/111573/1/shiwali_1.pd
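    A rough illustration of the indexical idea, resolving an ambiguous command against the perceptual scene rather than language alone, is sketched below. The object representation, attribute names, and single-pass filtering are assumptions for this sketch, not the dissertation's actual grounded representation:

```python
def resolve_referent(description, scene):
    """Toy indexical resolution: pick the scene object matching every
    attribute in the linguistic description. Attribute names and the
    dict-based scene are illustrative assumptions."""
    matches = [obj for obj in scene
               if all(obj.get(k) == v for k, v in description.items())]
    # A unique match yields a grounded referent; an ambiguous or empty
    # result is where a mixed-initiative agent would ask a clarification
    # question instead of guessing.
    return matches[0] if len(matches) == 1 else None

scene = [
    {"type": "block", "color": "red", "location": "table"},
    {"type": "block", "color": "blue", "location": "table"},
]
obj = resolve_referent({"type": "block", "color": "red"}, scene)
ambiguous = resolve_referent({"type": "block"}, scene)
```

    The sketch shows only the comprehension half; in the dissertation, the grounded representation is also what the explanation-based learning component acquires from the dialog.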