    Deep Active Learning for Dialogue Generation

    We propose an online, end-to-end, neural generative conversational model for open-domain dialogue. It is trained using a unique combination of offline two-phase supervised learning and online human-in-the-loop active learning. While most existing research proposes offline supervision or hand-crafted reward functions for online reinforcement, we devise a novel interactive learning mechanism based on Hamming-diverse beam search for response generation and one-character user feedback at each step. Experiments show that our model inherently promotes the generation of semantically relevant and interesting responses, and can be used to train agents with customized personas, moods and conversational styles. Comment: Accepted at 6th Joint Conference on Lexical and Computational Semantics (*SEM) 2017 (previously titled "Online Sequence-to-Sequence Active Learning for Open-Domain Dialogue Generation" on ArXiv).
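    The Hamming-diverse beam search mentioned in the abstract can be illustrated with a toy sketch: beams are split into groups, and each group is penalized for choosing tokens that earlier groups already emitted at the same time step. The function names, the unigram scoring model, and all parameter values below are hypothetical illustrations, not the paper's implementation.

    ```python
    import math

    def hamming_diverse_beam_search(logprob_fn, vocab, steps,
                                    num_groups=2, beams_per_group=1,
                                    penalty=2.0):
        """Toy group-wise diverse beam search with a Hamming penalty."""
        # Each group starts with one empty hypothesis scored 0.
        groups = [[((), 0.0)] for _ in range(num_groups)]
        for _ in range(steps):
            chosen_at_t = []  # tokens emitted by earlier groups this step
            for g in range(num_groups):
                candidates = []
                for seq, score in groups[g]:
                    for tok in vocab:
                        s = score + logprob_fn(seq, tok)
                        # Hamming diversity: penalize tokens already picked
                        # by previous groups at the same position.
                        s -= penalty * chosen_at_t.count(tok)
                        candidates.append((seq + (tok,), s))
                candidates.sort(key=lambda c: c[1], reverse=True)
                groups[g] = candidates[:beams_per_group]
                chosen_at_t.extend(seq[-1] for seq, _ in groups[g])
        return groups

    def toy_logprob(seq, tok):
        # Hypothetical unigram model that always prefers "a".
        return math.log({"a": 0.6, "b": 0.3, "c": 0.1}[tok])
    ```

    With a large enough penalty, the second group's top beam is forced away from the first group's choice, which is the property the paper exploits to present the user with diverse response candidates.
    
    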

    The Virtual Tutor: Tasks for conversational agents in Online Collaborative Learning Environments

    Online collaborative learning environments are becoming increasingly popular in higher education. E-tutors need to supervise and guide students and watch for conflicts within the online environment to ensure a successful learning experience. Web-based platforms allow for interactive elements such as conversational agents to relieve the e-tutor. Repeatable tasks that do not require a human response can be automated by these systems. The aim of this study is to identify and synthesize the tasks an e-tutor has and to investigate their automation potential with conversational agents. Using a design science research approach, a literature review is conducted that identifies 13 tasks. Subsequently, a matrix is established, contrasting the tasks with requirements for the use of conversational agents. Furthermore, a virtual tutor framework is developed, clarifying the agent type selection, the technical structure and the components for a prototype development in an online collaborative learning environment.

    A Contextualized Real-Time Multimodal Emotion Recognition for Conversational Agents using Graph Convolutional Networks in Reinforcement Learning

    Owing to the recent developments in Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs), conversational agents are becoming increasingly popular and accepted. They provide a human touch by interacting in ways familiar to us and by providing support as virtual companions. Therefore, it is important to understand the user's emotions in order to respond considerately. Compared to the standard problem of emotion recognition, conversational agents face an additional constraint in that recognition must be real-time. Studies on model architectures using audio, visual, and textual modalities have mainly focused on emotion classification using full video sequences that do not provide online features. In this work, we present a novel paradigm for contextualized Emotion Recognition using Graph Convolutional Network with Reinforcement Learning (conER-GRL). Conversations are partitioned into smaller groups of utterances for effective extraction of contextual information. The system uses Gated Recurrent Units (GRUs) to extract multimodal features from these groups of utterances. More importantly, Graph Convolutional Networks (GCNs) and Reinforcement Learning (RL) agents are cascade-trained to capture the complex dependencies of emotion features in interactive scenarios. Comparing the results of the conER-GRL model with other state-of-the-art models on the benchmark dataset IEMOCAP demonstrates the advantageous capabilities of the conER-GRL architecture in recognizing emotions in real-time from multimodal conversational signals. Comment: 5 pages (4 main + 1 reference), 2 figures. Submitted to IEEE FG202
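    The utterance-grouping step described in the abstract can be sketched as a simple sliding window over a conversation. This is a minimal illustration only; the function name, the window size, and the use of an overlapping stride are assumptions, not details taken from the conER-GRL paper.

    ```python
    def partition_utterances(utterances, group_size, stride=None):
        """Split a conversation into (optionally overlapping) groups of
        utterances, e.g. as input windows for contextual feature
        extraction. A trailing partial window is kept as-is."""
        if stride is None:
            stride = group_size  # non-overlapping by default
        groups = []
        for start in range(0, len(utterances), stride):
            window = utterances[start:start + group_size]
            if window:
                groups.append(window)
        return groups
    ```

    For example, a five-utterance conversation with `group_size=3` and `stride=2` yields three windows, so each utterance (except the ends) appears in two contexts; whether such overlap helps is a design choice, not something the abstract specifies.
    
    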

    Evorus: A Crowd-powered Conversational Assistant Built to Automate Itself Over Time

    Crowd-powered conversational assistants have been shown to be more robust than automated systems, but at the cost of higher response latency and monetary costs. A promising direction is to combine the two approaches for high-quality, low-latency, and low-cost solutions. In this paper, we introduce Evorus, a crowd-powered conversational assistant built to automate itself over time by (i) allowing new chatbots to be easily integrated to automate more scenarios, (ii) reusing prior crowd answers, and (iii) learning to automatically approve response candidates. Our 5-month-long deployment with 80 participants and 281 conversations shows that Evorus can automate itself without compromising conversation quality. Crowd-AI architectures have long been proposed as a way to reduce cost and latency for crowd-powered systems; Evorus demonstrates how automation can be introduced successfully in a deployed system. Its architecture allows future researchers to innovate further on the underlying automated components in the context of a deployed open-domain dialog system. Comment: 10 pages. To appear in the Proceedings of the Conference on Human Factors in Computing Systems 2018 (CHI'18).
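    The hybrid approval idea in point (iii) can be sketched as a policy that auto-approves a candidate when a learned confidence score is high enough and otherwise falls back to crowd voting. The thresholds, the quorum rule, and the function itself are hypothetical; Evorus's actual approval model is not described in this abstract.

    ```python
    def approve_response(auto_score, crowd_upvotes,
                         auto_threshold=0.8, vote_quorum=2):
        """Hypothetical hybrid approval policy: automation handles
        confident cases, the crowd decides the rest."""
        if auto_score >= auto_threshold:
            return "auto-approved"      # no crowd labor spent
        if crowd_upvotes >= vote_quorum:
            return "crowd-approved"     # crowd consensus reached
        return "rejected"
    ```

    A design consequence of such a policy is that, as the learned scorer improves, more candidates clear the automatic threshold and crowd cost drops over time, which matches the "automate itself over time" goal stated in the abstract.
    
    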

    The use of animated agents in e-learning environments: an exploratory, interpretive case study

    There is increasing interest in the use of animated agents in e-learning environments. However, empirical investigations of their use in online education are limited. Our aim is to provide an empirically based framework for the development and evaluation of animated agents in e-learning environments. Findings suggest a number of challenges, including the multiple dialogue models that animated agents will need to accommodate, the diverse range of roles that pedagogical animated agents can usefully support, the dichotomous relationship that emerges between these roles and that of the lecturer, and student perception of the degree of autonomy that can be afforded to animated agents.