
    Personalizing Dialogue Agents via Meta-Learning

    Existing personalized dialogue models use human-designed persona descriptions to improve dialogue consistency. Collecting such descriptions from existing dialogues is expensive and requires hand-crafted feature designs. In this paper, we propose to extend Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) to personalized dialogue learning without using any persona descriptions. Our model learns to quickly adapt to new personas by leveraging only a few dialogue samples collected from the same user, which is fundamentally different from conditioning the response on persona descriptions. Empirical results on the Persona-chat dataset (Zhang et al., 2018) indicate that our solution outperforms non-meta-learning baselines on automatic evaluation metrics as well as on human-evaluated fluency and consistency.
    Comment: Accepted at ACL 2019. Zhaojiang Lin* and Andrea Madotto* contributed equally to this work.
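    To make the meta-learning recipe concrete, here is a minimal Python/PyTorch sketch of MAML-style fast adaptation for persona dialogue. It is an illustration under assumed interfaces (a generic model, a loss function, and per-persona support/query batches), not the paper's actual implementation.

        import torch

        def maml_meta_step(model, loss_fn, persona_tasks, meta_opt, inner_lr=1e-2):
            """One meta-update over a batch of persona 'tasks'.
            Each task is ((s_inputs, s_targets), (q_inputs, q_targets)):
            a few dialogues from one user, split into support and query sets."""
            meta_opt.zero_grad()
            meta_loss = 0.0
            for (s_inputs, s_targets), (q_inputs, q_targets) in persona_tasks:
                # Inner loop: adapt a copy of the parameters on the support dialogues.
                params = dict(model.named_parameters())
                loss = loss_fn(torch.func.functional_call(model, params, (s_inputs,)), s_targets)
                grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
                fast = {n: p - inner_lr * g for (n, p), g in zip(params.items(), grads)}
                # Outer objective: evaluate the adapted weights on held-out dialogues
                # from the same user and accumulate the meta-loss.
                meta_loss = meta_loss + loss_fn(
                    torch.func.functional_call(model, fast, (q_inputs,)), q_targets)
            meta_loss.backward()  # gradients flow back through the inner update
            meta_opt.step()
            return float(meta_loss)

    At test time the same inner-loop update is run on the few dialogues available for a new user, so the model personalizes without any persona description.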

    Learning Personalized End-to-End Goal-Oriented Dialog

    Most existing work on dialog systems considers only the conversation content, neglecting the personality of the user the bot is interacting with, which leaves several issues unsolved. In this paper, we present a personalized end-to-end model in an attempt to leverage personalization in goal-oriented dialogs. We first introduce a Profile Model, which encodes user profiles into distributed embeddings and refers to conversation history from other similar users. Then a Preference Model captures user preferences over knowledge base entities to handle ambiguity in user requests. The two models are combined into the Personalized MemN2N. Experiments show that the proposed model achieves qualitative performance improvements over state-of-the-art methods. In human evaluation, it also outperforms other approaches in terms of task completion rate and user satisfaction.
    Comment: Accepted by AAAI 2019
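    As a rough illustration of the two components described above (not the authors' Personalized MemN2N), the Python sketch below combines a profile embedding with candidate-response scoring and adds a learned preference bias over knowledge-base entities; all names and dimensions are hypothetical.

        import torch
        import torch.nn as nn

        class PersonalizedScorer(nn.Module):
            """Toy response ranker: profile-conditioned matching plus KB-entity preference."""
            def __init__(self, vocab_size, n_profile_attrs, n_kb_entities, dim=64):
                super().__init__()
                self.word_emb = nn.EmbeddingBag(vocab_size, dim)          # bag-of-words encoder
                self.profile_emb = nn.EmbeddingBag(n_profile_attrs, dim)  # e.g. gender, age band
                self.pref = nn.Embedding(n_profile_attrs, n_kb_entities)  # per-attribute entity bias

            def forward(self, context_ids, candidate_ids, profile_ids, candidate_entities):
                # context_ids: (1, Lc), candidate_ids: (C, Lr), profile_ids: (1, P),
                # candidate_entities: (C, n_kb_entities) 0/1 mask of KB entities per candidate.
                query = self.word_emb(context_ids) + self.profile_emb(profile_ids)  # (1, dim)
                cand = self.word_emb(candidate_ids)                                 # (C, dim)
                match = cand @ query.squeeze(0)                                     # content match, (C,)
                # Preference bias for KB entities appearing in each candidate.
                bias = (self.pref(profile_ids).mean(dim=1).squeeze(0) * candidate_entities).sum(-1)
                return match + bias  # higher score = better personalized reply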

    Identifying users' domain expertise from dialogues


    Listening between the Lines: Learning Personal Attributes from Conversations

    Open-domain dialogue agents must be able to converse about many topics while incorporating knowledge about the user into the conversation. In this work we address the acquisition of such knowledge, for personalization in downstream Web applications, by extracting personal attributes from conversations. This problem is more challenging than the established task of information extraction from scientific publications or Wikipedia articles, because dialogues often give only implicit cues about the speaker. We propose methods for inferring personal attributes, such as profession, age or family status, from conversations using deep learning. Specifically, we propose several Hidden Attribute Models, which are neural networks leveraging attention mechanisms and embeddings. Our methods are trained on a per-predicate basis to output rankings of object values for a given subject-predicate combination (e.g., ranking the doctor and nurse professions high when speakers talk about patients, emergency rooms, etc.). Experiments with various conversational texts, including Reddit discussions, movie scripts and a collection of crowdsourced personal dialogues, demonstrate the viability of our methods and their superior performance compared to state-of-the-art baselines.
    Comment: Published in WWW '19
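    The following small Python sketch shows the general shape of such a per-predicate ranker: attention over a speaker's terms yields a speaker representation that is scored against every candidate object value (e.g. professions). It is a simplified stand-in for the Hidden Attribute Models, with invented names and dimensions.

        import torch
        import torch.nn as nn

        class AttributeRanker(nn.Module):
            """Ranks object values for one predicate (e.g. 'profession') from a speaker's terms."""
            def __init__(self, vocab_size, n_values, dim=100):
                super().__init__()
                self.term_emb = nn.Embedding(vocab_size, dim)   # terms from the speaker's utterances
                self.attn = nn.Linear(dim, 1)                   # term-level attention scores
                self.value_emb = nn.Embedding(n_values, dim)    # candidate object values

            def forward(self, term_ids):
                terms = self.term_emb(term_ids)                        # (n_terms, dim)
                weights = torch.softmax(self.attn(terms).squeeze(-1), dim=0)
                speaker = (weights.unsqueeze(-1) * terms).sum(0)       # attention-pooled speaker vector
                scores = self.value_emb.weight @ speaker               # one score per object value
                return scores.argsort(descending=True)                 # ranking of value indices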

    End-to-End Goal-Oriented Conversational Agent for Risk Awareness

    Traditional development of goal-oriented conversational agents typically requires a lot of domain-specific handcrafting, which precludes scaling up to different domains; end-to-end systems would escape this limitation because they can be trained directly from dialogues. The very promising results recently obtained in end-to-end chatbot development could carry over to goal-oriented settings: applying deep learning models to build robust and scalable goal-oriented dialog systems directly from corpora of conversations is a challenging task and an open research area. For this reason, I decided that it would be more relevant, in the context of a master's thesis, to experiment with and get acquainted with these new and promising methodologies - although not yet ready for production - rather than invest time in hand-crafting dialogue rules for a domain-specific solution. My thesis work had the following macro objectives: (i) investigate the latest research on the development of goal-oriented conversational agents; (ii) choose a reference study, understand it, and implement it with an appropriate technology; (iii) apply what was learned to a particular domain of interest. As a reference framework I chose end-to-end memory networks (MemN2N) (Sukhbaatar et al., 2015), because they have proven particularly promising and have been used as a baseline in many recent works. Since no real dialogues were available for training, however, I synthetically generated a corpus of conversations, taking a cue from the Dialog bAbI dataset for restaurant reservations (Bordes et al., 2016) and adapting it to the new domain of interest, risk awareness. Finally, I built a simple prototype that exploited the pre-trained dialog model to advise users about risk through an anthropomorphic talking avatar interface.
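    As a toy illustration of the synthetic data generation mentioned above, the Python snippet below produces bAbI-style goal-oriented exchanges from templates adapted to a risk-awareness scenario; the activities, areas and risk messages are invented examples, not taken from the thesis.

        import random

        ACTIVITIES = ["hiking", "kayaking", "cycling"]
        AREAS = ["mountain trail", "river gorge", "coastal road"]
        RISKS = {"hiking": "sudden weather changes",
                 "kayaking": "strong currents",
                 "cycling": "poor road visibility"}

        def generate_dialogue(rng=random):
            """Generate one template-based user/bot exchange about an outdoor activity."""
            activity = rng.choice(ACTIVITIES)
            area = rng.choice(AREAS)
            return [
                ("user", f"I am planning to go {activity} at the {area}."),
                ("bot", f"Which day are you planning the {activity} trip?"),
                ("user", "This weekend."),
                ("bot", f"Please be aware of {RISKS[activity]}; "
                        f"check the conditions at the {area} before leaving."),
            ]

        if __name__ == "__main__":
            for speaker, utterance in generate_dialogue():
                print(f"{speaker}: {utterance}")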