
    Dialogue manager domain adaptation using Gaussian process reinforcement learning

    Spoken dialogue systems allow humans to interact with machines using natural speech. As such, they have many benefits. By using speech as the primary communication medium, a computer interface can facilitate swift, human-like acquisition of information. In recent years, speech interfaces have become ever more popular, as is evident from the rise of personal assistants such as Siri, Google Now, Cortana and Amazon Alexa. Recently, data-driven machine learning methods have been applied to dialogue modelling and the results achieved for limited-domain applications are comparable to or outperform traditional approaches. Methods based on Gaussian processes are particularly effective as they enable good models to be estimated from limited training data. Furthermore, they provide an explicit estimate of the uncertainty, which is particularly useful for reinforcement learning. This article explores the additional steps that are necessary to extend these methods to model multiple dialogue domains. We show that Gaussian process reinforcement learning is an elegant framework that naturally supports a range of methods, including prior knowledge, Bayesian committee machines and multi-agent learning, for facilitating extensible and adaptable dialogue systems. Engineering and Physical Sciences Research Council (Grant ID: EP/M018946/1 "Open Domain Statistical Spoken Dialogue Systems").
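    The uncertainty estimate mentioned above is the predictive variance of a Gaussian process posterior over Q-values. The minimal sketch below, which assumes illustrative feature dimensions, kernel hyper-parameters and toy data (it is not the paper's GP-SARSA implementation), shows how such a posterior yields both a value estimate and its uncertainty for exploration.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_q_posterior(X_train, q_train, X_query, noise=0.1):
    """Posterior mean and variance of Q at query points, given observed returns."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_query, X_train)
    K_ss = rbf_kernel(X_query, X_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, q_train))
    mean = K_s @ alpha
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Toy data: five observed (belief-state, action) feature vectors with returns,
# queried at two new points. Exploration can then sample Q ~ N(mean, var) per
# action, preferring actions whose value is uncertain as well as high.
rng = np.random.default_rng(0)
X_train, q_train = rng.normal(size=(5, 3)), rng.normal(size=5)
mean, var = gp_q_posterior(X_train, q_train, rng.normal(size=(2, 3)))
print(mean, var)
```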

    Policy committee for adaptation in multi-domain spoken dialogue systems

    Moving from limited-domain dialogue systems to open domain dialogue systems raises a number of challenges. One of them is the ability of the system to utilise small amounts of data from disparate domains to build a dialogue manager policy. Previous work has focused on using data from different domains to adapt a generic policy to a specific domain. Inspired by Bayesian committee machines, this paper proposes the use of a committee of dialogue policies. The results show that such a model is particularly beneficial for adaptation in multi-domain dialogue systems. The use of this model significantly improves performance compared to a single policy baseline, as confirmed by a real-user trial. This is the first time a dialogue policy has been trained on multiple domains on-line in interaction with real users. The research leading to this work was funded by the EPSRC grant EP/M018946/1 "Open Domain Statistical Spoken Dialogue Systems". This is the author accepted manuscript. The final version is available from IEEE via http://dx.doi.org/10.1109/ASRU.2015.740487
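    The Bayesian committee machine combination underlying this approach merges the Gaussian Q-value estimates produced by the individual domain policies by precision weighting. The sketch below is a hedged illustration of that combination rule; the prior variance and the per-policy estimates are assumptions chosen for the example, not values from the paper.

```python
import numpy as np

def bcm_combine(means, variances, prior_var=25.0):
    """Combine M Gaussian Q-value estimates into one committee estimate."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    M = len(means)
    # Committee precision: the sum of the member precisions, corrected for the
    # prior that every member implicitly shares.
    precision = np.sum(1.0 / variances) - (M - 1) / prior_var
    var = 1.0 / precision
    mean = var * np.sum(means / variances)
    return mean, var

# Example: three domain policies scoring the same candidate action.
print(bcm_combine(means=[4.2, 3.8, 5.0], variances=[1.0, 2.5, 4.0]))
```

    Members with low predictive variance (high confidence) dominate the committee estimate, which is why the combination helps when only some domains have seen enough data to be confident.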

    Building multi-domain conversational systems from single domain resources

    Current advances in the development of mobile and smart devices have generated a growing demand for natural human-machine interaction and favored the intelligent assistant metaphor, in which a single interface gives access to a wide range of functionalities and services. Conversational systems constitute an important enabling technology in this paradigm. However, they are usually defined to interact in semantically restricted domains in which users are offered a limited number of options and functionalities. The design of multi-domain systems implies that a single conversational system is able to assist the user in a variety of tasks. In this paper we propose an architecture for the development of multi-domain conversational systems that allows: (1) integrating available multi- and single-domain speech recognition and understanding modules, (2) combining the systems available in the different domains involved so that it is not necessary to generate new, expensive resources for the multi-domain system, and (3) achieving better domain recognition rates to select the appropriate interaction management strategies. We have evaluated our proposal by combining three systems in different domains to show that the proposed architecture can satisfactorily deal with multi-domain dialogs. (C) 2017 Elsevier B.V. All rights reserved. Work partially supported by projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02
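    Point (3) above amounts to routing each user turn to the single-domain system whose domain is recognised as most likely. The following sketch illustrates that routing idea under strong simplifying assumptions: the keyword-count scorer and the three toy domains are stand-ins for the recognition and management modules, not the systems evaluated in the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class SingleDomainSystem:
    name: str
    keywords: Set[str]
    handle_turn: Callable[[str], str]  # the domain's own dialogue manager

def score(system: SingleDomainSystem, utterance: str) -> int:
    """Crude domain-recognition score: count of domain keywords in the utterance."""
    return len(set(utterance.lower().split()) & system.keywords)

def route(utterance: str, systems: List[SingleDomainSystem]) -> str:
    """Forward the turn to the single-domain system with the highest score."""
    best = max(systems, key=lambda s: score(s, utterance))
    return best.handle_turn(utterance)

systems = [
    SingleDomainSystem("restaurants", {"restaurant", "food", "eat", "cuisine"},
                       lambda u: "Which cuisine would you like?"),
    SingleDomainSystem("hotels", {"hotel", "room", "stay", "nights"},
                       lambda u: "For how many nights?"),
    SingleDomainSystem("transport", {"bus", "train", "ticket", "travel"},
                       lambda u: "Where are you travelling to?"),
]
print(route("I want to eat at an Italian restaurant", systems))
```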

    Scaling up deep reinforcement learning for multi-domain dialogue systems

    Standard deep reinforcement learning methods such as Deep Q-Networks (DQN) for multiple tasks (domains) face scalability problems due to large search spaces. This paper proposes a three-stage method for multi-domain dialogue policy learning, termed NDQN, and applies it to an information-seeking spoken dialogue system in the domains of restaurants and hotels. In this method, the first stage performs multi-policy learning via a network of DQN agents; the second uses compact state representations obtained by compressing raw inputs; and the third applies a pre-training phase for bootstrapping the behaviour of the agents in the network. Experimental results from simulations comparing the DQN baseline against the proposed NDQN show that the proposed method exhibits better scalability and is promising for optimising the behaviour of multi-domain dialogue systems. An additional evaluation reports that the NDQN agents outperformed a K-Nearest Neighbour baseline in task success and dialogue length, yielding more efficient and successful dialogues.
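    The three stages map naturally onto a network of per-domain agents, a shared state-compression step, and a pre-training pass over seed dialogues. The skeleton below sketches that structure under stated assumptions (a linear Q-function stands in for the deep network, and a fixed random projection stands in for the learned compression); it is not the authors' implementation.

```python
import numpy as np

class DQNAgent:
    """Stand-in for a per-domain DQN agent (a linear Q-function replaces the deep net)."""
    def __init__(self, n_actions, state_dim):
        self.n_actions = n_actions
        self.W = np.zeros((n_actions, state_dim))

    def act(self, state, epsilon=0.1):
        """Epsilon-greedy action selection over the estimated Q-values."""
        if np.random.rand() < epsilon:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.W @ state))

    def update(self, state, action, target, lr=0.01):
        """One gradient step towards the target return for the chosen action."""
        self.W[action] += lr * (target - self.W[action] @ state) * state

def compress(raw_features, projection):
    """Stage 2: compact state representation (here, a fixed random projection)."""
    return projection @ raw_features

# Stage 1: a network of agents, one per domain.
state_dim, raw_dim = 16, 128
projection = np.random.randn(state_dim, raw_dim) / np.sqrt(raw_dim)
agents = {"restaurants": DQNAgent(n_actions=10, state_dim=state_dim),
          "hotels": DQNAgent(n_actions=10, state_dim=state_dim)}

# Stage 3: pre-train each agent on seed (raw_features, action, return) tuples
# before on-line reinforcement learning with users or a simulator.
def pretrain(agent, seed_data, projection):
    for raw, action, ret in seed_data:
        agent.update(compress(raw, projection), action, ret)
```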