
    Learning dialogue POMDP model components from expert dialogues

    Spoken dialogue systems should recognize user intentions and maintain a natural and efficient dialogue with users. This is, however, a difficult task, as spoken language is naturally ambiguous and uncertain, and the automatic speech recognition (ASR) output is noisy. In addition, the human user may change his or her intention during the interaction with the machine. To tackle this task, the partially observable Markov decision process (POMDP) framework has been applied in dialogue systems as a formal framework that represents uncertainty explicitly while supporting automated policy solving. In this context, estimating the dialogue POMDP model components is a significant challenge, as they have a direct impact on the optimized dialogue POMDP policy. This thesis proposes methods for learning dialogue POMDP model components from noisy and unannotated dialogues. Specifically, we introduce techniques to learn the set of possible user intentions from dialogues, use them as the dialogue POMDP states, and learn a maximum likelihood POMDP transition model from data. Since it is crucial to reduce the observation state size, we then propose two observation models: the keyword model and the intention model. With these two models, the number of observations is reduced significantly while the POMDP performance remains high, particularly for the intention POMDP. In addition to these model components, POMDPs also require a reward function, so we propose new algorithms for learning the POMDP reward model from dialogues based on inverse reinforcement learning (IRL). In particular, we propose the POMDP-IRL-BT algorithm (BT for belief transition), which works on the belief states available in the dialogues. This algorithm learns the reward model by estimating a belief transition model, analogous to MDP (Markov decision process) transition models. Finally, we apply the proposed methods to a healthcare domain and learn a dialogue POMDP essentially from real, unannotated, and noisy dialogues.
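    Two of the components named above lend themselves to a compact illustration: the maximum likelihood transition model counted from dialogue data, and the belief tracking over corpus dialogues that POMDP-IRL-BT relies on. The sketch below is not the thesis code; the state, action, and observation spaces and the helper names (ml_transition_model, belief_update) are invented for illustration.

```python
# A minimal sketch, not the thesis code: (1) a maximum-likelihood transition
# model counted from (state, action, next state) triples in dialogues, and
# (2) the POMDP belief update that belief tracking over a corpus relies on.
import numpy as np

def ml_transition_model(episodes, n_states, n_actions):
    """Estimate T[s, a, s'] = P(s' | s, a) by normalized counts."""
    counts = np.zeros((n_states, n_actions, n_states))
    for episode in episodes:                  # episode: [(s, a, s'), ...]
        for s, a, s_next in episode:
            counts[s, a, s_next] += 1
    totals = counts.sum(axis=2, keepdims=True)
    totals[totals == 0] = 1.0                 # avoid 0/0; unseen (s, a) rows stay zero
    return counts / totals

def belief_update(belief, T, O, a, o):
    """b'(s') ∝ O[s', a, o] * Σ_s T[s, a, s'] b(s)."""
    predicted = belief @ T[:, a, :]           # prediction through the transition model
    updated = O[:, a, o] * predicted          # correction by the observation model
    return updated / updated.sum()

# Toy usage: 2 user intentions, 2 actions, 2 observations (all hypothetical).
episodes = [[(0, 0, 0), (0, 1, 1)], [(1, 0, 1)]]
T = ml_transition_model(episodes, n_states=2, n_actions=2)
O = np.full((2, 2, 2), 0.5)                   # placeholder observation model
print(belief_update(np.array([0.5, 0.5]), T, O, a=0, o=1))
```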

    An Approach for Intention-Driven, Dialogue-Based Web Search

    Web search engines facilitate the achievement of Web-mediated tasks, including information retrieval, Web page navigation, and online transactions. These tasks often involve goals that pertain to multiple topics, or domains. Current search engines are not suitable for satisfying complex, multi-domain needs due to their lack of interactivity and knowledge. This thesis presents a novel intention-driven, dialogue-based Web search approach that uncovers and combines users' multi-domain goals to provide helpful virtual assistance. The intention discovery procedure uses a hierarchy of Partially Observable Markov Decision Process-based dialogue managers and a backing knowledge base to systematically explore the dialogue's information space, probabilistically refining the perception of user goals. The search approach has been implemented in IDS, a search engine for online gift shopping. A usability study comparing IDS-based searching with Google-based searching found that the IDS-based approach takes significantly less time and effort, and results in higher user confidence in the retrieved results.
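    As a rough illustration of the "probabilistically refining the perception of user goals" step, the sketch below performs one Bayesian re-weighting of candidate goals per user turn. The goals, keywords, and likelihood values are all invented; IDS's actual hierarchy of POMDP dialogue managers is far richer than this single update rule.

```python
# A minimal sketch, not IDS itself: each turn's observed keyword re-weights
# a belief over candidate multi-domain goals via Bayes' rule.
likelihood = {
    # P(observed keyword | goal); purely illustrative numbers
    ("buy_gift", "flowers"): 0.6,
    ("buy_gift", "price"): 0.3,
    ("find_info", "flowers"): 0.2,
    ("find_info", "price"): 0.5,
}

def refine(belief, keyword):
    """One Bayesian refinement step: b'(g) ∝ P(keyword | g) * b(g)."""
    posterior = {g: likelihood.get((g, keyword), 0.05) * p for g, p in belief.items()}
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

belief = {"buy_gift": 0.5, "find_info": 0.5}
for kw in ["flowers", "price"]:          # two user turns
    belief = refine(belief, kw)
print(belief)                            # belief shifts toward "buy_gift"
```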

    Cognitive User Interfaces


    The Dialog State Tracking Challenge Series: A Review

    In a spoken dialog system, dialog state tracking refers to the task of correctly inferring the state of the conversation -- such as the user's goal -- given all of the dialog history up to that turn. Dialog state tracking is crucial to the success of a dialog system, yet until recently there were no common resources, which hampered progress. The Dialog State Tracking Challenge series of three tasks introduced the first shared testbed and evaluation metrics for dialog state tracking, and has underpinned three key advances: the move from generative to discriminative models; the adoption of discriminative sequential techniques; and the incorporation of speech recognition results directly into the dialog state tracker. This paper reviews this research area, covering both the challenge tasks themselves and the work they have enabled.
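    To make the "move from generative to discriminative models" concrete, here is a minimal sketch of a discriminative tracker: each candidate user goal is scored by a linear model over hand-picked features of the dialog history, and a softmax gives the state distribution. The feature set, the weights, and the slot values are invented; real challenge entries learn such weights from labeled dialogs.

```python
# A minimal sketch of a discriminative state tracker: hand-picked features
# per goal hypothesis, a linear scoring model, and a softmax over goals.
# The weights below are made up; a real tracker learns them from data.
import math

WEIGHTS = {"max_conf": 2.0, "times_seen": 0.5, "bias": -1.0}  # illustrative

def track(turns):
    """turns: list of SLU n-best lists, each [(goal, confidence), ...]."""
    feats = {}
    for nbest in turns:
        for goal, conf in nbest:
            f = feats.setdefault(goal, {"max_conf": 0.0, "times_seen": 0.0})
            f["max_conf"] = max(f["max_conf"], conf)
            f["times_seen"] += 1.0
    scores = {g: WEIGHTS["bias"] + sum(WEIGHTS[k] * v for k, v in f.items())
              for g, f in feats.items()}
    z = sum(math.exp(s) for s in scores.values())
    return {g: math.exp(s) / z for g, s in scores.items()}

# Two noisy turns: ASR first hears "indian" weakly, then more confidently.
print(track([[("food=indian", 0.4), ("food=italian", 0.3)],
             [("food=indian", 0.7)]]))
```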

    Strategic Argumentation Dialogues for Persuasion: Framework and Experiments Based on Modelling the Beliefs and Concerns of the Persuadee

    Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is good in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on the Monte Carlo Tree Search which allows optimization in real-time. We provide empirical results of a study with human participants showing that our automated persuasion system based on this technology is superior to a baseline system that does not take the beliefs and concerns into account in its strategy.
    Comment: The Data Appendix containing the arguments, argument graphs, assignment of concerns to arguments, preferences over concerns, and assignment of beliefs to arguments is available at the link http://www0.cs.ucl.ac.uk/staff/a.hunter/papers/unistudydata.zip The code is available at https://github.com/ComputationalPersuasion/MCC

    Strategic argumentation dialogues for persuasion: Framework and experiments based on modelling the beliefs and concerns of the persuadee

    Persuasion is an important and yet complex aspect of human intelligence. When undertaken through dialogue, the deployment of good arguments, and therefore counterarguments, clearly has a significant effect on the ability to be successful in persuasion. Two key dimensions for determining whether an argument is 'good' in a particular dialogue are the degree to which the intended audience believes the argument and counterarguments, and the impact that the argument has on the concerns of the intended audience. In this paper, we present a framework for modelling persuadees in terms of their beliefs and concerns, and for harnessing these models in optimizing the choice of move in persuasion dialogues. Our approach is based on the Monte Carlo Tree Search which allows optimization in real-time. We provide empirical results of a study with human participants that compares an automated persuasion system based on this technology with a baseline system that does not take the beliefs and concerns into account in its strategy.
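    The move-selection machinery both versions of this paper rely on is Monte Carlo Tree Search. The sketch below shows only its root-level ingredient: UCB1 selection over candidate arguments with simulated rollouts against a toy persuadee. The argument names and persuasion probabilities are invented, and the single-level tree is a deliberate simplification of the authors' full search.

```python
# A minimal sketch of root-level MCTS move selection: UCB1 chooses which
# argument to try next, a toy persuadee model supplies rollout rewards, and
# the most-visited argument is played. Names and probabilities are invented.
import math
import random

ARGS = ["safety_record", "cost_savings", "peer_adoption"]  # hypothetical moves

def rollout(first_move):
    """Toy persuadee: 'cost_savings' persuades 70% of the time, others 25%."""
    p = 0.7 if first_move == "cost_savings" else 0.25
    return 1.0 if random.random() < p else 0.0

def choose_move(n_iter=2000, c=1.4):
    visits = {a: 0 for a in ARGS}
    value = {a: 0.0 for a in ARGS}
    for i in range(1, n_iter + 1):
        # UCB1: exploit average reward, explore rarely tried arguments.
        a = max(ARGS, key=lambda m: float("inf") if visits[m] == 0
                else value[m] / visits[m] + c * math.sqrt(math.log(i) / visits[m]))
        value[a] += rollout(a)
        visits[a] += 1
    return max(ARGS, key=visits.get)

print(choose_move())   # almost always "cost_savings"
```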

    Improved Intention Discovery with Classified Emotions in A Modified POMDP

    Emotions are among the most actively debated topics in psychology, a source of vigorous discussion from the earliest philosophers to the present day, and human emotion classification using machine learning techniques has been an active area of research over the last decade. This investigation discusses a new approach for virtual agents to better understand and interact with the user. Our research focuses on deducing the belief state of a user who interacts with a single agent, using emotions recognized from text- or speech-based input. We built a customized decision tree that recognizes six primary emotions from different sets of inputs. The belief state at each time slice is inferred by drawing a belief network over the recognized emotions and computing the state of belief using a POMDP (Partially Observable Markov Decision Process) based solver. The existing POMDP model is thus customized to incorporate emotions as observations for finding possible user intentions, which helps overcome the limitations of present methods in recognizing the belief state. The new approach also allows us to analyze human emotional behaviour in uncertain environments and helps to generate effective interaction between the human and the computer.
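    A small sketch of the paper's central move, with all numbers invented: treat the recognized emotion as the POMDP observation and re-weight the belief over user intentions with P(emotion | intention). The decision-tree classifier is stubbed out, and the intention set and observation probabilities are hypothetical.

```python
# A minimal sketch, assuming invented numbers: a recognized emotion serves as
# the POMDP observation, and the belief over user intentions is re-weighted
# by P(emotion | intention). The emotion classifier is a stand-in stub for
# the customized decision tree over six primary emotions.
EMOTIONS = ["joy", "anger", "sadness", "fear", "surprise", "disgust"]

# P(observed emotion | user intention); illustrative values only
OBS = {
    "wants_help":    {"sadness": 0.4, "fear": 0.3, "anger": 0.1},
    "wants_refund":  {"anger": 0.5, "disgust": 0.2, "sadness": 0.1},
    "just_browsing": {"joy": 0.4, "surprise": 0.3},
}

def classify_emotion(text):
    """Stub for the decision-tree classifier described in the abstract."""
    return "anger" if "!" in text else "joy"

def update_belief(belief, emotion):
    post = {i: OBS[i].get(emotion, 0.05) * p for i, p in belief.items()}
    z = sum(post.values())
    return {i: p / z for i, p in post.items()}

belief = {i: 1 / 3 for i in OBS}
belief = update_belief(belief, classify_emotion("This is the third time!"))
print(belief)     # probability mass shifts toward "wants_refund"
```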

    Model-based reinforcement learning: A survey

    Reinforcement learning is an important branch of machine learning and artificial intelligence. Compared with traditional model-free reinforcement learning, model-based reinforcement learning uses a learned model of the environment to predict the next state, and then optimizes the policy against that model, which greatly improves data efficiency. Based on the current state of research on model-based reinforcement learning, this paper comprehensively reviews its key techniques, summarizes the characteristics, advantages, and defects of each, and analyzes applications of model-based reinforcement learning in games, robotics, and brain science.
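    The core loop the survey covers fits in a short sketch: collect experience, fit a maximum-likelihood model of the dynamics, and plan on the learned model. The three-state chain MDP below is invented for brevity, and value iteration stands in for the many planners the survey discusses.

```python
# A minimal model-based RL sketch: sample transitions, fit a tabular
# maximum-likelihood model of dynamics and rewards, then plan on the
# learned model with value iteration. The chain MDP is invented.
import random

import numpy as np

N_S, N_A, GAMMA = 3, 2, 0.9

def true_step(s, a):                       # hidden environment (chain MDP)
    s2 = min(s + 1, N_S - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == N_S - 1 else 0.0

# 1) Sample random (state, action) pairs and record their outcomes.
counts = np.zeros((N_S, N_A, N_S))
rewards = np.zeros((N_S, N_A))
for _ in range(500):
    s, a = random.randrange(N_S), random.randrange(N_A)
    s2, r = true_step(s, a)
    counts[s, a, s2] += 1
    rewards[s, a] += r

# 2) Fit the maximum-likelihood model (normalized counts, mean rewards).
totals = counts.sum(axis=2, keepdims=True).clip(min=1)
T = counts / totals
R = rewards / totals[:, :, 0]

# 3) Plan on the learned model with value iteration.
V = np.zeros(N_S)
for _ in range(100):
    V = (R + GAMMA * T @ V).max(axis=1)
policy = (R + GAMMA * T @ V).argmax(axis=1)
print(policy)                              # learned: always move right -> [1 1 1]
```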