
    Difference of Convex Functions Programming Applied to Control with Expert Data

    This paper reports applications of Difference of Convex functions (DC) programming to Learning from Demonstrations (LfD) and Reinforcement Learning (RL) with expert data. This is made possible because the norm of the Optimal Bellman Residual (OBR), which is at the heart of many RL and LfD algorithms, is DC. Improvements in performance are demonstrated on two specific algorithms, namely Reward-regularized Classification for Apprenticeship Learning (RCAL) and Reinforcement Learning with Expert Demonstrations (RLED), through experiments on generic Markov Decision Processes (MDPs) called Garnets.
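
    As a rough illustration of why the OBR norm is DC (a sketch of the standard convexity argument, not taken verbatim from the paper; the notation is the usual tabular one), the optimal Bellman operator is convex in Q, so the pointwise absolute residual splits into a difference of two convex functions:

        % Sketch: DC structure of the pointwise Optimal Bellman Residual for a tabular Q.
        % T_* is convex in Q because it is built from a pointwise max of linear maps of Q.
        \begin{align*}
        (T_* Q)(s,a) &= r(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, \max_{a'} Q(s',a'),\\
        \bigl|(T_* Q)(s,a) - Q(s,a)\bigr|
          &= 2\,\max\bigl\{(T_* Q)(s,a),\, Q(s,a)\bigr\} - \bigl((T_* Q)(s,a) + Q(s,a)\bigr),
        \end{align*}
        % a convex function minus a convex function; summing over state-action pairs
        % (an L1 norm of the OBR) preserves the DC decomposition, which is what makes
        % DC programming tools such as DCA applicable.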

    Is the Bellman residual a bad proxy?

    This paper aims at theoretically and empirically comparing two standard optimization criteria for Reinforcement Learning: i) maximization of the mean value and ii) minimization of the Bellman residual. For that purpose, we place ourselves in the framework of policy search algorithms, which are usually designed to maximize the mean value, and derive a method that minimizes the residual $\|T_* v_\pi - v_\pi\|_{1,\nu}$ over policies. A theoretical analysis shows how good a proxy this is for policy optimization, and notably that it is better than its value-based counterpart. We also propose experiments on randomly generated generic Markov decision processes, specifically designed for studying the influence of the involved concentrability coefficient. They show that the Bellman residual is generally a bad proxy for policy optimization and that directly maximizing the mean value is much better, despite the current lack of deep theoretical analysis. This might seem obvious, as directly addressing the problem of interest is usually better, but given the prevalence of (projected) Bellman residual minimization in value-based reinforcement learning, we believe this question is worth considering. (Comment: final NIPS 2017 version; the title, among other things, changed.)
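
    To make the two criteria concrete, here is a minimal tabular sketch (illustrative only, not the paper's experimental code; the MDP sizes, rewards and the distribution nu below are made up) that computes both the mean value and the Bellman residual norm for a fixed stochastic policy:

        import numpy as np

        # Minimal sketch: for a random tabular MDP and a stochastic policy pi, compute
        #   (i)  the mean value        E_{s ~ nu}[v_pi(s)]
        #   (ii) the Bellman residual  ||T_* v_pi - v_pi||_{1, nu}
        rng = np.random.default_rng(0)
        S, A, gamma = 20, 4, 0.95

        P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a] is a distribution over next states
        R = rng.normal(size=(S, A))                  # reward r(s, a)
        nu = np.full(S, 1.0 / S)                     # evaluation distribution nu
        pi = rng.dirichlet(np.ones(A), size=S)       # stochastic policy pi(a | s)

        # v_pi solves the linear system (I - gamma * P_pi) v = r_pi.
        P_pi = np.einsum("sa,san->sn", pi, P)
        r_pi = np.einsum("sa,sa->s", pi, R)
        v_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

        mean_value = nu @ v_pi                       # criterion (i)

        # Optimal Bellman operator: (T_* v)(s) = max_a [ r(s, a) + gamma * P(. | s, a) @ v ].
        T_star_v = (R + gamma * np.einsum("san,n->sa", P, v_pi)).max(axis=1)
        bellman_residual = nu @ np.abs(T_star_v - v_pi)  # criterion (ii)

        print(f"mean value       : {mean_value:.3f}")
        print(f"Bellman residual : {bellman_residual:.3f}")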

    Imitation Learning Applied to Embodied Conversational Agents

    Embodied Conversational Agents (ECAs) are emerging as a key component to allow humans to interact with machines. Applications are numerous, and ECAs can reduce the aversion to interacting with a machine by providing user-friendly interfaces. Yet, ECAs are still unable to produce social signals appropriately during their interaction with humans, which tends to make the interaction less natural. In particular, very little attention has been paid to the use of laughter in human-avatar interactions, despite the crucial role played by laughter in human-human interaction. In this paper, methods for predicting when and how an ECA should laugh during an interaction are proposed. Different Imitation Learning (also known as Apprenticeship Learning) algorithms are used for this purpose, and a regularized classification algorithm is shown to produce good behavior on real data.

    Boosted Bellman Residual Minimization Handling Expert Demonstrations

    This paper addresses the problem of batch Reinforcement Learning with Expert Demonstrations (RLED). In RLED, the goal is to find an optimal policy of a Markov Decision Process (MDP) using a data set of fixed sampled transitions of the MDP as well as a data set of fixed expert demonstrations. This is slightly different from the batch Reinforcement Learning (RL) framework, where only fixed sampled transitions of the MDP are available. The aim of this article is thus to propose algorithms that leverage those expert data. The idea proposed here differs from Approximate Dynamic Programming methods in that we minimize the Optimal Bellman Residual (OBR), with the minimization guided by constraints defined by the expert demonstrations. This choice is motivated by the fact that controlling the OBR implies controlling the distance between the estimated and optimal quality functions. However, this method presents some difficulties, as the criterion to minimize is non-convex, non-differentiable and biased. Those difficulties are overcome via the embedding of distributions in a Reproducing Kernel Hilbert Space (RKHS) and a boosting technique, which yields non-parametric algorithms. Finally, our algorithms are compared to the only state-of-the-art algorithm, Approximate Policy Iteration with Demonstrations (APID), in different experimental settings.
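
    The following simplified tabular sketch (not the paper's boosted, RKHS-embedded algorithm; the transitions, expert pairs and hyper-parameters are made up) conveys the general idea of minimizing an empirical OBR while expert demonstrations act as large-margin guidance:

        import numpy as np

        # Simplified sketch: subgradient descent on a tabular Q for
        #   sum_transitions |r + gamma * max_a' Q(s', a') - Q(s, a)|                 (empirical OBR)
        #   + lam * sum_expert max(0, 1 + max_{a != a_E} Q(s_E, a) - Q(s_E, a_E))    (margin term)
        rng = np.random.default_rng(1)
        S, A, gamma, lam, lr = 10, 3, 0.9, 1.0, 0.1

        trans = [(rng.integers(S), rng.integers(A), rng.normal(), rng.integers(S)) for _ in range(200)]
        expert = [(rng.integers(S), rng.integers(A)) for _ in range(20)]

        Q = np.zeros((S, A))
        for _ in range(500):
            G = np.zeros_like(Q)
            # Subgradient of the (non-convex, non-differentiable) empirical OBR term.
            for s, a, r, s2 in trans:
                a2 = int(Q[s2].argmax())
                sign = np.sign(r + gamma * Q[s2, a2] - Q[s, a])
                G[s2, a2] += sign * gamma
                G[s, a] -= sign
            # Subgradient of the large-margin penalty around each expert action.
            for sE, aE in expert:
                rival = int(np.argmax(np.where(np.arange(A) == aE, -np.inf, Q[sE])))
                if 1.0 + Q[sE, rival] - Q[sE, aE] > 0:
                    G[sE, rival] += lam
                    G[sE, aE] -= lam
            Q -= lr * G / len(trans)

        print("greedy policy:", Q.argmax(axis=1))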

    Predicting when to laugh with structured classification

    Today, Embodied Conversational Agents (ECAs) are emerging as natural media for interacting with machines. Applications are numerous, and ECAs can reduce the technological gap between people by providing user-friendly interfaces. Yet, ECAs are still unable to produce social signals appropriately during their interaction with humans, which tends to make the interaction less natural. In particular, very little attention has been paid to the use of laughter in human-avatar interactions, despite the crucial role played by laughter in human-human interaction. In this paper, a method for predicting the most appropriate moment for an ECA to laugh is proposed. Imitation learning via a structured classification algorithm is used for this purpose and is shown to produce behavior similar to humans' on a practical application: the yes/no game.
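
    A heavily simplified sketch of the underlying idea (illustrative only; the actual system relies on structured classification over multimodal interaction features, whereas the context features and labelling rule below are invented) is to cast "should the agent laugh now?" as regularized classification learned from human labels:

        import numpy as np

        # Sketch: L2-regularized logistic regression mapping an interaction context
        # (e.g. prosody, pauses, partner laughter -- all invented here) to a laugh decision.
        rng = np.random.default_rng(4)
        N, D, lam, lr = 500, 6, 1e-2, 0.1

        X = rng.normal(size=(N, D))                                   # made-up context features
        y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=N) > 0)  # made-up expert "laugh" labels
        y = y.astype(float)

        w = np.zeros(D)
        for _ in range(300):                       # plain gradient descent on the regularized loss
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            grad = X.T @ (p - y) / N + lam * w
            w -= lr * grad

        def should_laugh(context):
            """Decide whether to trigger laughter for a new context vector."""
            return float(context @ w) > 0.0

        print("training accuracy:", np.mean(((X @ w) > 0) == (y == 1)))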

    Apprentissage par démonstrations : vaut-il la peine d'estimer une fonction de récompense?

    This article provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, built on the concept of a Markov Decision Process (MDP), in which we seek to solve the Learning from Demonstrations (LfD) problem. LfD is a problem in which an agent, called the apprentice, tries to learn from observing the demonstrations of another agent, called the expert. In the AL framework, the apprentice tries to learn the expert's policy directly, whereas in the IRL framework, the apprentice tries to learn the reward that explains the expert's policy. This reward is then optimized in order to imitate the expert. One can therefore legitimately ask whether there is any benefit in estimating a reward that must then be optimized, or whether estimating a policy is sufficient. This rather natural question has not yet really been addressed in the literature. Here, partial answers are provided, from both a theoretical and a practical point of view. Keywords: Inverse Reinforcement Learning, Apprenticeship Learning, Learning from Demonstrations

    Learning from Demonstrations: Is It Worth Estimating a Reward Function?

    This paper provides a comparative study between Inverse Reinforcement Learning (IRL) and Apprenticeship Learning (AL). IRL and AL are two frameworks, based on Markov Decision Processes (MDPs), for the imitation learning problem, in which an agent tries to learn from demonstrations of an expert. In the AL framework, the agent tries to learn the expert's policy, whereas in the IRL framework, the agent tries to learn a reward that explains the behavior of the expert. This reward is then optimized to imitate the expert. One can wonder whether it is worth estimating such a reward, or whether estimating a policy is sufficient. This quite natural question has not really been addressed in the literature so far. We provide partial answers, from both a theoretical and an empirical point of view.
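
    As a toy contrast between the two routes (purely illustrative; neither branch matches the algorithms analysed in the paper, and the MDP and demonstrations are made up), one can compare direct policy estimation from expert pairs with a naive "estimate a reward, then plan" pipeline:

        import numpy as np

        # Toy comparison on a random tabular MDP with known dynamics P.
        rng = np.random.default_rng(2)
        S, A, gamma = 8, 3, 0.9
        P = rng.dirichlet(np.ones(S), size=(S, A))                         # transition kernel P[s, a]
        expert = [(rng.integers(S), rng.integers(A)) for _ in range(100)]  # made-up demonstrations

        # Route 1 (AL-like): estimate the policy directly by majority vote of expert actions.
        counts = np.zeros((S, A))
        for s, a in expert:
            counts[s, a] += 1
        pi_AL = counts.argmax(axis=1)            # unvisited states default to action 0

        # Route 2 (IRL-like, naive): estimate a reward from the same data, then plan with it.
        R_hat = counts / max(counts.sum(), 1.0)
        Q = np.zeros((S, A))
        for _ in range(200):                     # value iteration under the estimated reward
            Q = R_hat + gamma * P @ Q.max(axis=1)
        pi_IRL = Q.argmax(axis=1)

        print("AL  policy:", pi_AL)
        print("IRL policy:", pi_IRL)

    The extra planning step in the second route is precisely the overhead whose worth the paper questions.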

    End-to-end optimization of goal-driven and visually grounded dialogue systems

    End-to-end design of dialogue systems has recently become a popular research topic thanks to powerful tools such as encoder-decoder architectures for sequence-to-sequence learning. Yet, most current approaches cast human-machine dialogue management as a supervised learning problem, aiming at predicting the next utterance of a participant given the full history of the dialogue. This view is too simplistic to capture the intrinsic planning problem inherent to dialogue, as well as its grounded nature, which makes the context of a dialogue larger than the history alone. This is why only chit-chat and question-answering tasks have been addressed so far using end-to-end architectures. In this paper, we introduce a Deep Reinforcement Learning method, based on the policy gradient algorithm, to optimize visually grounded task-oriented dialogues. This approach is tested on a dataset of 120k dialogues collected through Mechanical Turk and provides encouraging results at solving both the problem of generating natural dialogues and the task of discovering a specific object in a complex picture.
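
    A minimal REINFORCE sketch (illustrative only; the actual system trains encoder-decoder networks on real dialogues, whereas the linear softmax policy and the toy "pick the right object" task below are invented) shows the policy-gradient update at the core of the approach:

        import numpy as np

        # Sketch: a softmax policy picks one of K candidate objects from a context vector and
        # receives reward 1 only when it picks the hidden correct one, echoing the task of
        # discovering a specific object. Updated with REINFORCE (no baseline, for brevity).
        rng = np.random.default_rng(3)
        K, D, lr = 5, 8, 0.5
        W = np.zeros((K, D))                      # policy parameters

        def softmax(z):
            z = z - z.max()
            e = np.exp(z)
            return e / e.sum()

        for episode in range(2000):
            x = rng.normal(size=D)                # stand-in for the grounded dialogue state
            target = int(x[:K].argmax())          # hidden correct object (invented rule)
            probs = softmax(W @ x)
            a = rng.choice(K, p=probs)
            reward = 1.0 if a == target else 0.0
            # grad log pi(a | x) = (one_hot(a) - probs) outer x
            grad_logp = (np.eye(K)[a] - probs)[:, None] * x[None, :]
            W += lr * reward * grad_logp

        hits = 0
        for _ in range(500):                      # quick check of the learned greedy policy
            x = rng.normal(size=D)
            hits += int((W @ x).argmax() == int(x[:K].argmax()))
        print("greedy accuracy:", hits / 500)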
