9 research outputs found

    Neural Fitted Actor-Critic

    Get PDF
    A novel reinforcement learning algorithm that handles both continuous state and action spaces is proposed. Domain knowledge requirements are kept minimal by using non-linear estimators and by not requiring prior trajectories or known goal states. The new actor-critic algorithm is on-policy, offline, and model-free. It considers discrete time and stationary policies, and maximizes the discounted sum of rewards. Experimental results on two common environments, showing the good performance of the proposed algorithm, are presented.
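    The abstract names the building blocks (a neural critic and actor, discrete time, a discounted return, batches of on-policy transitions) without giving the update equations, so what follows is only a minimal sketch of a generic fitted critic under those constraints; the class and function names are hypothetical and do not come from the paper.

```python
# Illustrative sketch only -- not the published NFAC procedure.
# A neural critic fitted offline on a batch of on-policy transitions
# (continuous states and actions, discrete time, discounted return).
import torch
import torch.nn as nn

GAMMA = 0.99  # discount factor of the sum of rewards

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                         nn.Linear(hidden, out_dim))

class FittedCritic:
    def __init__(self, state_dim):
        self.v = mlp(state_dim, 1)  # state-value network V(s)
        self.opt = torch.optim.Adam(self.v.parameters(), lr=1e-3)

    def fit(self, s, r, s_next, done, epochs=20):
        """Regress V(s) toward the bootstrapped target r + gamma * V(s')."""
        for _ in range(epochs):
            with torch.no_grad():
                target = r + GAMMA * (1.0 - done) * self.v(s_next).squeeze(-1)
            loss = nn.functional.mse_loss(self.v(s).squeeze(-1), target)
            self.opt.zero_grad()
            loss.backward()
            self.opt.step()

    def td_error(self, s, r, s_next, done):
        """TD error of a transition, usable as an advantage estimate for the actor."""
        with torch.no_grad():
            return (r + GAMMA * (1.0 - done) * self.v(s_next).squeeze(-1)
                    - self.v(s).squeeze(-1))
```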

    Off-Policy Neural Fitted Actor-Critic

    Get PDF
    A new off-policy, offline, model-free, actor-critic reinforcement learning algorithm dealing with environments that are continuous in both states and actions is presented. It addresses discrete-time problems where the goal is to maximize the discounted sum of rewards using stationary policies. Our algorithm makes it possible to trade off data efficiency against scalability. The amount of a priori knowledge is kept low by: (1) using neural networks to learn both the critic and the actor, (2) not relying on initial trajectories provided by an expert, and (3) not depending on known goal states. Experimental results compare data efficiency against four state-of-the-art algorithms on three benchmark environments. This article builds on previous work [34] by adding a higher-dimensional environment, improving the control architectures, and providing batch normalization for the other state-of-the-art algorithms.
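    The off-policy and offline aspects described above amount to reusing transitions generated by earlier behavior policies rather than only the latest rollouts. The sketch below shows one standard way to hold and resample such data (a replay buffer); it is an assumption for illustration, not the data structure used in the paper.

```python
# Illustrative sketch: storing transitions from earlier behavior policies and
# sampling mini-batches from them for critic/actor updates.
# Hypothetical names, not taken from the paper.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.storage = deque(maxlen=capacity)  # oldest transitions are dropped first

    def add(self, state, action, reward, next_state, done):
        self.storage.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        batch = random.sample(self.storage, min(batch_size, len(self.storage)))
        # Transpose the list of transitions into per-field tuples.
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones
```

    Under this reading, the data-efficiency/scalability trade-off mentioned in the abstract corresponds to choosing how many update steps are taken on such resampled batches per environment step.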

    Exploiting the Sign of the Advantage Function to Learn Deterministic Policies in Continuous Domains

    Full text link
    In the context of learning deterministic policies in continuous domains, we revisit an approach that was first proposed in the Continuous Actor Critic Learning Automaton (CACLA) and later extended in the Neural Fitted Actor Critic (NFAC). This approach is based on a policy update different from that of the deterministic policy gradient (DPG). Previous work has observed its excellent performance empirically, but a theoretical justification is lacking. To fill this gap, we provide a theoretical explanation that motivates this unorthodox policy update by relating it to another update and making explicit the objective function of the latter. We furthermore discuss the properties of these updates in depth to gain a deeper understanding of the overall approach. In addition, we extend it and propose a new trust-region algorithm, Penalized NFAC (PeNFAC). Finally, we experimentally demonstrate on several classic control problems that it surpasses state-of-the-art algorithms for learning deterministic policies. Comment: International Joint Conferences on Artificial Intelligence.
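    The update analyzed here moves a deterministic actor toward an executed action only when that action's advantage estimate is positive, which is the CACLA idea that NFAC builds on. The snippet below illustrates that sign-based filtering with assumed names; it omits the trust-region penalty of PeNFAC and is not the authors' implementation.

```python
# Illustrative sketch of a CACLA/NFAC-style actor update: regress the
# deterministic policy pi(s) toward the explored action a only on transitions
# whose advantage estimate is positive (e.g. a positive TD error).
# Hypothetical code; PeNFAC's trust-region penalty is omitted.
import torch
import torch.nn as nn

def sign_filtered_actor_update(actor, optimizer, states, actions, advantages):
    """actor: nn.Module mapping states to actions (a deterministic policy).
    states, actions: explored transitions; advantages: A(s, a) estimates."""
    mask = advantages > 0                       # keep only beneficial explorations
    if mask.sum() == 0:
        return 0.0                              # nothing to move toward in this batch
    # Squared-error regression of pi(s) toward a on the selected transitions,
    # instead of following the deterministic policy gradient (DPG).
    loss = nn.functional.mse_loss(actor(states[mask]), actions[mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```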

    Exploiting the sign of the advantage function to learn deterministic policies in continuous domains

    Get PDF
    In the context of learning deterministic policies in continuous domains, we revisit an approach that was first proposed in the Continuous Actor Critic Learning Automaton (CACLA) and later extended in the Neural Fitted Actor Critic (NFAC). This approach is based on a policy update different from that of the deterministic policy gradient (DPG). Previous work has observed its excellent performance empirically, but a theoretical justification is lacking. To fill this gap, we provide a theoretical explanation that motivates this unorthodox policy update by relating it to another update and making explicit the objective function of the latter. We furthermore discuss the properties of these updates in depth to gain a deeper understanding of the overall approach. In addition, we extend it and propose a new trust-region algorithm, Penalized NFAC (PeNFAC). Finally, we experimentally demonstrate on several classic control problems that it surpasses state-of-the-art algorithms for learning deterministic policies.

    Toward a data efficient neural actor-critic

    Get PDF
    A new off-policy, offline, model-free, actor-critic reinforcement learning algorithm dealing with environments that are continuous in both states and actions is presented. It addresses discrete-time problems where the goal is to maximize the discounted sum of rewards using stationary policies. Our algorithm makes it possible to trade off data efficiency against scalability. The amount of a priori knowledge is kept low by: (1) using neural networks to learn both the critic and the actor, (2) not relying on initial trajectories provided by an expert, and (3) not depending on known goal states. Experimental results show better data efficiency than four state-of-the-art algorithms on two benchmark environments.

    Vers des architectures acteur-critique neuronales efficaces en données

    Get PDF
    A new reinforcement learning algorithm, handling both continuous state and action spaces, is proposed. It does not require prior knowledge of goal states or of good trajectories to operate. The need for expert knowledge of the application domain is kept minimal thanks to the use of non-linear estimators: neural networks. This new algorithm is on-policy, offline, and model-free. It produces stationary, deterministic, discrete-time policies by maximizing the discounted sum of rewards. Experimental results showing the good performance of the algorithm are presented on two deterministic environments: acrobot and cartpole.

    Relative Importance Sampling For Off-Policy Actor-Critic in Deep Reinforcement Learning

    Full text link
    Off-policy learning is more unstable than on-policy learning in reinforcement learning (RL). One reason for this instability is the discrepancy between the target (π) and behavior (b) policy distributions. This discrepancy can be alleviated by employing a smooth variant of importance sampling (IS), such as relative importance sampling (RIS). RIS has a parameter β ∈ [0, 1] which controls the smoothness. To cope with the instability, we present the first relative importance sampling off-policy actor-critic (RIS-Off-PAC) model-free algorithms in RL. In our method, the network yields a target policy (the actor) and a value function (the critic) assessing the current policy π, using samples drawn from the behavior policy. We use the action value generated from the behavior policy, rather than from the target policy, in the reward function to train our algorithm. We also use deep neural networks to train both the actor and the critic. We evaluate our algorithm on a number of OpenAI Gym benchmark problems and demonstrate better or comparable performance to several state-of-the-art RL baselines.
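    The abstract does not spell out the RIS weight itself. One common way to define a smoothed (relative) importance weight is the relative density ratio π / (βπ + (1 − β)b); whether this matches the paper's exact parameterization is an assumption, so the snippet below is only an illustration of the smoothing role of β.

```python
# Illustrative sketch of a smoothed (relative) importance weight.
# Assumes the relative density ratio pi / (beta * pi + (1 - beta) * b),
# which may differ from the parameterization used in RIS-Off-PAC.

def relative_importance_weight(pi_prob, b_prob, beta=0.5):
    """pi_prob: probability of the action under the target policy pi.
    b_prob: probability of the same action under the behavior policy b.
    beta = 0 recovers the ordinary ratio pi / b; beta = 1 flattens every weight to 1."""
    return pi_prob / (beta * pi_prob + (1.0 - beta) * b_prob)

# An action that is rare under b gets a bounded weight instead of an exploding one.
print(relative_importance_weight(pi_prob=0.4, b_prob=0.01, beta=0.0))  # 40.0
print(relative_importance_weight(pi_prob=0.4, b_prob=0.01, beta=0.5))  # ~1.95
```

    With this form the weight can never exceed 1/β, which is what tames the variance of the off-policy updates as β grows.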

    Apprentissage par renforcement développemental

    No full text
    Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the different consequences of its actions through interaction: it learns from its own experience, without pre-established knowledge of the goals or of the effects of its actions. This thesis examines how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom, in order to solve problems closer to reality. Indeed, neural networks scale well and have strong representational power: they make it possible to approximate functions over continuous spaces and support a developmental approach, because they require little a priori knowledge of the domain. We seek to reduce the amount of interaction the agent needs to reach an acceptable behavior. To do so, we propose the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We then examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by gradually increasing the dimensionality of its sensorimotor space.

    Developmental reinforcement learning

    No full text
    Reinforcement learning allows an agent to learn a behavior that has never been previously defined by humans. The agent discovers the environment and the different consequences of its actions through interaction: it learns from its own experience, without pre-established knowledge of the goals or of the effects of its actions. This thesis examines how deep learning can help reinforcement learning handle continuous spaces and environments with many degrees of freedom, in order to solve problems closer to reality. Indeed, neural networks scale well and have strong representational power: they make it possible to approximate functions over continuous spaces and support a developmental approach, because they require little a priori knowledge of the domain. We seek to reduce the amount of interaction the agent needs to reach an acceptable behavior. To do so, we propose the Neural Fitted Actor-Critic framework, which defines several data-efficient actor-critic algorithms. We then examine how the agent can fully exploit the transitions generated by previous behaviors by integrating off-policy data into the proposed framework. Finally, we study how the agent can learn faster by taking advantage of the development of its body, in particular by gradually increasing the dimensionality of its sensorimotor space.
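    The last point, growing the sensorimotor space during learning, can be pictured as progressively unmasking state and action dimensions. The snippet below is only an illustration of that idea under invented names; the thesis is not claimed to use this exact interface or schedule.

```python
# Illustrative sketch (invented names): a developmental schedule that exposes
# a growing subset of the sensorimotor dimensions, so that early learning
# happens in a smaller, easier space.
import numpy as np

class DevelopmentalMask:
    def __init__(self, full_dim, initial_dim, growth_every=10_000):
        self.full_dim = full_dim
        self.active_dim = initial_dim      # dimensions currently exposed to the agent
        self.growth_every = growth_every   # environment steps between each growth
        self.steps = 0

    def step(self):
        """Call once per environment step; occasionally unmask one more dimension."""
        self.steps += 1
        if self.steps % self.growth_every == 0 and self.active_dim < self.full_dim:
            self.active_dim += 1

    def __call__(self, vector):
        """Zero out the dimensions the agent has not 'developed' yet."""
        masked = np.zeros(self.full_dim)
        masked[:self.active_dim] = np.asarray(vector)[:self.active_dim]
        return masked
```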