Boosted Bellman Residual Minimization Handling Expert Demonstrations
This paper addresses the problem of batch Reinforcement Learning with Expert Demonstrations (RLED). In RLED, the goal is to find an optimal policy of a Markov Decision Process (MDP), using a data set of fixed sampled transitions of the MDP as well as a data set of fixed expert demonstrations. This is slightly different from the batch Reinforcement Learning (RL) framework, where only fixed sampled transitions of the MDP are available. Thus, the aim of this article is to propose algorithms that leverage those expert data. The idea proposed here differs from Approximate Dynamic Programming methods in the sense that we minimize the Optimal Bellman Residual (OBR), where the minimization is guided by constraints defined by the expert demonstrations. This choice is motivated by the fact that controlling the OBR implies controlling the distance between the estimated and optimal quality functions. However, this method presents some difficulties, as the criterion to minimize is non-convex, non-differentiable and biased. Those difficulties are overcome via the embedding of distributions in a Reproducing Kernel Hilbert Space (RKHS) and a boosting technique which allows obtaining non-parametric algorithms. Finally, our algorithms are compared to the only state-of-the-art algorithm, Approximate Policy Iteration with Demonstrations (APID), in different experimental settings.
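The objective sketched in the abstract can be illustrated with a minimal tabular example: the empirical squared Optimal Bellman Residual over sampled transitions, plus a hinge penalty encouraging expert actions to dominate by a margin. The exact constraint form, the names `obr_expert_loss`, `margin`, and `lam`, and the tabular setting are illustrative assumptions, not the paper's RKHS/boosting formulation.

```python
import numpy as np

def obr_expert_loss(Q, transitions, expert_pairs, gamma=0.9, margin=1.0, lam=0.5):
    """Empirical squared Optimal Bellman Residual plus a hinge penalty
    that pushes each expert action's value above all other actions by
    `margin` (an illustrative way to encode the expert constraints)."""
    # OBR term: Q(s,a) - (r + gamma * max_a' Q(s',a')), squared and averaged.
    obr = 0.0
    for (s, a, r, s_next) in transitions:
        residual = Q[s, a] - (r + gamma * Q[s_next].max())
        obr += residual ** 2
    # Expert term: hinge loss whenever a non-expert action comes within
    # `margin` of the expert action's value.
    hinge = 0.0
    for (s, a_exp) in expert_pairs:
        for a in range(Q.shape[1]):
            if a != a_exp:
                hinge += max(0.0, Q[s, a] + margin - Q[s, a_exp])
    return obr / len(transitions) + lam * hinge / len(expert_pairs)
```

Minimizing this criterion directly is hard for the reasons the abstract names (the inner `max` makes it non-convex and non-differentiable), which is what motivates the RKHS embedding and boosting machinery.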
Deep Q-learning from Demonstrations
Deep reinforcement learning (RL) has achieved several high profile successes
in difficult decision-making problems. However, these algorithms typically
require a huge amount of data before they reach reasonable performance. In
fact, their performance during learning can be extremely poor. This may be
acceptable for a simulator, but it severely limits the applicability of deep RL
to many real-world tasks, where the agent must learn in the real environment.
In this paper we study a setting where the agent may access data from previous
control of the system. We present an algorithm, Deep Q-learning from
Demonstrations (DQfD), that leverages even relatively small amounts of
demonstration data to massively accelerate the learning process, and that
automatically assesses the necessary ratio of demonstration data while
learning, thanks to a prioritized replay mechanism.
DQfD works by combining temporal difference updates with supervised
classification of the demonstrator's actions. We show that DQfD has better
initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN)
as it starts with better scores on the first million steps on 41 of 42 games
and on average it takes PDD DQN 83 million steps to catch up to DQfD's
performance. DQfD learns to out-perform the best demonstration given in 14 of
42 games. In addition, DQfD leverages human demonstrations to achieve
state-of-the-art results for 11 games. Finally, we show that DQfD performs
better than three related algorithms for incorporating demonstration data into
DQN.
Comment: Published at AAAI 2018. Previously on arXiv as "Learning from
Demonstrations for Real World Reinforcement Learning".
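The combination described above, temporal-difference updates plus supervised classification of the demonstrator's actions, can be sketched for a single sample. The large-margin term below (`max_a [Q(s,a) + l(a_E, a)] - Q(s, a_E)`, zero margin on the expert action) follows the form commonly used in this line of work; the function name `dqfd_losses` and the single-sample, 1-step setting are simplifying assumptions, omitting DQfD's n-step returns and regularization terms.

```python
import numpy as np

def dqfd_losses(q_values, q_next, action, reward, is_demo,
                gamma=0.99, margin=0.8):
    """Single-sample sketch of DQfD's combined objective:
    a 1-step TD loss, plus a large-margin supervised loss
    applied only to demonstration samples."""
    # Temporal-difference term (1-step Q-learning target).
    td_target = reward + gamma * q_next.max()
    td_loss = (q_values[action] - td_target) ** 2
    # Supervised term: penalize any action whose value comes within
    # `margin` of the demonstrated action's value.
    if is_demo:
        penalties = np.full_like(q_values, margin)
        penalties[action] = 0.0  # l(a_E, a) = 0 when a == a_E
        sup_loss = (q_values + penalties).max() - q_values[action]
    else:
        sup_loss = 0.0
    return td_loss, sup_loss
```

The supervised term vanishes once the demonstrated action's value exceeds every alternative by the margin, so the classification pressure fades as the Q-function comes to agree with the demonstrator.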
Self-Imitation Advantage Learning
Self-imitation learning is a Reinforcement Learning (RL) method that
encourages actions whose returns were higher than expected, which helps in hard
exploration and sparse reward problems. It was shown to improve the performance
of on-policy actor-critic methods in several discrete control tasks.
Nevertheless, applying self-imitation to the mostly action-value based
off-policy RL methods is not straightforward. We propose SAIL, a novel
generalization of self-imitation learning for off-policy RL, based on a
modification of the Bellman optimality operator that we connect to Advantage
Learning. Crucially, our method mitigates the problem of stale returns by
choosing the most optimistic return estimate between the observed return and
the current action-value for self-imitation. We demonstrate the empirical
effectiveness of SAIL on the Arcade Learning Environment, with a focus on hard
exploration games.
Comment: AAMAS 202
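The key modification described above, taking the most optimistic of the observed return and the current action-value inside an Advantage Learning style backup, can be sketched as a single target computation. The exact placement of the optimistic term and the names `sail_target`, `v_s`, and `alpha` are assumptions for illustration, not a verbatim transcription of the SAIL operator.

```python
def sail_target(reward, v_next, v_s, q_sa, observed_return,
                gamma=0.99, alpha=0.5):
    """Sketch of a SAIL-style backup target for one transition:
    an Advantage Learning bonus whose action-value term is replaced by
    max(observed return, current Q(s,a)), so a stale (low) Monte Carlo
    return cannot drag the estimate below the current value."""
    bellman = reward + gamma * v_next            # standard 1-step target
    optimistic = max(observed_return, q_sa)      # key SAIL modification
    return bellman + alpha * (optimistic - v_s)  # AL-style advantage bonus
```

When the episode's return exceeds the current estimate, the bonus rewards repeating that action (self-imitation); when the stored return has gone stale, the current action-value takes over and the update degrades gracefully toward plain Advantage Learning.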