Off-Policy Deep Reinforcement Learning by Bootstrapping the Covariate Shift
In this paper we revisit the method of off-policy corrections for
reinforcement learning (COP-TD) pioneered by Hallak et al. (2017). Under this
method, online updates to the value function are reweighted to avoid divergence
issues typical of off-policy learning. While Hallak et al.'s solution is
appealing, it cannot easily be transferred to nonlinear function approximation.
First, it requires a projection step onto the probability simplex; second, even
though the operator describing the expected behavior of the off-policy learning
algorithm is convergent, it is not known to be a contraction mapping and
hence may be unstable in practice. We address these two issues by
introducing a discount factor into COP-TD. We analyze the behavior of
discounted COP-TD and find it better behaved from a theoretical perspective. We
also propose an alternative soft normalization penalty that can be minimized
online and obviates the need for an explicit projection step. We complement our
analysis with an empirical evaluation of the two techniques in an off-policy
setting on the game Pong from the Atari domain where we find discounted COP-TD
to be better behaved in practice than the soft normalization penalty. Finally,
we perform a more extensive evaluation of discounted COP-TD in 5 games of the
Atari domain, where we find performance gains for our approach.
Comment: AAAI 201
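The discounted COP-TD idea above can be sketched as a tabular update on the state-distribution ratio c(s) = d_pi(s)/d_mu(s): the bootstrap target mixes the importance-weighted ratio with the constant 1, which is what the discount factor buys. This is a minimal illustrative sketch, not the paper's deep-network implementation; the learning rate, array layout, and test policies are assumptions.

```python
import numpy as np

def discounted_cop_td_update(c, s, a, s_next, pi, mu, gamma_hat=0.99, lr=0.1):
    """One discounted COP-TD update for the covariate-shift ratio c(s).

    c      : 1-D array, current ratio estimate per state
    pi, mu : arrays of shape (n_states, n_actions) with target and
             behavior action probabilities
    The target (1 - gamma_hat) * 1 + gamma_hat * rho * c[s] anchors the
    ratio toward 1, making the expected operator a contraction for
    gamma_hat < 1 (illustrative tabular form).
    """
    rho = pi[s, a] / mu[s, a]                      # per-step importance ratio
    target = (1.0 - gamma_hat) + gamma_hat * rho * c[s]
    c[s_next] += lr * (target - c[s_next])         # TD-style move toward target
    return c
```

When pi equals mu the ratio is identically 1, and the update leaves c unchanged at its fixed point, which is a quick sanity check on the sketch.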
Trajectory-Based Off-Policy Deep Reinforcement Learning
Policy gradient methods are powerful reinforcement learning algorithms and
have been demonstrated to solve many complex tasks. However, these methods are
also data-inefficient, afflicted with high variance gradient estimates, and
frequently get stuck in local optima. This work addresses these weaknesses by
combining recent improvements in the reuse of off-policy data and exploration
in parameter space with deterministic behavioral policies. The resulting
objective is amenable to standard neural network optimization strategies like
stochastic gradient descent or stochastic gradient Hamiltonian Monte Carlo.
Incorporation of previous rollouts via importance sampling greatly improves
data-efficiency, whilst stochastic optimization schemes facilitate the escape
from local optima. We evaluate the proposed approach on a series of continuous
control benchmark tasks. The results show that the proposed algorithm is able
to successfully and reliably learn solutions using fewer system interactions
than standard policy gradient methods.
Comment: Includes appendix. Accepted for ICML 201
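The reuse of previous rollouts via importance sampling described above can be sketched with a Gaussian parameter-space exploration distribution: returns collected under parameter samples drawn around an old mean are reweighted toward a new candidate mean. This is a hedged sketch under assumed isotropic Gaussian exploration with fixed scale; the function name, `sigma`, and self-normalization are illustrative choices, not the paper's exact objective.

```python
import numpy as np

def off_policy_objective(theta, psi_samples, returns, theta_behav, sigma=0.1):
    """Importance-sampled surrogate of the expected return.

    psi_samples : (N, d) parameter vectors drawn from N(theta_behav, sigma^2 I)
    returns     : (N,) trajectory returns observed under those parameters
    Reweights old rollouts as if psi had been drawn from N(theta, sigma^2 I),
    so past data can be reused when optimizing theta with SGD or SGHMC.
    """
    log_p_new = -0.5 * np.sum((psi_samples - theta) ** 2, axis=1) / sigma**2
    log_p_old = -0.5 * np.sum((psi_samples - theta_behav) ** 2, axis=1) / sigma**2
    w = np.exp(log_p_new - log_p_old)
    w /= w.sum()                       # self-normalized importance weights
    return np.dot(w, returns)          # reweighted estimate of expected return
```

At theta = theta_behav the weights are uniform and the surrogate reduces to the plain Monte Carlo average of the stored returns, which is the expected on-policy limit.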