Generalized Emphatic Temporal Difference Learning: Bias-Variance Analysis
We consider the off-policy evaluation problem in Markov decision processes
with function approximation. We propose a generalization of the recently
introduced \emph{emphatic temporal differences} (ETD) algorithm
\citep{SuttonMW15}, which encompasses the original ETD($\lambda$), as well as
several other off-policy evaluation algorithms as special cases. We call this
framework ETD($\lambda$, $\beta$), where our introduced parameter $\beta$ controls the decay rate
of an importance-sampling term. We study conditions under which the projected
fixed-point equation underlying ETD($\lambda$, $\beta$) involves a contraction operator, allowing
us to present the first asymptotic error bounds (bias) for ETD($\lambda$, $\beta$). Our results
show that the original ETD algorithm always involves a contraction operator,
and its bias is bounded. Moreover, by controlling $\beta$, our proposed
generalization allows trading off bias for variance reduction, thereby
achieving a lower total error.
Comment: arXiv admin note: text overlap with arXiv:1508.0341
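To make the generalization concrete, the following is a minimal NumPy sketch of the update described above, assuming linear function approximation, a constant discount, and unit interest; the function and argument names are illustrative rather than the paper's code. The parameter beta decays the importance-sampling (followon) term, and setting beta = gamma recovers the original ETD(lambda) update.

```python
import numpy as np

def etd_lambda_beta(features, rewards, rhos, alpha=0.01, gamma=0.99,
                    lam=0.0, beta=0.99):
    """One-episode sketch of ETD(lambda, beta) with linear value estimation.

    features : (T+1, d) array of state feature vectors x_t
    rewards  : (T,) array of rewards R_{t+1}
    rhos     : (T,) array of importance-sampling ratios
               rho_t = pi(A_t | S_t) / mu(A_t | S_t)
    beta     : decay rate of the importance-sampling (followon) term;
               beta = gamma recovers the original ETD(lambda).
    """
    w = np.zeros(features.shape[1])   # value-function weights
    e = np.zeros(features.shape[1])   # eligibility trace
    F = 1.0                           # followon trace (unit interest assumed)
    for t in range(len(rewards)):
        x, x_next = features[t], features[t + 1]
        M = lam + (1.0 - lam) * F                 # emphasis
        e = rhos[t] * (gamma * lam * e + M * x)   # emphatic eligibility trace
        delta = rewards[t] + gamma * w @ x_next - w @ x
        w += alpha * delta * e
        F = beta * rhos[t] * F + 1.0              # beta-decayed followon trace
    return w
```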
An Emphatic Approach to the Problem of Off-policy Temporal-Difference Learning
In this paper we introduce the idea of improving the performance of
parametric temporal-difference (TD) learning algorithms by selectively
emphasizing or de-emphasizing their updates on different time steps. In
particular, we show that varying the emphasis of linear TD()'s updates
in a particular way causes its expected update to become stable under
off-policy training. The only prior model-free TD methods to achieve this with
per-step computation linear in the number of function approximation parameters
are the gradient-TD family of methods including TDC, GTD($\lambda$), and
GQ($\lambda$). Compared to these methods, our \emph{emphatic TD($\lambda$)} is
simpler and easier to use; it has only one learned parameter vector and one
step-size parameter. Our treatment includes general state-dependent discounting
and bootstrapping functions, and a way of specifying varying degrees of
interest in accurately valuing different states.
Comment: 29 pages. This is a significant revision based on the first set of
reviews. The most important change was to signal early that the main result
is about stability, not convergence.
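The following per-step sketch illustrates the kind of update the abstract describes, assuming linear function approximation; the class and argument names are illustrative, not taken from the paper. Note that only one weight vector is learned and only one step size is tuned, while the interest, discounting, and bootstrapping values are supplied per step.

```python
import numpy as np

class EmphaticTDLambda:
    """Per-step sketch of emphatic TD(lambda) with linear function
    approximation: a single learned weight vector and a single step size.
    interest, gamma, and lam may vary from step to step, mirroring the
    state-dependent discounting, bootstrapping, and interest treatment."""

    def __init__(self, dim, alpha=0.01):
        self.alpha = alpha
        self.w = np.zeros(dim)    # the only learned parameter vector
        self.e = np.zeros(dim)    # eligibility trace
        self.F = 0.0              # followon trace
        self.rho_prev = 1.0       # importance ratio from the previous step

    def step(self, x, reward, x_next, rho, interest, gamma, lam, gamma_next):
        # Followon trace: discounted, importance-weighted accumulation of interest.
        self.F = self.rho_prev * gamma * self.F + interest
        # Emphasis mixes immediate interest with the followon trace.
        M = lam * interest + (1.0 - lam) * self.F
        # Emphasis-weighted, importance-corrected eligibility trace.
        self.e = rho * (gamma * lam * self.e + M * x)
        # TD error with (possibly state-dependent) discounting.
        delta = reward + gamma_next * self.w @ x_next - self.w @ x
        self.w += self.alpha * delta * self.e
        self.rho_prev = rho
        return delta
```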
Off-Policy Deep Reinforcement Learning by Bootstrapping the Covariate Shift
In this paper we revisit the method of off-policy corrections for
reinforcement learning (COP-TD) pioneered by Hallak et al. (2017). Under this
method, online updates to the value function are reweighted to avoid divergence
issues typical of off-policy learning. While Hallak et al.'s solution is
appealing, it cannot easily be transferred to nonlinear function approximation.
First, it requires a projection step onto the probability simplex; second, even
though the operator describing the expected behavior of the off-policy learning
algorithm is convergent, it is not known to be a contraction mapping, and
hence, may be more unstable in practice. We address these two issues by
introducing a discount factor into COP-TD. We analyze the behavior of
discounted COP-TD and find it better behaved from a theoretical perspective. We
also propose an alternative soft normalization penalty that can be minimized
online and obviates the need for an explicit projection step. We complement our
analysis with an empirical evaluation of the two techniques in an off-policy
setting on the game Pong from the Atari domain where we find discounted COP-TD
to be better behaved in practice than the soft normalization penalty. Finally,
we perform a more extensive evaluation of discounted COP-TD in 5 games of the
Atari domain, where we find performance gains for our approach.
Comment: AAAI 201
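A minimal tabular sketch of the two ideas discussed above follows, assuming a learned ratio table c indexed by state; all names are illustrative, and the exact form used with nonlinear function approximation in the paper differs.

```python
import numpy as np

def discounted_cop_td_update(c, s, s_next, rho, alpha=0.05, gamma_hat=0.99):
    """Tabular sketch of a discounted COP-TD update for the covariate-shift
    ratio c(s) ~ d_pi(s) / d_mu(s). The extra discount gamma_hat is what
    makes the expected update better behaved; gamma_hat = 1 recovers the
    undiscounted COP-TD rule. Argument names are illustrative."""
    target = gamma_hat * rho * c[s] + (1.0 - gamma_hat)
    c[s_next] += alpha * (target - c[s_next])
    return c

def soft_normalization_penalty(c_batch):
    """Sketch of a soft normalization penalty: rather than projecting the
    ratios onto the probability simplex, penalize the deviation of their
    batch mean from 1 so the constraint can be enforced online."""
    return (np.mean(np.asarray(c_batch)) - 1.0) ** 2
```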