Deep Residual Reinforcement Learning
We revisit residual algorithms in both model-free and model-based
reinforcement learning settings. We propose the bidirectional target network
technique to stabilize residual algorithms, yielding a residual version of DDPG
that significantly outperforms vanilla DDPG in the DeepMind Control Suite
benchmark. Moreover, we find the residual algorithm an effective approach to
the distribution mismatch problem in model-based planning. Compared with the
existing TD(k) method, our residual-based method makes weaker assumptions
about the model and yields a greater performance boost.

Comment: AAMAS 2020
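To make the residual update concrete, here is a minimal sketch of a residual TD loss with the bidirectional-target-network idea. It is an illustration rather than the paper's exact DDPG variant: the function names, the mixing weight eta, and the exact placement of the frozen target copies are assumptions.

```python
# Sketch of a residual TD loss with bidirectional target networks
# (hypothetical names; simplified from the paper's DDPG setting).
import torch

def residual_td_loss(v, v_target, s, r, s_next, gamma=0.99, eta=0.5):
    # Forward term: gradient flows through V(s); the frozen target
    # network bootstraps V(s'), as in standard deep TD learning.
    delta_fwd = r + gamma * v_target(s_next).detach() - v(s)
    # Backward term: gradient flows through V(s'); the frozen target
    # network anchors V(s). Using target copies in both directions is
    # the "bidirectional" part of the technique.
    delta_bwd = r + gamma * v(s_next) - v_target(s).detach()
    # eta trades off the semi-gradient and residual-gradient directions.
    return delta_fwd.pow(2).mean() + eta * delta_bwd.pow(2).mean()
```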
Generalized Off-Policy Actor-Critic
We propose a new objective, the counterfactual objective, unifying existing
objectives for off-policy policy gradient algorithms in the continuing
reinforcement learning (RL) setting. Compared to the commonly used excursion
objective, which can be misleading about the performance of the target policy
when deployed, our new objective better predicts such performance. We prove the
Generalized Off-Policy Policy Gradient Theorem to compute the policy gradient
of the counterfactual objective and use an emphatic approach to get an unbiased
sample from this policy gradient, yielding the Generalized Off-Policy
Actor-Critic (Geoff-PAC) algorithm. We demonstrate the merits of Geoff-PAC over
existing algorithms in MuJoCo robot simulation tasks, the first empirical
success of emphatic algorithms in prevailing deep RL benchmarks.

Comment: NeurIPS 2019
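For context, the contrast between the two objectives can be sketched as below. The notation (d_mu for the behavior policy's stationary distribution, P_pi for the target policy's transition matrix, \bar r_\pi for the expected one-step reward) is assumed from standard off-policy policy-gradient presentations; consult the paper for the exact definitions.

```latex
% Excursion objective: evaluate the target policy \pi while weighting
% states by the behavior policy's stationary distribution d_\mu.
J_e(\pi) \doteq \sum_s d_\mu(s)\, v_\pi(s)

% Counterfactual objective: a parameter \hat\gamma \in [0, 1) interpolates
% the state weighting between d_\mu (\hat\gamma = 0) and the target
% policy's own stationary distribution d_\pi (\hat\gamma \to 1):
d_{\hat\gamma}^\top \doteq (1 - \hat\gamma)\, d_\mu^\top (I - \hat\gamma P_\pi)^{-1},
\qquad
J_{\hat\gamma}(\pi) \doteq \sum_s d_{\hat\gamma}(s)\, \bar r_\pi(s),
% where \bar r_\pi(s) = \sum_a \pi(a|s)\, r(s, a).
```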
Two Time-Scale Stochastic Approximation with Controlled Markov Noise and Off-Policy Temporal Difference Learning
We present for the first time an asymptotic convergence analysis of two
time-scale stochastic approximation driven by "controlled" Markov noise. In
particular, both the faster and slower recursions have non-additive controlled
Markov noise components in addition to martingale difference noise. We analyze
the asymptotic behavior of our framework by relating it to limiting
differential inclusions in both time-scales that are defined in terms of the
ergodic occupation measures associated with the controlled Markov processes.
Finally, we present a solution to the off-policy convergence problem for
temporal difference learning with linear function approximation, using our
results.

Comment: 23 pages (relaxed some important assumptions from the previous version), accepted in Mathematics of Operations Research in Feb, 201
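The framework analyzed here can be written as the coupled recursions below; this is the standard two-time-scale template (symbols assumed, not the paper's exact notation), matching the abstract's setting in which both recursions carry non-additive controlled Markov noise alongside martingale difference noise.

```latex
% Coupled recursions: Z^{(i)}_n are controlled Markov processes and
% M^{(i)}_{n+1} are martingale difference noise sequences.
x_{n+1} = x_n + a(n)\,\big[ h\big(x_n, y_n, Z^{(1)}_n\big) + M^{(1)}_{n+1} \big]
y_{n+1} = y_n + b(n)\,\big[ g\big(x_n, y_n, Z^{(2)}_n\big) + M^{(2)}_{n+1} \big]

% Step sizes: \sum_n a(n) = \sum_n b(n) = \infty,
% \sum_n \big( a(n)^2 + b(n)^2 \big) < \infty, and b(n)/a(n) \to 0,
% so the x-recursion evolves on the faster time-scale and the
% y-recursion on the slower one.
```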