Breaking the Deadly Triad with a Target Network
The deadly triad refers to the instability of a reinforcement learning
algorithm when it employs off-policy learning, function approximation, and
bootstrapping simultaneously. In this paper, we investigate the target network
as a tool for breaking the deadly triad, providing theoretical support for the
conventional wisdom that a target network stabilizes training. We first propose
and analyze a novel target network update rule which augments the commonly used
Polyak-averaging style update with two projections. We then apply the target
network and ridge regularization in several divergent algorithms and show their
convergence to regularized TD fixed points. Those algorithms are off-policy
with linear function approximation and bootstrapping, spanning both policy
evaluation and control, as well as both discounted and average-reward settings.
In particular, we provide the first convergent linear $Q$-learning algorithms
under nonrestrictive and changing behavior policies without bi-level
optimization.
Comment: ICML 2021
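To make the flavor of such an update concrete, here is a minimal sketch of a projected Polyak-averaging target network update in Python. The Euclidean-ball projection, the radius, and the step size kappa are illustrative assumptions, not the paper's exact construction; the paper's rule augments the averaging with two projections whose precise definitions are given there.

import numpy as np

def project_l2_ball(w, radius):
    # Euclidean projection onto the ball {w : ||w||_2 <= radius}.
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def target_network_update(theta_target, w_online, kappa=0.01, radius=100.0):
    # Polyak-averaging style update: move the target weights a small
    # step kappa toward the (projected) online weights, then project
    # the result back onto the same ball to keep iterates bounded.
    w_proj = project_l2_ball(w_online, radius)
    mixed = (1.0 - kappa) * theta_target + kappa * w_proj
    return project_l2_ball(mixed, radius)

Calling target_network_update after each online step makes the target weights track the online weights slowly, while the projections keep both iterates bounded, which is the kind of stabilizing effect the analysis formalizes.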
Sample Complexity of Policy Gradient Finding Second-Order Stationary Points
The goal of policy-based reinforcement learning (RL) is to search for the maximal
point of its objective. However, due to the inherent non-concavity of the
objective, convergence to a first-order stationary point (FOSP) cannot
guarantee that policy gradient methods find a maximal point. A FOSP can be a
minimal or even a saddle point, which is undesirable for RL. Fortunately, if
all the saddle points are \emph{strict}, the second-order stationary points
(SOSP) are exactly the local maxima. Instead of FOSP, we take
SOSP as the convergence criterion to characterize the sample complexity of policy
gradient. Our result shows that policy gradient converges to an
$(\epsilon, \sqrt{\epsilon\chi})$-SOSP with probability at least $1-\widetilde{\mathcal{O}}(\delta)$
after a total cost of
$\mathcal{O}\left(\frac{\epsilon^{-9/2}}{(1-\gamma)\sqrt{\chi}}\log\frac{1}{\delta}\right)$,
where $\gamma \in (0,1)$. This significantly improves on the state-of-the-art result,
which requires
$\mathcal{O}\left(\epsilon^{-9}\chi^{3/2}\delta^{-1}\log\frac{1}{\epsilon}\right)$.
Our analysis is based on the key idea of decomposing the parameter space
into three non-intersecting regions: the non-stationary-point region, the
saddle-point region, and the locally optimal region, and then making a local
improvement of the RL objective in each region. This technique can potentially
be generalized to a broad class of policy gradient methods.
Comment: This submission has been accepted by AAAI 2021
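For reference, a standard way to formalize the SOSP criterion above for a maximization objective $J$; the pairing of the tolerances as $(\epsilon, \sqrt{\epsilon\chi})$ follows the abstract, while the paper's exact constants may differ:

\[
\|\nabla J(\theta)\| \le \epsilon,
\qquad
\lambda_{\max}\!\left(\nabla^2 J(\theta)\right) \le \sqrt{\epsilon\chi}.
\]

Intuitively, the gradient is nearly zero and the Hessian is nearly negative semidefinite, so under the strict-saddle assumption such a point is approximately a local maximum rather than a saddle point.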