4 research outputs found

    Breaking the Deadly Triad with a Target Network

    The deadly triad refers to the instability of a reinforcement learning algorithm when it employs off-policy learning, function approximation, and bootstrapping simultaneously. In this paper, we investigate the target network as a tool for breaking the deadly triad, providing theoretical support for the conventional wisdom that a target network stabilizes training. We first propose and analyze a novel target network update rule which augments the commonly used Polyak-averaging style update with two projections. We then apply the target network and ridge regularization in several divergent algorithms and show their convergence to regularized TD fixed points. Those algorithms are off-policy with linear function approximation and bootstrapping, spanning both policy evaluation and control, as well as both discounted and average-reward settings. In particular, we provide the first convergent linear $Q$-learning algorithms under nonrestrictive and changing behavior policies without bi-level optimization. Comment: ICML 202
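    The sketch below is a minimal illustration, not the paper's exact construction: it pairs a ridge-regularized linear TD(0) step that bootstraps from target weights with a Polyak-averaging target update followed by a single norm-ball projection, which stands in for the two projections described above. All function names, constants, and the choice of projection are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact rule): Polyak-averaged target
# weights with a norm-ball projection, used by a ridge-regularized linear TD(0) step.
import numpy as np

def project_ball(w, radius):
    """Project w onto the L2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

def td_step_with_target(w, w_target, phi, phi_next, reward,
                        gamma=0.99, lr=0.1, tau=0.01, eta=1e-3, radius=10.0):
    """One off-policy linear TD(0) update that bootstraps from the target weights.

    w        : online weight vector
    w_target : target weight vector (slowly tracks w)
    phi, phi_next : feature vectors of the current and next state
    eta      : ridge-regularization coefficient (illustrative value)
    """
    # TD target computed from the *target* weights (bootstrapping).
    td_target = reward + gamma * phi_next @ w_target
    td_error = td_target - phi @ w
    # Ridge-regularized semi-gradient step on the online weights.
    w = w + lr * (td_error * phi - eta * w)
    # Polyak-averaging update of the target weights, followed by a projection.
    w_target = project_ball((1 - tau) * w_target + tau * w, radius)
    return w, w_target

# Tiny usage example with random features and rewards.
rng = np.random.default_rng(0)
w, w_target = np.zeros(4), np.zeros(4)
for _ in range(1000):
    phi, phi_next = rng.normal(size=4), rng.normal(size=4)
    w, w_target = td_step_with_target(w, w_target, phi, phi_next, reward=rng.normal())
```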

    Sample Complexity of Policy Gradient Finding Second-Order Stationary Points

    The goal of policy-based reinforcement learning (RL) is to find a maximum of its objective. However, due to the inherent non-concavity of this objective, convergence to a first-order stationary point (FOSP) does not guarantee that policy gradient methods find a maximum: a FOSP can be a minimum or even a saddle point, which is undesirable for RL. Fortunately, if all saddle points are \emph{strict}, the second-order stationary points (SOSPs) are exactly the local maxima. We therefore adopt SOSPs, rather than FOSPs, as the convergence criterion for characterizing the sample complexity of policy gradient. Our result shows that policy gradient converges to an $(\epsilon,\sqrt{\epsilon\chi})$-SOSP with probability at least $1-\widetilde{\mathcal{O}}(\delta)$ after a total cost of $\mathcal{O}\left(\dfrac{\epsilon^{-\frac{9}{2}}}{(1-\gamma)\sqrt{\chi}}\log\dfrac{1}{\delta}\right)$, where $\gamma\in(0,1)$. This significantly improves on the state-of-the-art result, which requires $\mathcal{O}\left(\dfrac{\epsilon^{-9}\chi^{\frac{3}{2}}}{\delta}\log\dfrac{1}{\epsilon\chi}\right)$. Our analysis is based on the key idea of decomposing the parameter space $\mathbb{R}^p$ into three disjoint regions (non-stationary points, saddle points, and locally optimal points) and then making a local improvement of the RL objective in each region. This technique can potentially be generalized to a wide range of policy gradient methods. Comment: This submission has been accepted by AAAI202
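    As a rough illustration of the convergence criterion, the sketch below numerically checks the approximate SOSP condition (gradient norm at most $\epsilon$ and smallest Hessian eigenvalue at least $-\sqrt{\epsilon\chi}$) on a toy strict saddle. The `is_sosp` helper, the toy objective, and the constants are placeholders, not the paper's policy-gradient objective or analysis.

```python
# Illustrative only: check the (epsilon, sqrt(epsilon*chi))-SOSP condition for a
# smooth objective, i.e. nearly zero gradient and a Hessian whose smallest
# eigenvalue is not too negative. Names and constants are assumptions.
import numpy as np

def is_sosp(grad, hessian, epsilon, chi):
    """True if ||grad|| <= epsilon and lambda_min(hessian) >= -sqrt(epsilon * chi)."""
    grad_small = np.linalg.norm(grad) <= epsilon
    lam_min = np.linalg.eigvalsh(hessian).min()
    return grad_small and lam_min >= -np.sqrt(epsilon * chi)

# Example: the origin is a strict saddle of J(x, y) = x^2 - y^2, so it is a
# first-order stationary point but fails the SOSP test.
grad = np.zeros(2)
hessian = np.diag([2.0, -2.0])
print(is_sosp(grad, hessian, epsilon=1e-2, chi=1.0))  # False: strict saddle, not an SOSP
```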