Sample Complexity of Policy Gradient Finding Second-Order Stationary Points
The goal of policy-based reinforcement learning (RL) is to find a maximal
point of its objective. However, due to the inherent non-concavity of that
objective, convergence to a first-order stationary point (FOSP) cannot
guarantee that policy gradient methods find a maximal point: a FOSP can be a
minimal or even a saddle point, which is undesirable for RL. Fortunately, if
all the saddle points are \emph{strict}, the second-order stationary points
(SOSP) are exactly the local maxima. Instead of the FOSP, we therefore take
the SOSP as the convergence criterion to characterize the sample complexity of policy
gradient. Our result shows that policy gradient converges to an
$(\epsilon,\sqrt{\epsilon\chi})$-SOSP with probability at least
$1-\widetilde{\mathcal{O}}(\delta)$ after a total cost of
$\mathcal{O}\left(\frac{\epsilon^{-9/2}}{(1-\gamma)\sqrt{\chi}}\log\frac{1}{\delta}\right)$,
where $\gamma\in(0,1)$ is the discount factor. This improves significantly on the
state-of-the-art result, which requires
$\mathcal{O}\left(\frac{\epsilon^{-9}\chi^{3/2}}{\delta}\log\frac{1}{\epsilon\chi}\right)$.
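For reference, the standard notion of an approximate SOSP for a maximization
objective $J(\theta)$ reads as follows (a sketch under common smoothness
assumptions; the exact pairing of tolerances with $\sqrt{\epsilon\chi}$ above
should be treated as illustrative):
\[
  \|\nabla_{\theta} J(\theta)\| \le \epsilon
  \qquad\text{and}\qquad
  \lambda_{\max}\!\big(\nabla^{2}_{\theta} J(\theta)\big) \le \sqrt{\epsilon\chi},
\]
whereas a FOSP needs only the gradient condition. Under the strict-saddle
assumption, every saddle point has positive curvature bounded away from zero
in some direction, so for small enough $\epsilon$ an SOSP cannot be a saddle
point and must be an approximate local maximum.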
Our analysis is based on the key idea of decomposing the parameter space
into three non-intersecting regions: the non-stationary region, the
saddle-point region, and the locally optimal region, and then making a local
improvement of the RL objective in each region; a sketch of this decomposition
is given below. This technique can potentially be generalized to a broad class
of policy gradient methods.
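To make the three-region idea concrete, the following is a minimal,
hypothetical Python sketch (not the paper's algorithm; grad_est,
hess_top_eig, and all thresholds are illustrative assumptions): the current
parameter is classified by its stochastic gradient norm and top Hessian
eigenvalue, and each region triggers the corresponding local improvement.

import numpy as np

def classify_region(grad_norm, lam_max, eps, curv_thresh):
    # Three non-intersecting regions of the parameter space (maximization):
    # large gradient -> non-stationary; large positive curvature -> strict
    # saddle; otherwise -> approximate local optimum, i.e. an
    # (eps, curv_thresh)-SOSP.
    if grad_norm > eps:
        return "non-stationary"
    if lam_max > curv_thresh:
        return "saddle"
    return "local-optimum"

def policy_gradient_sosp(theta, grad_est, hess_top_eig, step, eps, chi,
                         max_iters=10000):
    # grad_est(theta): stochastic estimate of the policy gradient of J (assumed).
    # hess_top_eig(theta): estimate (lam_max, v) of the top Hessian eigenpair (assumed).
    curv_thresh = np.sqrt(eps * chi)
    for _ in range(max_iters):
        g = grad_est(theta)
        lam_max, v = hess_top_eig(theta)
        region = classify_region(np.linalg.norm(g), lam_max, eps, curv_thresh)
        if region == "non-stationary":
            theta = theta + step * g          # ascent step improves J locally
        elif region == "saddle":
            direction = v if v @ g >= 0 else -v
            theta = theta + step * direction  # escape along the positive-curvature direction
        else:
            return theta                      # approximate SOSP reached: stop
    return theta

Each branch realizes a local improvement of the objective in its region, which
is the mechanism behind the sample-complexity bound stated above.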
Comment: This submission has been accepted by AAAI 2021.