On the Global Convergence Rates of Softmax Policy Gradient Methods
We make three contributions toward better understanding policy gradient
methods in the tabular setting. First, we show that with the true gradient,
policy gradient with a softmax parametrization converges at a $O(1/t)$ rate,
with constants depending on the problem and initialization. This result
significantly expands the recent asymptotic convergence results. The analysis
relies on two findings: that the softmax policy gradient satisfies a
Łojasiewicz inequality, and that the minimum probability of an optimal action
during optimization can be bounded in terms of its initial value. Second, we
analyze entropy regularized policy gradient and show that it enjoys a
significantly faster linear convergence rate $O(e^{-c \cdot t})$ toward the
softmax optimal policy. This result resolves an open question in the recent
literature.
Finally, combining the above two results and additional new $\Omega(1/t)$
lower bound results, we explain how entropy regularization improves policy
optimization, even with the true gradient, from the perspective of convergence
rate. The separation of rates is further explained using the notion of
non-uniform Łojasiewicz degree. These results provide a theoretical
understanding of the impact of entropy and corroborate existing empirical
studies.

Comment: 64 pages, 5 figures. Published in ICML 2020.
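For context, the key finding behind the first result is a non-uniform Łojasiewicz inequality for the softmax policy gradient. In the single-state (bandit) special case it takes roughly the following form, where $r$ is the reward vector, $\pi_\theta$ the softmax policy, $a^*$ an optimal action, and $\pi^*$ an optimal policy; this is a paraphrase, and the exact constants and general-MDP statement are in the paper:

\[
\left\| \frac{d\, \pi_\theta^\top r}{d\theta} \right\|_2
\;\ge\; \pi_\theta(a^*) \cdot (\pi^* - \pi_\theta)^\top r .
\]

The gradient norm is lower-bounded by the suboptimality $(\pi^* - \pi_\theta)^\top r$ scaled by $\pi_\theta(a^*)$, which is why bounding the minimum probability of an optimal action during optimization (the second finding) yields the $O(1/t)$ rate.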
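To make the comparison concrete, below is a minimal sketch (not code from the paper) of exact-gradient softmax policy gradient on a small bandit problem, with and without entropy regularization. The reward vector, step size eta, and temperature tau are arbitrary illustrative choices; vanilla updates approach the greedy optimal policy at the slower $O(1/t)$ rate, while the regularized updates converge linearly, though toward the softmax optimal policy $\mathrm{softmax}(r/\tau)$ rather than the greedy one.

    import numpy as np

    def softmax(theta):
        z = theta - theta.max()              # shift for numerical stability
        p = np.exp(z)
        return p / p.sum()

    def pg_step(theta, r, eta, tau):
        """One exact gradient ascent step on pi^T r + tau * H(pi)."""
        pi = softmax(theta)
        r_tilde = r - tau * np.log(pi)       # entropy-adjusted reward
        # True gradient under the softmax parametrization:
        #   d/dtheta_a = pi_a * (r_tilde_a - pi^T r_tilde)
        # (the constant term from dH/dtheta cancels on the simplex).
        return theta + eta * pi * (r_tilde - pi @ r_tilde)

    r = np.array([1.0, 0.8, 0.1])            # hypothetical bandit rewards
    for tau in (0.0, 0.1):                   # vanilla vs. entropy-regularized
        theta = np.zeros_like(r)
        # Comparison target: greedy optimal policy for tau = 0,
        # softmax optimal policy softmax(r / tau) for tau > 0.
        pi_star = np.eye(len(r))[r.argmax()] if tau == 0 else softmax(r / tau)
        for t in range(5000):
            theta = pg_step(theta, r, eta=0.4, tau=tau)
        gap = np.abs(softmax(theta) - pi_star).sum()
        print(f"tau={tau}: ||pi - pi*||_1 = {gap:.2e}")

Running the loop for both values of tau illustrates the rate separation the abstract describes: the regularized iterates close the distance to their (softmax optimal) target much faster than the unregularized iterates close theirs.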