Global Convergence of Policy Gradient Methods for the Linear Quadratic Regulator
Direct policy gradient methods for reinforcement learning and continuous
control problems are a popular approach for a variety of reasons: 1) they are
easy to implement without explicit knowledge of the underlying model; 2) they
are an "end-to-end" approach, directly optimizing the performance metric of
interest; and 3) they inherently allow for richly parameterized policies. A notable
drawback is that even in the most basic continuous control problem (that of
linear quadratic regulators), these methods must solve a non-convex
optimization problem, where little is understood about their efficiency from
both computational and statistical perspectives. In contrast, system
identification and model-based planning in optimal control theory have a much
more solid theoretical footing, where much is known with regard to their
computational and statistical properties.
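As a concrete point of reference for this contrast, here is a minimal sketch
of the model-based pipeline: estimate (A, B) by least squares from randomly
excited trajectories, then plan by solving the discrete-time algebraic
Riccati equation. The system matrices, noise level, and sample count below
are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# Hypothetical 2-state, 1-input system (illustrative; unknown to the learner).
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

# 1) System identification: excite with random inputs, record transitions.
T = 500
xs, us, xs_next = [], [], []
x = rng.standard_normal(2)
for _ in range(T):
    u = rng.standard_normal(1)
    x_next = A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)  # small process noise
    xs.append(x); us.append(u); xs_next.append(x_next)
    x = x_next
Z = np.hstack([np.array(xs), np.array(us)])            # regressors [x_t, u_t], shape (T, 3)
Theta, *_ = np.linalg.lstsq(Z, np.array(xs_next), rcond=None)
A_hat, B_hat = Theta.T[:, :2], Theta.T[:, 2:]          # least-squares estimates of (A, B)

# 2) Model-based planning: solve the Riccati equation for the estimated model.
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)  # policy u = -K x
print("estimated optimal gain K:", K)
```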
This work bridges this gap, showing that (model-free) policy gradient methods
globally converge to the optimal solution and are efficient (polynomially so
in relevant problem-dependent quantities) with regard to their sample and
computational complexities.
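To make the subject of the result concrete, below is a minimal sketch of a
model-free policy gradient method for LQR: a zeroth-order update over linear
gains K that estimates gradients purely from rollout costs, never touching
(A, B) in the update. The problem instance, horizon, smoothing radius, and
step size are illustrative, untuned assumptions; the paper's actual algorithm
and analysis (e.g., its smoothing and stability conditions) are more careful
than this toy loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# The same hypothetical LQR instance (illustrative values); the learner only
# queries rollout costs, never the matrices A and B directly.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

def cost(K, x0s, horizon=50):
    """Finite-horizon LQR cost of the linear policy u = -K x, averaged over x0s."""
    total = 0.0
    for x0 in x0s:
        x = x0.copy()
        for _ in range(horizon):
            u = -K @ x
            total += float(x @ Q @ x + u @ R @ u)
            x = A @ x + B @ u
    return total / len(x0s)

# Zeroth-order policy gradient: perturb K along a random direction and
# difference the rollout costs; no model of (A, B) enters the update.
K = np.zeros((1, 2))      # zero gain is an adequate start for these finite-horizon rollouts
sigma, lr = 0.05, 1e-4    # smoothing radius and step size (illustrative)
for step in range(301):
    x0s = rng.standard_normal((20, 2))   # common initial states for both evaluations
    U = rng.standard_normal(K.shape)
    g = (cost(K + sigma * U, x0s) - cost(K - sigma * U, x0s)) / (2 * sigma) * U
    K -= lr * g
    if step % 100 == 0:
        print(f"step {step:3d}  cost {cost(K, rng.standard_normal((20, 2))):8.2f}")
```

The two-point estimate (cost(K + sigma*U) - cost(K - sigma*U)) / (2*sigma) * U
approximates the projection of the cost gradient onto the random direction U;
evaluating both perturbations on the same batch of initial states keeps the
estimator's variance manageable.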