We investigate the problem of learning an ϵ-approximate solution to
the discrete-time Linear Quadratic Regulator (LQR) problem via a Stochastic
Variance-Reduced Policy Gradient (SVRPG) approach. Whilst policy gradient
methods have been shown to converge linearly to the optimal solution of the
model-free LQR problem, the two-point cost queries they require for gradient
estimation may be impractical, particularly in applications where evaluating
the cost at two distinct control input configurations is exceptionally
costly. To this end, we propose an
oracle-efficient approach. Our method combines both one-point and two-point
estimations in a dual-loop variance-reduced algorithm. It achieves an
ϵ-approximate solution with only O(log(1/ϵ)^β) two-point cost queries,
where β ∈ (0,1).
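
To make the dual-loop structure concrete, here is a minimal sketch in Python, assuming numpy, a simple Monte-Carlo cost oracle, and zeroth-order (smoothed) gradient estimators; the LQR instance (A, B, Q, R), the smoothing radius r, the step size eta, and all function names are illustrative assumptions, not the paper's algorithm. The outer loop computes an expensive, low-variance two-point anchor gradient; the inner loop takes cheap steps corrected by paired one-point estimates.

```python
import numpy as np

# Illustrative LQR instance -- A, B, Q, R are hypothetical, not from the paper.
rng = np.random.default_rng(0)
n, m = 4, 2
A = 0.8 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Q, R = np.eye(n), np.eye(m)

def lqr_cost(K, horizon=50, n_rollouts=5):
    """Monte-Carlo cost oracle: average finite-horizon cost under u = -K x."""
    total = 0.0
    for _ in range(n_rollouts):
        x = rng.standard_normal(n)
        for _ in range(horizon):
            u = -K @ x
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
    return total / n_rollouts

def sample_direction(shape):
    """Random perturbation of unit Frobenius norm for gradient smoothing."""
    U = rng.standard_normal(shape)
    return U / np.linalg.norm(U)

def two_point_grad(K, r=0.05):
    """Two-point zeroth-order estimate: two cost queries per sample."""
    U = sample_direction(K.shape)
    return K.size * (lqr_cost(K + r * U) - lqr_cost(K - r * U)) / (2 * r) * U

def one_point_grad(K, U, r=0.05):
    """One-point zeroth-order estimate: a single cost query, higher variance."""
    return K.size * lqr_cost(K + r * U) / r * U

def svrpg_lqr(K0, outer_iters=10, inner_iters=20, eta=1e-5):
    """Dual-loop variance reduction: one expensive two-point anchor gradient
    per outer iteration, corrected by one-point estimates in the inner loop."""
    K = K0.copy()
    for _ in range(outer_iters):
        K_anchor = K.copy()
        g_anchor = two_point_grad(K_anchor)  # the only two-point queries
        for _ in range(inner_iters):
            # a common U couples the two one-point queries, so their noise
            # largely cancels when K is close to the anchor
            U = sample_direction(K.shape)
            v = one_point_grad(K, U) - one_point_grad(K_anchor, U) + g_anchor
            K = K - eta * v                  # SVRG-style corrected step
    return K

K_final = svrpg_lqr(np.zeros((m, n)))
```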