Reinforcement learning provides a mathematical framework for learning-based
control, and its success largely depends on the amount of data it can utilize.
The efficient utilization of historical trajectories obtained from previous
policies is essential for expediting policy optimization. Empirical evidence
has shown that policy gradient methods based on importance sampling work well.
However, the existing literature often neglects the interdependence between
trajectories from different iterations, and the good empirical performance
lacks a rigorous theoretical justification. In this paper, we study a variant
of the natural policy gradient method that reuses historical trajectories via
importance sampling. We show that the bias of the proposed gradient estimator
is asymptotically negligible, the resulting algorithm is convergent,
and reusing past trajectories helps improve the convergence rate. We further
apply the proposed estimator to popular policy optimization algorithms such as
trust region policy optimization. Our theoretical results are verified on
classical benchmarks.
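
For context, a standard way to reuse trajectories collected under an earlier policy $\pi_{\theta'}$ when estimating the gradient of $J(\theta)=\mathbb{E}_{\tau\sim p_{\theta}}[R(\tau)]$ is the importance-sampling (likelihood-ratio) estimator sketched below; this is a generic illustration of the reweighting idea, not the exact estimator analyzed in the paper:
\[
\widehat{\nabla_\theta J}(\theta)
= \frac{1}{N}\sum_{i=1}^{N}
\frac{p_{\theta}(\tau_i)}{p_{\theta'}(\tau_i)}\, R(\tau_i)\,
\nabla_\theta \log p_{\theta}(\tau_i),
\qquad
\frac{p_{\theta}(\tau)}{p_{\theta'}(\tau)}
= \prod_{t=0}^{T-1} \frac{\pi_{\theta}(a_t \mid s_t)}{\pi_{\theta'}(a_t \mid s_t)},
\]
where the transition dynamics cancel in the likelihood ratio, so the importance weight depends only on the two policies. The dependence of the historical trajectories $\tau_i$ on earlier iterates is precisely the interdependence that makes the bias and convergence analysis nontrivial.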