We analyze the convergence rate of the unregularized natural policy gradient
algorithm with log-linear policy parametrizations in infinite-horizon
discounted Markov decision processes. In the deterministic case, when the
Q-value is known and can be approximated by a linear combination of a known
feature function up to a bias error, we show that a geometrically-increasing
step size yields a linear convergence rate towards an optimal policy. We then
consider the sample-based case, when the best representation of the Q-value
function among linear combinations of a known feature function is known up to
an estimation error. In this setting, we show that the algorithm enjoys the
same linear guarantees as in the deterministic case up to an error term that
depends on the estimation error, the bias error, and the condition number of
the feature covariance matrix. Our results build upon the general framework of
policy mirror descent and extend previous findings for the softmax tabular
parametrization to the log-linear policy class.
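
For concreteness, the following is a minimal sketch of the scheme described above, in standard notation that may differ from the paper's; the feature map $\phi$, the on-policy distribution $d^{(t)}$, and the step-size schedule are illustrative assumptions rather than the paper's exact statement. The policy class is
\[
  \pi_\theta(a \mid s)
  \;=\;
  \frac{\exp\bigl(\theta^\top \phi(s,a)\bigr)}{\sum_{a'} \exp\bigl(\theta^\top \phi(s,a')\bigr)},
\]
and one natural policy gradient step fits the current Q-function by linear regression on the features and then moves the parameters along the fitted weights,
\[
  w^{(t)} \in \arg\min_{w}\;
  \mathbb{E}_{(s,a)\sim d^{(t)}}\!\left[\bigl(Q^{\pi^{(t)}}(s,a) - w^\top \phi(s,a)\bigr)^{2}\right],
  \qquad
  \theta^{(t+1)} = \theta^{(t)} + \eta_t\, w^{(t)},
\]
with a geometrically increasing step size, e.g. $\eta_{t+1} = \eta_t/\gamma$ for discount factor $\gamma \in (0,1)$. In this notation, the sample-based guarantee stated above has the schematic shape
\[
  V^{\ast}(\rho) - V^{\pi^{(T)}}(\rho)
  \;\lesssim\;
  C_1\, \gamma^{T} \;+\; C_2\,\bigl(\epsilon_{\mathrm{bias}} + \epsilon_{\mathrm{stat}}\bigr),
\]
where $\epsilon_{\mathrm{bias}}$ and $\epsilon_{\mathrm{stat}}$ denote the bias and estimation errors, and $C_1, C_2$ are constants that may depend on distribution-mismatch quantities and on the condition number of the feature covariance matrix; this displays only the dependence claimed above, not the paper's precise bound.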