In this paper we consider the problem of computing an $\epsilon$-optimal
policy of a discounted Markov Decision Process (DMDP), provided we can only
access its transition function through a generative sampling model that,
given any state-action pair, samples from the transition function in $O(1)$ time.
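To make the access model concrete, the following is a minimal Python sketch of such a generative sampling oracle; the class name, data layout, and `sample` method are our own illustrative assumptions, not an interface from the paper.

```python
import random

class GenerativeModel:
    """Hypothetical generative-model oracle: the solver never sees the
    transition probabilities directly; it may only draw samples."""

    def __init__(self, transition, reward):
        # transition[(s, a)] is a list of (next_state, probability) pairs;
        # reward[(s, a)] is a scalar in [0, 1]. Both layouts are illustrative.
        self.transition = transition
        self.reward = reward

    def sample(self, s, a):
        # A single draw from the transition distribution of (s, a);
        # the paper models each such draw as an O(1) operation.
        next_states, probs = zip(*self.transition[(s, a)])
        s_next = random.choices(next_states, weights=probs, k=1)[0]
        return s_next, self.reward[(s, a)]

# Usage: a two-state chain where action 0 keeps state 0 with probability 0.9.
model = GenerativeModel(
    transition={(0, 0): [(0, 0.9), (1, 0.1)], (1, 0): [(1, 1.0)]},
    reward={(0, 0): 1.0, (1, 0): 0.0},
)
print(model.sample(0, 0))
```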
Given such a DMDP with states $S$, actions $A$, discount factor
$\gamma\in(0,1)$, and rewards in the range $[0,1]$, we provide an algorithm which
computes an $\epsilon$-optimal policy with probability $1-\delta$, where
\emph{both} the time spent and the number of samples taken are upper bounded by
$O\!\left[\frac{|S||A|}{(1-\gamma)^{3}\epsilon^{2}}\log\left(\frac{|S||A|}{(1-\gamma)\delta\epsilon}\right)\log\left(\frac{1}{(1-\gamma)\epsilon}\right)\right]$.
For fixed values of $\epsilon$, this improves upon the previous best known bounds by a factor of
$(1-\gamma)^{-1}$ and matches the sample complexity lower bounds proved in
Azar et al. (2013) up to logarithmic factors.
We also extend our method to
computing $\epsilon$-optimal policies for finite-horizon MDPs with a generative
model and provide a nearly matching sample complexity lower bound.

Comment: 31 pages. Accepted to NeurIPS, 201