Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction
Asynchronous Q-learning aims to learn the optimal action-value function (or
Q-function) of a Markov decision process (MDP), based on a single trajectory of
Markovian samples induced by a behavior policy. Focusing on a
$\gamma$-discounted MDP with state space $\mathcal{S}$ and action space
$\mathcal{A}$, we demonstrate that the $\ell_{\infty}$-based sample complexity
of classical asynchronous Q-learning -- namely, the number of samples needed to
yield an entrywise $\varepsilon$-accurate estimate of the Q-function -- is at
most on the order of \begin{equation*}
\frac{1}{\mu_{\mathsf{min}}(1-\gamma)^5\varepsilon^2}+
\frac{t_{\mathsf{mix}}}{\mu_{\mathsf{min}}(1-\gamma)} \end{equation*} up to
some logarithmic factor, provided that a proper constant learning rate is
adopted. Here, $t_{\mathsf{mix}}$ and $\mu_{\mathsf{min}}$ denote respectively
the mixing time and the minimum state-action occupancy probability of the
sample trajectory. The first term of this bound matches the sample complexity in the
case with independent samples drawn from the stationary distribution of the
trajectory. The second term reflects the cost for the empirical
distribution of the Markovian trajectory to reach a steady state, a burn-in
cost incurred at the very beginning and amortized as the algorithm runs.
Encouragingly, the above bound improves upon the state-of-the-art result by a
factor of at least $|\mathcal{S}||\mathcal{A}|$. Further, the scaling on the
discount complexity $\frac{1}{1-\gamma}$ can be improved by means of variance
reduction.

Comment: accepted in part to Neural Information Processing Systems (NeurIPS) 2020
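
As an illustration only (this code is not from the paper), below is a minimal tabular sketch of the asynchronous Q-learning iteration analyzed above: a single Markovian trajectory is generated by a behavior policy, and at each step only the visited state-action entry is updated with a constant learning rate $\eta$. The arrays P, R, and behavior_policy are hypothetical inputs.

import numpy as np

def async_q_learning(P, R, behavior_policy, gamma=0.9, eta=0.1,
                     num_steps=100_000, seed=0):
    # Hypothetical inputs: P is an (S, A, S) transition kernel, R an (S, A)
    # reward table, behavior_policy an (S, A) matrix of action probabilities.
    rng = np.random.default_rng(seed)
    S, A = R.shape
    Q = np.zeros((S, A))
    s = rng.integers(S)                              # arbitrary initial state
    for _ in range(num_steps):
        a = rng.choice(A, p=behavior_policy[s])      # action drawn from the behavior policy
        s_next = rng.choice(S, p=P[s, a])            # Markovian transition along one trajectory
        target = R[s, a] + gamma * Q[s_next].max()   # one-step bootstrapped target
        Q[s, a] += eta * (target - Q[s, a])          # constant-step-size update of the visited entry
        s = s_next
    return Q

The entrywise deviation of the returned Q from the optimal Q-function is the quantity whose $\varepsilon$-accuracy the sample complexity bound above characterizes.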