Q-learning with Nearest Neighbors
We consider model-free reinforcement learning for infinite-horizon discounted
Markov Decision Processes (MDPs) with a continuous state space and unknown
transition kernel, when only a single sample path under an arbitrary policy of
the system is available. We consider the Nearest Neighbor Q-Learning (NNQL)
algorithm to learn the optimal Q function using a nearest-neighbor regression
method. As the main contribution, we provide a tight finite-sample analysis of
the convergence rate. In particular, for MDPs with a $d$-dimensional state
space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path
with "covering time" $L$, we establish that the algorithm is guaranteed to
output an $\epsilon$-accurate estimate of the optimal Q-function using
$\tilde{O}\big(L/(\epsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a
well-behaved MDP, the covering time of the sample path under the purely random
policy scales as $\tilde{O}(1/\epsilon^d)$, so the sample complexity scales as
$\tilde{O}(1/\epsilon^{d+3})$. Indeed, we establish a lower bound showing that
a dependence of $\tilde{\Omega}(1/\epsilon^{d+2})$ is necessary. Comment: Accepted to NIPS 2018
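
To make the algorithmic idea concrete, here is a minimal sketch of a
single-neighbor variant of Q-learning along one sample path. It is our
simplified illustration, not the paper's exact NNQL (which performs
nearest-neighbor regression over a covering of the state space with batched
updates); the names path and centers and the constant learning rate are
assumptions for the example.

    import numpy as np

    # Simplified nearest-neighbor Q-learning sketch (illustrative only).
    # Q-values live on a finite set of anchor states `centers`, conceptually
    # an epsilon-net of the continuous state space; each observed transition
    # updates the Q-value stored at the nearest anchor.
    def nn_q_learning(path, centers, n_actions, gamma=0.99, lr=0.5):
        q = np.zeros((len(centers), n_actions))

        def nearest(s):
            # Index of the anchor closest to state s (nearest-neighbor projection).
            return int(np.argmin(np.linalg.norm(centers - s, axis=1)))

        for s, a, r, s_next in path:           # single sample path, arbitrary policy
            i, j = nearest(s), nearest(s_next)
            target = r + gamma * q[j].max()    # Q-learning target read at anchors
            q[i, a] += lr * (target - q[i, a])
        return q
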
Tight Performance Bounds for Approximate Modified Policy Iteration with Non-Stationary Policies
We consider approximate dynamic programming for the infinite-horizon
stationary $\gamma$-discounted optimal control problem formalized by Markov
Decision Processes. While in the exact case it is known that there always
exists an optimal policy that is stationary, we show that when using value
function approximation, looking for a non-stationary policy may lead to a
better performance guarantee. We define a non-stationary variant of MPI that
unifies a broad family of approximate DP algorithms from the literature. For this
algorithm we provide an error propagation analysis in the form of a performance
bound of the resulting policies that can improve the usual performance bound
by a factor $O(1-\gamma)$, which is significant when the discount factor
$\gamma$ is close to 1. In doing so, our approach unifies recent results for Value and
Policy Iteration. Furthermore, we show, by constructing a specific
deterministic MDP, that our performance guarantee is tight.
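
For reference, the following is a minimal sketch of exact Modified Policy
Iteration on a finite MDP, the scheme whose approximate, non-stationary
variant the paper analyzes; the paper's AMPI additionally allows an error at
each step and builds non-stationary policies from the stored iterates. The
names P (an action-by-state-by-state transition tensor) and R
(action-by-state rewards) are assumptions for the example.

    import numpy as np

    # Exact Modified Policy Iteration sketch (illustrative only): greedy
    # improvement followed by m steps of partial policy evaluation,
    # v <- (T_pi)^m v. With m = 1 this is Value Iteration; m -> infinity
    # recovers Policy Iteration.
    def modified_policy_iteration(P, R, gamma=0.95, m=5, n_iter=100):
        n_actions, n_states, _ = P.shape
        v = np.zeros(n_states)
        iterates = []                           # a non-stationary policy can
        for _ in range(n_iter):                 # later cycle through these
            q = R + gamma * P @ v               # q[a, s]: one-step lookahead
            pi = q.argmax(axis=0)               # greedy policy w.r.t. v
            iterates.append(pi)
            for _ in range(m):                  # partial evaluation of pi
                q = R + gamma * P @ v
                v = q[pi, np.arange(n_states)]
        return v, iterates
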
On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes
We consider infinite-horizon stationary $\gamma$-discounted Markov Decision
Processes, for which it is known that there exists a stationary optimal policy.
Using Value and Policy Iteration with some error $\epsilon$ at each iteration,
it is well known that one can compute stationary policies that are
$\frac{2\gamma}{(1-\gamma)^2}\epsilon$-optimal. After arguing that this
guarantee is tight, we develop variations of Value and Policy Iteration for
computing non-stationary policies that can be up to
$\frac{2\gamma}{1-\gamma}\epsilon$-optimal, which constitutes a significant
improvement in the usual situation when $\gamma$ is close to 1. Surprisingly,
this shows that the problem of "computing near-optimal non-stationary policies"
is much simpler than that of "computing near-optimal stationary policies".
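
To illustrate the construction, here is a sketch (our paraphrase, not the
paper's pseudocode, with the same assumed names P and R as above): run Value
Iteration, keep the last m greedy policies, and at time t act with the policy
of index t mod m among them, looping periodically, instead of using the final
policy alone.

    import numpy as np

    # Periodic non-stationary policy built from Value Iteration iterates
    # (illustrative sketch only).
    def value_iteration_policies(P, R, gamma=0.95, n_iter=200):
        v = np.zeros(P.shape[1])
        greedy = []
        for _ in range(n_iter):
            q = R + gamma * P @ v            # Bellman lookahead for every action
            greedy.append(q.argmax(axis=0))  # greedy policy at this iteration
            v = q.max(axis=0)                # Value Iteration update
        return greedy

    def periodic_policy(greedy, m):
        # At time t, act with the (t mod m)-th of the last m greedy policies,
        # looping pi_k, pi_{k-1}, ..., pi_{k-m+1} periodically.
        last = greedy[-m:][::-1]
        return lambda t, state: int(last[t % m][state])
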