Q-learning with Nearest Neighbors
We consider model-free reinforcement learning for infinite-horizon discounted
Markov Decision Processes (MDPs) with a continuous state space and unknown
transition kernel, when only a single sample path under an arbitrary policy of
the system is available. We consider the Nearest Neighbor Q-Learning (NNQL)
algorithm to learn the optimal Q function using a nearest neighbor regression
method. As the main contribution, we provide a tight finite-sample analysis of
the convergence rate. In particular, for MDPs with a $d$-dimensional state
space and discount factor $\gamma \in (0,1)$, given an arbitrary sample
path with "covering time" $L$, we establish that the algorithm is guaranteed
to output an $\varepsilon$-accurate estimate of the optimal Q-function using
$\tilde{O}\!\left(L/(\varepsilon^3(1-\gamma)^7)\right)$ samples. For instance, for a
well-behaved MDP, the covering time of the sample path under the purely random
policy scales as $\tilde{O}(1/\varepsilon^d)$, so the sample complexity scales as
$\tilde{O}(1/\varepsilon^{d+3})$. Indeed, we establish a lower bound arguing that a
dependence of $\tilde{\Omega}(1/\varepsilon^{d+2})$ is necessary.

Comment: Accepted to NIPS 2018
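The core update is easy to state concretely. Below is a minimal sketch of the
idea, not the paper's exact algorithm: it assumes a fixed set of anchor points
discretizing the state space, uses a single nearest anchor in place of full
nearest-neighbor regression, and takes a constant step size; the function name
`nnql` and all parameters are illustrative.

```python
import numpy as np

def nnql(transitions, anchors, n_actions, gamma=0.9, alpha=0.1):
    """Sketch of nearest-neighbor Q-learning from a single sample path.

    transitions: iterable of (state, action, reward, next_state) tuples,
                 where states are 1-D numpy arrays of dimension d.
    anchors:     (m, d) array of anchor points discretizing the state space.
    Returns an (m, n_actions) table of Q-value estimates at the anchors.
    """
    Q = np.zeros((len(anchors), n_actions))

    def nearest(s):
        # Index of the anchor closest to state s (Euclidean distance).
        return np.argmin(np.linalg.norm(anchors - s, axis=1))

    for s, a, r, s_next in transitions:
        i, j = nearest(s), nearest(s_next)
        # Standard Q-learning target, evaluated by nearest-neighbor lookup.
        target = r + gamma * Q[j].max()
        Q[i, a] += alpha * (target - Q[i, a])
    return Q
```

The anchor spacing plays the role of the bandwidth in the paper's analysis:
finer anchors reduce bias but require a longer covering time of the sample path.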
Weak consistency of the 1-nearest neighbor measure with applications to missing data
When data is partially missing at random, imputation and importance weighting
are often used to estimate moments of the unobserved population. In this paper,
we study 1-nearest neighbor (1NN) importance weighting, which estimates moments
by replacing missing data with the complete data that is the nearest neighbor
in the non-missing covariate space. We define an empirical measure, the 1NN
measure, and show that it is weakly consistent for the measure of the missing
data. The main idea behind this result is that the 1NN measure is performing
inverse probability weighting in the limit. We study applications to missing
data and to mitigating the impact of covariate shift in prediction tasks.
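The estimator itself is simple to write down. Here is a minimal sketch of the
imputation view, under assumed names and interfaces (the function
`one_nn_moment_estimate` is hypothetical, not from the paper): each missing
outcome is replaced by the outcome of the nearest complete case in covariate
space, and the sample mean is taken.

```python
import numpy as np

def one_nn_moment_estimate(X, y, observed, f=lambda v: v):
    """Sketch of 1NN imputation for estimating E[f(Y)].

    X:        (n, d) covariates, fully observed.
    y:        (n,) outcomes; entries where observed is False are missing.
    observed: (n,) boolean mask of complete cases.
    """
    X_obs, y_obs = X[observed], y[observed]
    values = np.empty(len(X))
    for i, x in enumerate(X):
        if observed[i]:
            values[i] = f(y[i])
        else:
            # Impute with the outcome of the nearest complete case.
            j = np.argmin(np.linalg.norm(X_obs - x, axis=1))
            values[i] = f(y_obs[j])
    return values.mean()
```

Equivalently, each complete case receives weight proportional to one plus the
number of incomplete points for which it is the nearest neighbor; this is the
importance-weighting view, which in the limit behaves like inverse probability
weighting as the abstract describes.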
An adaptive nearest neighbor rule for classification
We introduce a variant of the $k$-nearest neighbor classifier in which $k$ is
chosen adaptively for each query, rather than supplied as a parameter. The
choice of $k$ depends on properties of each neighborhood, and therefore may
vary significantly between different points. (For example, the algorithm will
use larger $k$ for predicting the labels of points in noisy regions.)
We provide theory and experiments that demonstrate that the algorithm
performs comparably to, and sometimes better than, $k$-NN with an optimal
choice of $k$. In particular, we derive bounds on the convergence rates of our
classifier that depend on a local quantity we call the 'advantage', which is
significantly weaker than the Lipschitz conditions used in previous convergence
rate proofs. These generalization bounds hinge on a variant of the seminal
Uniform Convergence Theorem due to Vapnik and Chervonenkis; this variant
concerns conditional probabilities and may be of independent interest.
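One way to realize a per-query adaptive $k$ is to grow the neighborhood until
the local label majority is statistically significant. The sketch below uses a
stopping threshold of the form margin $\times \sqrt{k}$; this is in the spirit
of the paper's advantage-based rule but is not its exact definition, and the
`margin` parameter is an assumed tuning knob.

```python
import numpy as np

def adaptive_knn_predict(X_train, y_train, x, margin=1.0):
    """Sketch of a per-query adaptive k-NN rule for labels in {-1, +1}.

    Grows k until the cumulative label sum of the k nearest neighbors
    exceeds margin * sqrt(k); noisier neighborhoods therefore trigger
    larger k automatically.
    """
    order = np.argsort(np.linalg.norm(X_train - x, axis=1))
    cum = 0.0
    for k, idx in enumerate(order, start=1):
        cum += y_train[idx]
        if abs(cum) >= margin * np.sqrt(k):
            return int(np.sign(cum))
    # No significant majority found: fall back to the overall majority.
    return 1 if cum >= 0 else -1
```

The threshold compares the observed majority against the fluctuation scale of
$k$ random labels, so the rule stops early in clean regions and keeps
expanding in noisy ones.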
Rates of convergence for nearest neighbor estimators with the smoother regression function
In regression analysis one wants to estimate the regression function from
data. In this paper we consider the rate of convergence for the nearest
neighbor estimator in case that the regression function is $(p,C)$-smooth. It
is an open problem whether the optimal rate can be achieved by some nearest
neighbor estimator in case that $p \in (1, 1.5]$. We solve the problem
affirmatively. This is the main result of this paper. Throughout this paper, we
assume that the data is independent and identically distributed, and as an
error criterion we use the expected error.

Comment: 12 pages, 1 table
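For reference, the basic estimator under discussion is the plain $k$-NN local
average below; the estimators that achieve the optimal rate for smoothness
beyond $p = 1$ require refinements of this scheme, so the sketch only fixes
notation (the function name is illustrative).

```python
import numpy as np

def knn_regress(X_train, y_train, x, k):
    """Basic k-nearest-neighbor regression estimate at a query point x:
    the average response of the k training points closest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argsort(dists)[:k]].mean()
```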
Consistency in Models for Distributed Learning under Communication Constraints
Motivated by sensor networks and other distributed settings, several models
for distributed learning are presented. The models differ from classical works
in statistical pattern recognition by allocating observations of an independent
and identically distributed (i.i.d.) sampling process amongst members of a
network of simple learning agents. The agents are limited in their ability to
communicate to a central fusion center and thus, the amount of information
available for use in classification or regression is constrained. For several
basic communication models in both the binary classification and regression
frameworks, we question the existence of agent decision rules and fusion rules
that result in a universally consistent ensemble. The answers to this question
present new issues to consider with regard to universal consistency. Insofar as
these models present a useful picture of distributed scenarios, this paper
addresses the issue of whether or not the guarantees provided by Stone's
Theorem in centralized environments hold in distributed settings.

Comment: To appear in the IEEE Transactions on Information Theory
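To make the setting concrete, here is one simple communication model of the
kind the abstract alludes to, written as a sketch under stated assumptions
(the function name, the one-bit abstain rule, and the fixed radius are
illustrative, not the paper's exact models): each agent holds a single labeled
example and, when queried, sends its one label bit to the fusion center only
if the query falls within a given radius of its example; the fusion center
takes a majority vote.

```python
import numpy as np

def fused_predict(agents, x, radius):
    """Sketch of a distributed classification ensemble.

    agents: list of (x_i, y_i) pairs with y_i in {-1, +1}, one per agent.
    Each agent responds with its label bit only if x is within `radius`
    of its example; the fusion center majority-votes the responses.
    """
    votes = [y_i for x_i, y_i in agents
             if np.linalg.norm(x_i - x) <= radius]
    total = sum(votes)
    if total == 0:
        return 1  # arbitrary default on a tie or when no agent responds
    return 1 if total > 0 else -1
```

Whether such an ensemble can be universally consistent hinges on how the
response region shrinks as the number of agents grows, which is where
Stone-type conditions from the centralized theory enter.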