
    Q-learning with Nearest Neighbors

    We consider model-free reinforcement learning for infinite-horizon discounted Markov Decision Processes (MDPs) with a continuous state space and unknown transition kernel, when only a single sample path of the system under an arbitrary policy is available. We consider the Nearest Neighbor Q-Learning (NNQL) algorithm, which learns the optimal Q-function using a nearest neighbor regression method. As the main contribution, we provide a tight finite-sample analysis of the convergence rate. In particular, for MDPs with a $d$-dimensional state space and discount factor $\gamma \in (0,1)$, given an arbitrary sample path with "covering time" $L$, we establish that the algorithm is guaranteed to output an $\varepsilon$-accurate estimate of the optimal Q-function using $\tilde{O}\big(L/(\varepsilon^3(1-\gamma)^7)\big)$ samples. For instance, for a well-behaved MDP, the covering time of the sample path under the purely random policy scales as $\tilde{O}\big(1/\varepsilon^d\big)$, so the sample complexity scales as $\tilde{O}\big(1/\varepsilon^{d+3}\big)$. Indeed, we establish a lower bound arguing that a dependence of $\tilde{\Omega}\big(1/\varepsilon^{d+2}\big)$ is necessary.
    Comment: Accepted to NIPS 2018
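
    As a rough illustration of the nearest-neighbor update (not the exact NNQL procedure, which organizes updates into covering-time-based epochs), the sketch below discretizes the continuous state space onto a fixed set of representative states and applies a standard Q-learning update at the nearest one; the center set, step size, and transition format are assumptions made for illustration.

```python
import numpy as np

def nnql_sketch(transitions, centers, n_actions, gamma=0.9, alpha=0.1):
    """Nearest-neighbor Q-learning on a fixed set of representative states.

    transitions : iterable of (state, action, reward, next_state) tuples,
                  where states are d-dimensional numpy arrays.
    centers     : (m, d) array of representative states (e.g. an eps-net).
    Returns an (m, n_actions) table of Q-value estimates at the centers.
    """
    centers = np.asarray(centers, dtype=float)
    Q = np.zeros((len(centers), n_actions))

    def nearest(s):
        # index of the center closest to state s
        return int(np.argmin(np.linalg.norm(centers - s, axis=1)))

    for s, a, r, s_next in transitions:
        i = nearest(s)                       # discretize the current state
        j = nearest(s_next)                  # nearest-neighbor value at the next state
        target = r + gamma * Q[j].max()      # one-step Q-learning target
        Q[i, a] += alpha * (target - Q[i, a])
    return Q
```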

    Weak consistency of the 1-nearest neighbor measure with applications to missing data

    When data is partially missing at random, imputation and importance weighting are often used to estimate moments of the unobserved population. In this paper, we study 1-nearest neighbor (1NN) importance weighting, which estimates moments by replacing each missing observation with the complete observation that is its nearest neighbor in the non-missing covariate space. We define an empirical measure, the 1NN measure, and show that it is weakly consistent for the measure of the missing data. The main idea behind this result is that the 1NN measure performs inverse probability weighting in the limit. We study applications to missing data and to mitigating the impact of covariate shift in prediction tasks.
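
    A minimal sketch of the 1NN idea, assuming fully observed covariates, partially observed outcomes, and a Euclidean metric on the covariate space: each missing outcome is copied from its nearest complete case before the moment is taken. The function and argument names are illustrative, not from the paper.

```python
import numpy as np

def one_nn_moment_estimate(x, y, observed, f=lambda v: v):
    """Estimate E[f(Y)] when some outcomes are missing at random.

    x        : (n, d) covariates, fully observed.
    y        : (n,) outcomes, valid only where `observed` is True.
    observed : (n,) boolean mask of complete cases.
    f        : moment function; the default gives the mean of Y.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    x_obs, y_obs = x[observed], y[observed]
    y_filled = y.copy()
    for i in np.where(~observed)[0]:
        # nearest complete case in the covariate space
        j = int(np.argmin(np.linalg.norm(x_obs - x[i], axis=1)))
        y_filled[i] = y_obs[j]               # 1NN imputation
    return float(np.mean(f(y_filled)))       # moment under the 1NN measure
```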

    An adaptive nearest neighbor rule for classification

    We introduce a variant of the $k$-nearest neighbor classifier in which $k$ is chosen adaptively for each query, rather than supplied as a parameter. The choice of $k$ depends on properties of each neighborhood, and therefore may significantly vary between different points. (For example, the algorithm will use larger $k$ for predicting the labels of points in noisy regions.) We provide theory and experiments that demonstrate that the algorithm performs comparably to, and sometimes better than, $k$-NN with an optimal choice of $k$. In particular, we derive bounds on the convergence rates of our classifier that depend on a local quantity we call the `advantage', which is significantly weaker than the Lipschitz conditions used in previous convergence rate proofs. These generalization bounds hinge on a variant of the seminal Uniform Convergence Theorem due to Vapnik and Chervonenkis; this variant concerns conditional probabilities and may be of independent interest.
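
    The sketch below is a simplified stand-in for such an adaptive rule: it scans neighbors in order of increasing distance and stops at the first k whose label sum clearly dominates its sampling noise, so noisier regions automatically get a larger effective k. The sqrt(k) threshold and the `conf` constant are assumptions for illustration, not the paper's exact `advantage' condition.

```python
import numpy as np

def adaptive_nn_predict(x_train, y_train, query, conf=1.0):
    """Adaptive nearest-neighbor prediction for binary labels in {-1, +1}.

    Neighbors are scanned in order of increasing distance; a label is
    returned as soon as the running label sum dominates its sampling noise,
    i.e. |sum of first k labels| > conf * sqrt(k).
    """
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train)
    order = np.argsort(np.linalg.norm(x_train - query, axis=1))
    running_sum = 0.0
    for k, idx in enumerate(order, start=1):
        running_sum += y_train[idx]
        if abs(running_sum) > conf * np.sqrt(k):   # neighborhood is decisive
            return 1 if running_sum > 0 else -1
    return 1 if running_sum >= 0 else -1           # fall back to a global vote
```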

    Rates of convergence for nearest neighbor estimators with the smoother regression function

    In regression analysis one wants to estimate the regression function from data. In this paper we consider the rate of convergence of the nearest neighbor estimator in the case that the regression function is $(p,C)$-smooth. It is an open problem whether the optimal rate can be achieved by some nearest neighbor estimator when $p$ lies in $(1,1.5]$. We solve the problem affirmatively. This is the main result of this paper. Throughout this paper, we assume that the data is independent and identically distributed, and as an error criterion we use the expected $L_2$ error.
    Comment: 12 pages, 1 table
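
    For context, a plain k-nearest-neighbor regression estimate and its empirical L2 error are sketched below; the paper analyzes a refined nearest neighbor estimator that attains the optimal rate for (p,C)-smooth regression functions, so this is only the basic local-averaging building block, with k left as a free parameter.

```python
import numpy as np

def knn_regression(x_train, y_train, query, k):
    """Plain k-nearest-neighbor regression estimate at a single query point:
    the average response of the k training points closest to the query."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    dist = np.linalg.norm(x_train - query, axis=1)
    nearest = np.argsort(dist)[:k]
    return float(y_train[nearest].mean())

def empirical_l2_error(x_test, y_test, predict):
    """Empirical L2 error of a predictor on a held-out sample, the
    finite-sample analogue of the expected L2 error criterion."""
    preds = np.array([predict(q) for q in np.asarray(x_test, dtype=float)])
    return float(np.mean((preds - np.asarray(y_test, dtype=float)) ** 2))

# usage: empirical_l2_error(x_te, y_te, lambda q: knn_regression(x_tr, y_tr, q, k=5))
```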

    Consistency in Models for Distributed Learning under Communication Constraints

    Motivated by sensor networks and other distributed settings, several models for distributed learning are presented. The models differ from classical works in statistical pattern recognition by allocating observations of an independent and identically distributed (i.i.d.) sampling process amongst members of a network of simple learning agents. The agents are limited in their ability to communicate to a central fusion center, and thus the amount of information available for use in classification or regression is constrained. For several basic communication models in both the binary classification and regression frameworks, we question the existence of agent decision rules and fusion rules that result in a universally consistent ensemble. The answers to this question present new issues to consider with regard to universal consistency. Insofar as these models present a useful picture of distributed scenarios, this paper addresses the issue of whether or not the guarantees provided by Stone's Theorem in centralized environments hold in distributed settings.
    Comment: To appear in the IEEE Transactions on Information Theory
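
    As a toy instance of a communication-constrained ensemble (an illustrative simplification, not one of the specific models analyzed in the paper), the sketch below gives each agent a single labeled example and a one-symbol message, abstain or vote its label, which a fusion center aggregates by majority vote; the abstention radius is an arbitrary assumption.

```python
import numpy as np

def agent_message(x_i, y_i, query, radius):
    """One ternary message per agent: vote the stored label (+1 or -1) if the
    query lies within `radius` of the agent's single example, else abstain (0)."""
    return int(y_i) if np.linalg.norm(np.asarray(x_i, dtype=float) - query) <= radius else 0

def fusion_rule(x_train, y_train, query, radius=0.5):
    """Majority vote over the agents' messages; ties default to +1."""
    votes = sum(agent_message(x, y, query, radius) for x, y in zip(x_train, y_train))
    return 1 if votes >= 0 else -1
```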