Rates of convergence in active learning
We study the rates of convergence in generalization error achievable by
active learning under various types of label noise. Additionally, we study the
general problem of model selection for active learning with a nested hierarchy
of hypothesis classes and propose an algorithm whose error rate provably
converges to the best achievable error among classifiers in the hierarchy at a
rate adaptive to both the complexity of the optimal classifier and the noise
conditions. In particular, we state sufficient conditions for these rates to be
dramatically faster than those achievable by passive learning.
Comment: Published at http://dx.doi.org/10.1214/10-AOS843 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).
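The label-complexity gains these rates describe are easiest to see in the
simplest setting. Below is a minimal sketch, assuming noise-free 1-D threshold
classifiers, of disagreement-based active learning; it illustrates the general
technique rather than the paper's algorithm, and the target threshold, pool,
and budget are made-up values.

```python
import random

# A minimal illustrative sketch (not the paper's algorithm) of
# disagreement-based active learning for 1-D threshold classifiers
# h_t(x) = 1 if x >= t, in the noise-free (realizable) case. The
# target threshold, pool, and budget are assumptions for the example.

T_STAR = 0.37  # hypothetical target threshold

def true_label(x):
    return int(x >= T_STAR)

def active_threshold_learner(pool, budget):
    lo, hi = 0.0, 1.0  # version space: thresholds still consistent with data
    queries = 0
    for x in pool:
        if not (lo < x < hi):
            continue  # outside the region of disagreement: all remaining
                      # thresholds agree on x, so no label is requested
        if queries == budget:
            break
        queries += 1
        if true_label(x) == 1:
            hi = x  # target threshold must be <= x
        else:
            lo = x  # target threshold must be > x
    return (lo + hi) / 2, queries

random.seed(0)
pool = [random.random() for _ in range(10_000)]
estimate, used = active_threshold_learner(pool, budget=30)
print(f"estimated threshold {estimate:.4f} after {used} label queries")
```

Because every queried point lies in the shrinking region of disagreement, the
number of label requests grows roughly logarithmically in the pool size,
whereas passive learning would label the entire pool.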
Robust Interactive Learning
In this paper we propose and study a generalization of the standard
active-learning model where a more general type of query, class conditional
query, is allowed. Such queries have been quite useful in applications but
have lacked theoretical understanding. In this work, we characterize the
power of such queries under two well-known noise models. We give nearly tight
upper and lower bounds on the number of queries needed to learn in both the
general agnostic setting and the bounded noise model. We further show that
our methods can be made adaptive to the (unknown) noise rate, with only
negligible loss in query complexity.
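To make the query model concrete, here is a minimal sketch of one common
formalization of a class conditional query: the learner hands the oracle a set
S of unlabeled points and a label y, and the oracle returns some point of S
carrying label y, or reports that none exists. The noise-free oracle, pool,
and target threshold below are illustrative assumptions, not the paper's
construction.

```python
from typing import Optional, Sequence

def make_oracle(labels: dict):
    """Noise-free class-conditional-query oracle over a fixed
    labeled pool (an assumption for this sketch)."""
    def query(S: Sequence[float], y: int) -> Optional[float]:
        for x in S:
            if labels[x] == y:
                return x   # any witness with label y may be returned
        return None        # no point in S has label y
    return query

# Hypothetical pool and target concept: label 1 iff x >= 0.37.
pool = [i / 100 for i in range(100)]
labels = {x: int(x >= 0.37) for x in pool}
query = make_oracle(labels)

# Binary search for the first positive point using one
# class conditional query per round.
lo, hi = 0, len(pool)  # invariant: first positive index lies in [lo, hi)
while hi - lo > 1:
    mid = (lo + hi) // 2
    if query(pool[lo:mid], 1) is None:
        lo = mid  # left half contains no positives
    else:
        hi = mid  # a positive witness exists in the left half
print("first positive example at", pool[lo])
```

The binary search above locates the decision boundary of a threshold
classifier with about log2(|pool|) class conditional queries, hinting at why
such queries can be far more powerful than individual label requests.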