Rates of convergence in active learning
We study the rates of convergence in generalization error achievable by
active learning under various types of label noise. Additionally, we study the
general problem of model selection for active learning with a nested hierarchy
of hypothesis classes and propose an algorithm whose error rate provably
converges to the best achievable error among classifiers in the hierarchy at a
rate adaptive to both the complexity of the optimal classifier and the noise
conditions. In particular, we state sufficient conditions for these rates to be
dramatically faster than those achievable by passive learning.

Comment: Published at http://dx.doi.org/10.1214/10-AOS843 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
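The gap between active and passive rates described in the abstract can be seen in a toy setting. The sketch below is illustrative only, not the paper's algorithm: it learns a 1-D threshold classifier on [0, 1] with noiseless labels, where passive learning draws random labeled points (error shrinking roughly like 1/n) while active learning spends the same label budget on a binary search over the version space (error shrinking like 2^-n). The threshold value and function names are hypothetical choices for the demonstration.

```python
import random

# Toy illustration (not the paper's algorithm): learn a 1-D threshold
# classifier x -> 1[x >= t] on [0, 1] with noiseless labels. The "error"
# of an estimate t_hat is |t_hat - t*|, the probability mass on which the
# estimated and true thresholds disagree under the uniform distribution.

TRUE_T = 0.37  # hypothetical target threshold (assumption for the demo)

def label(x):
    """Noiseless oracle label for point x."""
    return 1 if x >= TRUE_T else 0

def passive_estimate(n, rng):
    """Passive learning: draw n random labeled points and return the
    midpoint between the largest 0-labeled and smallest 1-labeled point."""
    lo, hi = 0.0, 1.0
    for _ in range(n):
        x = rng.random()
        if label(x) == 0:
            lo = max(lo, x)
        else:
            hi = min(hi, x)
    return (lo + hi) / 2

def active_estimate(n, rng):
    """Active learning: spend the n label queries on a binary search,
    halving the remaining version space [lo, hi] with every query."""
    lo, hi = 0.0, 1.0
    for _ in range(n):
        mid = (lo + hi) / 2
        if label(mid) == 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

if __name__ == "__main__":
    rng = random.Random(0)
    n = 20
    print("passive error:", abs(passive_estimate(n, rng) - TRUE_T))
    print("active error: ", abs(active_estimate(n, rng) - TRUE_T))
```

With the same budget of 20 labels, the active estimate is within 2^-21 of the true threshold, while the passive estimate's error is on the order of 1/20: a concrete instance of the "dramatically faster" rates the abstract refers to, here in the easiest (noiseless, realizable) case.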