1,095 research outputs found
Active learning using arbitrary binary valued queries
Cover title. Includes bibliographical references (leaves 11-12). Research supported by the U.S. Army Research Office (DAAL03-86-K-0171), the National Science Foundation (ECS-8552419), and the Department of the Navy under an Air Force contract (F19628-90-C-0002). S.R. Kulkarni, S.K. Mitter, J.N. Tsitsiklis
Maximum Margin Multiclass Nearest Neighbors
We develop a general framework for margin-based multicategory classification
in metric spaces. The basic work-horse is a margin-regularized version of the
nearest-neighbor classifier. We prove generalization bounds that match the
state of the art in sample size and significantly improve the dependence on
the number of classes. Our point of departure is a nearly Bayes-optimal
finite-sample risk bound that is independent of the number of classes. Although
free of this dependence, that bound is unregularized and non-adaptive, which
motivates our main result: Rademacher and scale-sensitive margin bounds with a
logarithmic dependence on the number of classes. As the best previous risk
estimates in this setting grew linearly in the number of classes, our bound is
exponentially sharper. From the algorithmic standpoint, in doubling metric
spaces our classifier admits efficient training on the sample and fast
evaluation on new points.
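A margin-regularized nearest-neighbor rule of the kind the abstract describes can be illustrated in a few lines. The sketch below is an assumption-laden toy (Euclidean metric, a hypothetical prune-by-margin rule, invented data), not the paper's actual construction: each training point's margin is its distance to the closest differently-labeled point, and points with margin below a threshold are discarded before 1-NN prediction.

```python
import math

# Illustrative sketch only -- not the paper's algorithm: a 1-nearest-neighbor
# classifier in the plane, "margin-regularized" by discarding training points
# whose margin (distance to the closest differently-labeled point) falls
# below a threshold gamma. All names and data here are invented.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def margin(i, X, y):
    # Distance from training point i to the nearest point with another label.
    return min(dist(X[i], X[j]) for j in range(len(X)) if y[j] != y[i])

def fit(X, y, gamma):
    # Prune points whose margin is below gamma; keep the rest as prototypes.
    keep = [i for i in range(len(X)) if margin(i, X, y) >= gamma]
    return [X[i] for i in keep], [y[i] for i in keep]

def predict(Xp, yp, q):
    # Plain 1-NN prediction over the pruned prototype set.
    return yp[min(range(len(Xp)), key=lambda j: dist(Xp[j], q))]

X = [(0, 0), (0, 1), (5, 5), (5, 6), (4, 4.2)]  # last "a" point crowds the "b" cluster
y = ["a", "a", "b", "b", "a"]
Xp, yp = fit(X, y, gamma=2.0)
print(predict(Xp, yp, (4.2, 4.4)))  # the crowding point is pruned, so this prints b
```

Without pruning, the query `(4.2, 4.4)` would land on the noisy `"a"` point; pruning low-margin points restores the `"b"` prediction, which is the intuition behind margin regularization.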
PAC learning with generalized samples and an application to stochastic geometry
Includes bibliographical references (p. 16-17). Caption title. Research supported by the National Science Foundation (ECS-8552419), the U.S. Army Research Office (DAAL01-86-K-0171), and the Department of the Navy under an Air Force contract (F19628-90-C-0002). S.R. Kulkarni ... [et al.]
Online Local Learning via Semidefinite Programming
In many online learning problems we are interested in predicting local
information about some universe of items. For example, we may want to know
whether two items are in the same cluster rather than computing an assignment
of items to clusters; we may want to know which of two teams will win a game
rather than computing a ranking of teams. Although finding the optimal
clustering or ranking is typically intractable, it may be possible to predict
the relationships between items as well as if you could solve the global
optimization problem exactly.
Formally, we consider an online learning problem in which a learner
repeatedly guesses a pair of labels (l(x), l(y)) and receives an adversarial
payoff depending on those labels. The learner's goal is to receive a payoff
nearly as good as the best fixed labeling of the items. We show that a simple
algorithm based on semidefinite programming can obtain asymptotically optimal
regret in the case where the number of possible labels is O(1), resolving an
open problem posed by Hazan, Kale, and Shalev-Shwartz. Our main technical
contribution is a novel use and analysis of the log determinant regularizer,
exploiting the observation that log det(A + I) upper bounds the entropy of any
distribution with covariance matrix A.
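The pairwise-guessing game in this abstract can be made concrete with a toy simulation. The sketch below is an assumption, not the paper's semidefinite-programming algorithm: it plays brute-force follow-the-leader over all K^n labelings (feasible only because n and K are tiny) against random payoff tables, and measures regret against the best fixed labeling in hindsight. The payoff model and all names are invented for illustration.

```python
import itertools
import random

# Toy sketch of the online local-learning game (not the paper's SDP method).
# Universe of n items, K labels. Each round the adversary picks a pair (x, y)
# and a payoff table over label pairs; the learner guesses labels for x and y
# before seeing the table and earns the corresponding entry. Regret is
# measured against the best single fixed labeling of all items in hindsight.

random.seed(0)
n, K, T = 4, 2, 200

history = []          # (x, y, payoff_table) per round
learner_payoff = 0.0

def total(lab):
    # Cumulative payoff of a fixed labeling over all past rounds.
    return sum(tb[lab[a]][lab[b]] for a, b, tb in history)

for t in range(T):
    x, y = random.sample(range(n), 2)
    table = [[random.uniform(-1, 1) for _ in range(K)] for _ in range(K)]
    # Follow-the-leader: play the labeling that did best on past rounds.
    leader = max(itertools.product(range(K), repeat=n), key=total)
    learner_payoff += table[leader[x]][leader[y]]
    history.append((x, y, table))

# Best fixed labeling in hindsight.
opt = max(total(lab) for lab in itertools.product(range(K), repeat=n))
regret = opt - learner_payoff
print(round(regret, 2))
```

Follow-the-leader is exponential in n here; the point of the paper's log-det-regularized SDP is precisely to get comparable guarantees without enumerating labelings.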