Optimal rates of aggregation in classification under low noise assumption
In the same spirit as Tsybakov (2003), we define the optimality of an
aggregation procedure in the problem of classification. Using an aggregate with
exponential weights, we obtain an optimal rate of convex aggregation for the
hinge risk under the margin assumption. Moreover, we obtain an optimal rate of
model selection aggregation under the margin assumption for the excess Bayes
risk.
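The exponential-weights aggregate described above (and reused in the adaptation paper below) can be caricatured as follows. This is a minimal sketch under illustrative assumptions: the temperature `beta`, the toy dictionary of base classifiers, and the sample are all invented for demonstration, not taken from the paper.

```python
import numpy as np

def hinge_loss(f_values, y):
    """Empirical hinge risk of real-valued predictions against labels y in {-1, +1}."""
    return np.mean(np.maximum(0.0, 1.0 - y * f_values))

def exponential_weights(predictions, y, beta=1.0):
    """predictions: (M, n) array, one row of predictions per dictionary element.
    Returns convex weights w_j proportional to exp(-n * beta * hinge_risk_j),
    so low-risk elements dominate the aggregate."""
    n = len(y)
    risks = np.array([hinge_loss(p, y) for p in predictions])
    logits = -n * beta * risks
    logits -= logits.max()          # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()

# Toy example: three base classifiers evaluated on a five-point sample
y = np.array([1, -1, 1, 1, -1])
preds = np.array([
    [0.9, -0.8, 0.7, 1.2, -1.1],   # low hinge risk
    [-0.5, 0.5, -0.5, 0.5, 0.5],   # high hinge risk
    [0.2, -0.1, 0.3, 0.1, -0.2],   # correct signs, weak margins
])
w = exponential_weights(preds, y, beta=1.0)
aggregate = w @ preds               # convex aggregate of the predictions
```

The weights form a convex combination, so the aggregate stays inside the convex hull of the dictionary, which is what a convex-aggregation rate refers to.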
Active Nearest-Neighbor Learning in Metric Spaces
We propose a pool-based non-parametric active learning algorithm for general
metric spaces, called MArgin Regularized Metric Active Nearest Neighbor
(MARMANN), which outputs a nearest-neighbor classifier. We give prediction
error guarantees that depend on the noisy-margin properties of the input
sample, and are competitive with those obtained by previously proposed passive
learners. We prove that the label complexity of MARMANN is significantly lower
than that of any passive learner with similar error guarantees. MARMANN is
based on a generalized sample compression scheme, and a new label-efficient
active model-selection procedure.
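For intuition, pool-based active nearest-neighbor learning can be caricatured by uncertainty sampling: repeatedly query the pool point whose gap between distances to the two nearest labeled classes is smallest. This is NOT the MARMANN algorithm (which rests on sample compression and a label-efficient model selection); the margin criterion, budget, and toy data below are illustrative assumptions.

```python
import numpy as np

def nn_margin(x, labeled_X, labeled_y):
    """Gap between the distances from x to its nearest neighbor in each
    of the two closest labeled classes; a small gap means x is ambiguous."""
    d = np.linalg.norm(labeled_X - x, axis=1)
    classes = np.unique(labeled_y)
    if len(classes) < 2:
        return 0.0
    per_class = sorted(d[labeled_y == c].min() for c in classes)
    return per_class[1] - per_class[0]

def active_nn(pool_X, oracle, budget, seed_idx):
    """Query `oracle` on the most ambiguous pool points until `budget`
    labels are spent; returns the queried indices and their labels."""
    labeled = list(seed_idx)
    y = {i: oracle(i) for i in labeled}
    while len(labeled) < budget:
        lX = pool_X[labeled]
        ly = np.array([y[i] for i in labeled])
        unlabeled = [i for i in range(len(pool_X)) if i not in y]
        i_star = min(unlabeled, key=lambda i: nn_margin(pool_X[i], lX, ly))
        y[i_star] = oracle(i_star)
        labeled.append(i_star)
    return labeled, y

# Toy pool: two well-separated 1-D clusters
rng = np.random.default_rng(0)
pool_X = np.vstack([rng.normal(-2, 0.3, (10, 1)), rng.normal(2, 0.3, (10, 1))])
true_y = np.array([0] * 10 + [1] * 10)
labeled, y = active_nn(pool_X, oracle=lambda i: true_y[i], budget=6, seed_idx=[0, 10])
```

The queried set then defines a nearest-neighbor classifier; the point of the active setting is that the budget can be far smaller than the pool size.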
Simultaneous adaptation to the margin and to complexity in classification
We consider the problem of adaptation to the margin and to complexity in
binary classification. We suggest an exponential weighting aggregation scheme.
We use this aggregation procedure to construct classifiers which adapt
automatically to margin and complexity. Two main examples are worked out in
which adaptivity is achieved in frameworks proposed by Steinwart and Scovel
[Learning Theory. Lecture Notes in Comput. Sci. 3559 (2005) 279--294. Springer,
Berlin; Ann. Statist. 35 (2007) 575--607] and Tsybakov [Ann. Statist. 32 (2004)
135--166]. Adaptive schemes, like ERM or penalized ERM, usually involve a
minimization step. This is not the case for our procedure. Comment: Published
at http://dx.doi.org/10.1214/009053607000000055 in the Annals of Statistics
(http://www.imstat.org/aos/) by the Institute of Mathematical Statistics
(http://www.imstat.org).
Inhibition in multiclass classification
The role of inhibition is investigated in a multiclass support vector machine formalism inspired by the brain structure of insects. The so-called mushroom bodies have a set of output neurons, or classification functions,
that compete with each other to encode a particular input. Strongly active output neurons depress or inhibit the remaining outputs without knowing which is correct or incorrect. Accordingly, we propose to use a
classification function that embodies unselective inhibition and train it in the large-margin classifier framework. Inhibition leads to more robust classifiers in the sense that they perform well over larger regions of hyperparameter space when assessed with leave-one-out strategies. We also show that the classifier with inhibition is a tight bound on probabilistic exponential models and is Bayes consistent for 3-class problems.
These properties make this approach useful for data sets with a limited number of labeled examples. For larger data sets, there is no significant comparative advantage over other multiclass SVM approaches.
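One way to picture unselective inhibition in a large-margin loss: the correct output must dominate the summed activity of all other outputs, so every active competitor depresses the margin, not only the single strongest one (as in, e.g., the Crammer-Singer multiclass hinge). The exact loss form and the inhibition strength `lam` below are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def inhibition_hinge_loss(scores, y, lam=0.5):
    """scores: (n, K) class activations; y: (n,) integer labels in [0, K).
    Hinge-type loss where the target activation, reduced by lam times the
    total activity of all other outputs, must exceed the unit margin."""
    n = scores.shape[0]
    f_y = scores[np.arange(n), y]
    others = scores.sum(axis=1) - f_y          # unselective: all competitors count
    return np.mean(np.maximum(0.0, 1.0 - (f_y - lam * others)))

# A confidently dominant activation incurs no loss; diffuse competing
# activity does, even when the top class is still the correct one.
confident = np.array([[3.0, 0.2, 0.1]])
diffuse   = np.array([[0.6, 0.5, 0.5]])
labels = np.array([0])
low  = inhibition_hinge_loss(confident, labels)
high = inhibition_hinge_loss(diffuse, labels)
```

Training a linear or kernel machine under such a loss penalizes diffuse activations across outputs, which is the robustness mechanism the abstract attributes to inhibition.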