Constant Rate Approximate Maximum Margin Algorithms
We present a new class of perceptron-like algorithms with margin in which the "effective" learning rate, defined as the ratio of the learning rate to the length of the weight vector, remains constant. We prove that the new algorithms converge in a finite number of steps and show that there exists a limit of the parameters involved in which convergence leads to classification with maximum margin.
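As a minimal sketch of the idea described above (not the paper's exact algorithm), the following perceptron-with-margin rescales its step size at every update so that the effective learning rate eta / ||w|| stays at a fixed value; the function name, the relative-margin condition, and all parameter values are illustrative assumptions:

```python
import numpy as np

def margin_perceptron(X, y, eff_rate=0.1, margin=0.5, epochs=100):
    # Hypothetical sketch: perceptron-with-margin updates in which the
    # step size eta is rescaled so that eta / ||w|| (the "effective"
    # learning rate) is held constant at eff_rate.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        updated = False
        for i in range(n):
            norm = np.linalg.norm(w)
            # Keep eta / ||w|| constant; bootstrap with eta = 1 while w = 0.
            eta = eff_rate * norm if norm > 0 else 1.0
            # Update whenever the (relative) margin is violated.
            if y[i] * (w @ X[i]) <= margin * norm:
                w = w + eta * y[i] * X[i]
                updated = True
        if not updated:  # no margin violations: converged
            break
    return w
```

On linearly separable data the loop terminates with a separator whose margin depends on the `margin` parameter; this is only meant to make the "constant effective rate" mechanism concrete.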
From Cutting Planes Algorithms to Compression Schemes and Active Learning
Cutting-plane methods are well-studied localization (and optimization)
algorithms. We show that they provide a natural framework to perform
machine learning --- and not just to solve optimization problems posed by
machine learning --- in addition to their intended optimization use. In
particular, they allow one to learn sparse classifiers and provide good
compression schemes. Moreover, we show that very little effort is required
to turn them into effective active learning methods. This last property
provides a generic way to design a whole family of active learning
algorithms from existing passive methods. We present numerical simulations
testifying to the relevance of cutting-plane methods for passive and
active learning tasks.
Comment: IJCNN 2015, Jul 2015, Killarney, Ireland. 2015, http://www.ijcnn.org/
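The generic passive-to-active recipe mentioned in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's construction: it uses the distance of an unlabeled point to the current linear decision boundary as a proxy for how much a query would "cut" the version space, and a plain least-squares fit as the stand-in passive learner; all names are hypothetical:

```python
import numpy as np

def active_learn(X, oracle, n_queries, fit):
    # Hypothetical sketch of a passive-to-active wrapper: maintain a
    # labeled pool, refit the passive learner, and query the unlabeled
    # point closest to the current decision boundary.
    labeled, labels = [0], [oracle(0)]  # seed with one labeled example
    for _ in range(n_queries - 1):
        w = fit(X[labeled], np.array(labels))
        unlabeled = [i for i in range(len(X)) if i not in labeled]
        scores = [abs(w @ X[i]) for i in unlabeled]
        j = unlabeled[int(np.argmin(scores))]  # most uncertain point
        labeled.append(j)
        labels.append(oracle(j))
    return fit(X[labeled], np.array(labels)), labeled
```

Any passive learner exposing a `fit`-like interface can be dropped in, which is the sense in which the recipe yields a whole family of active methods from existing passive ones.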
Calibrated Surrogate Losses for Classification with Label-Dependent Costs
We present surrogate regret bounds for arbitrary surrogate losses in the
context of binary classification with label-dependent costs. Such bounds relate
a classifier's risk, assessed with respect to a surrogate loss, to its
cost-sensitive classification risk. Two approaches to surrogate regret bounds
are developed. The first is a direct generalization of Bartlett et al. [2006],
who focus on margin-based losses and cost-insensitive classification, while the
second adopts the framework of Steinwart [2007] based on calibration functions.
Nontrivial surrogate regret bounds are shown to exist precisely when the
surrogate loss satisfies a "calibration" condition that is easily verified for
many common losses. We apply this theory to the class of uneven margin losses,
and characterize when these losses are properly calibrated. The uneven hinge,
squared error, exponential, and sigmoid losses are then treated in detail.
Comment: 33 pages, 7 figures
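To make the surrogate-vs-target relationship concrete, here is an illustrative cost-weighted hinge (an assumption for illustration, not necessarily the paper's uneven-margin construction): on every sample the weighted hinge dominates the label-dependent 0-1 cost, so the empirical surrogate risk upper-bounds the cost-sensitive classification risk:

```python
import numpy as np

def cost_weighted_hinge(w, X, y, c_fp, c_fn):
    # Illustrative surrogate: each label pays its own misclassification
    # cost (c_fn for positives, c_fp for negatives) times the hinge.
    margins = y * (X @ w)
    costs = np.where(y == 1, c_fn, c_fp)
    return np.mean(costs * np.maximum(0.0, 1.0 - margins))

def cost_sensitive_risk(w, X, y, c_fp, c_fn):
    # The target quantity: empirical label-dependent cost of the
    # classifier sign(X @ w).
    preds = np.sign(X @ w)
    fp = (preds == 1) & (y == -1)   # false positives
    fn = (preds == -1) & (y == 1)   # false negatives
    return np.mean(c_fp * fp + c_fn * fn)
```

Since the hinge is at least 1 whenever the sign is wrong and nonnegative otherwise, the surrogate risk is always at least the cost-sensitive risk; regret bounds of the kind surveyed above quantify how small surrogate regret forces small cost-sensitive regret.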