
Practical Methods for Exploiting Bounds on Change in the Margin

By Andrew Guillory and Jeff Bilmes


We present methods for speeding up the training and evaluation of linear and kernel classifiers by exploiting bounds on the change in the margin of data points. Assuming the classifier's decision boundary changes slowly between iterations, we show that we can avoid recalculating the margin for many data points. We discuss several extensions and applications of this simple technique and show results applying it to gradient descent and stochastic subgradient descent.
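The screening idea described in the abstract can be sketched roughly as follows (a hypothetical illustration, not the authors' actual implementation): by the Cauchy-Schwarz inequality, a point's margin can change by at most ||w_new - w_old|| * ||x|| between successive weight vectors, so any point whose cached margin is farther from the decision threshold than this bound cannot have crossed it and need not be recomputed. All names below (`screened_margins`, the threshold value) are illustrative assumptions.

```python
import numpy as np

def screened_margins(w_new, w_old, X, cached_margins, threshold=1.0):
    """Recompute margins only where the cached value could have crossed
    `threshold`. Uses the Cauchy-Schwarz bound
        |<w_new, x> - <w_old, x>| <= ||w_new - w_old|| * ||x||.
    `cached_margins` holds X @ w_old. Returns updated margins and the
    number of points actually recomputed. Illustrative sketch only."""
    dw = np.linalg.norm(w_new - w_old)          # size of the weight update
    x_norms = np.linalg.norm(X, axis=1)         # per-point norms ||x_i||
    bound = dw * x_norms                        # max possible margin change
    # Points whose cached margin is within `bound` of the threshold
    # may have crossed it, so only these are recomputed.
    stale = np.abs(cached_margins - threshold) <= bound
    margins = cached_margins.copy()
    margins[stale] = X[stale] @ w_new
    return margins, int(stale.sum())
```

When the update `w_new - w_old` is small, as assumed in the paper, most points fall outside the uncertain band and keep their cached margins, which is where the speedup would come from.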

Year: 2011
Provided by: CiteSeerX