
    A Smoothing Algorithm for l1 Support Vector Machines

    A smoothing algorithm is presented for solving the soft-margin Support Vector Machine (SVM) optimization problem with an $\ell^1$ penalty. The algorithm is designed to require a modest number of passes over the data, an important measure of cost for very large datasets. It applies smoothing to the hinge-loss function and an active-set approach to the $\ell^1$ penalty. The smoothing parameter $\alpha$ is initially large and is typically halved once the smoothed problem has been solved to sufficient accuracy. Convergence theory is presented showing $\mathcal{O}(1+\log(1+\log_+(1/\alpha)))$ guarded Newton steps for each value of $\alpha$, except in the asymptotic bands $\alpha=\Theta(1)$ and $\alpha=\Theta(1/N)$, with only one Newton step needed provided $\eta\alpha\gg 1/N$, where $N$ is the number of data points and the stopping criterion is that the predicted reduction is less than $\eta\alpha$. Experimental results show that the algorithm achieves strong test accuracy without sacrificing training speed.

    Comment: arXiv admin note: text overlap with arXiv:1808.0710
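    The general idea, smoothing the hinge loss with a parameter $\alpha$ that is repeatedly halved while an $\ell^1$ penalty keeps the weights sparse, can be sketched as follows. This is only an illustrative reconstruction, not the paper's method: it uses a Huber-style smoothing of the hinge and plain proximal-gradient (soft-thresholding) inner iterations in place of the paper's active-set guarded Newton steps, and all function names and parameter values are this sketch's own choices.

    ```python
    import numpy as np

    def smoothed_hinge_grad(t, alpha):
        # Derivative of a Huber-style smoothing of max(0, t):
        # 0 for t <= 0, t/alpha on (0, alpha), 1 beyond (illustrative choice;
        # the paper's exact smoothing may differ).
        return np.where(t <= 0, 0.0, np.where(t < alpha, t / alpha, 1.0))

    def soft_threshold(w, tau):
        # Proximal operator of tau * ||w||_1, handling the l1 penalty.
        return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

    def l1_svm(X, y, lam=0.01, alpha0=1.0, n_halvings=8, inner_iters=200):
        # Minimize mean smoothed hinge loss + lam * ||w||_1 by continuation:
        # solve each smoothed problem approximately, then halve alpha.
        N, d = X.shape
        w = np.zeros(d)
        alpha = alpha0
        L_X = np.linalg.norm(X, 2) ** 2 / N  # data-dependent curvature bound
        for _ in range(n_halvings):
            step = alpha / L_X  # safe step: gradient is (L_X / alpha)-Lipschitz
            for _ in range(inner_iters):
                t = 1.0 - y * (X @ w)  # hinge arguments, one per data point
                g = -(X * (y * smoothed_hinge_grad(t, alpha))[:, None]).mean(axis=0)
                w = soft_threshold(w - step * g, step * lam)
            alpha *= 0.5  # continuation: halve the smoothing parameter
        return w
    ```

    On linearly separable toy data, e.g. `y = sign(x_0 + 0.5 * x_1)`, the returned `w` recovers an accurate linear classifier; the halving schedule mirrors the abstract's continuation strategy, though the per-stage work here is first-order rather than the Newton iterations the convergence theory analyzes.
    
    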