
    Support vector machine classifier via L0/1 soft-margin loss

    Support vector machines (SVMs) have drawn wide attention over the last two decades due to their extensive applications, and a vast body of work has developed optimization algorithms to solve SVMs with various soft-margin losses. In this paper, we aim at solving an ideal soft-margin loss SVM: the L0/1 soft-margin loss SVM (dubbed the L0/1-SVM). Many of the existing (non)convex soft-margin losses can be viewed as surrogates of the L0/1 soft-margin loss. Despite its discrete nature, we manage to establish an optimality theory for the L0/1-SVM, including the existence of optimal solutions and their relationship to P-stationary points. These results not only enable us to deliver a rigorous definition of L0/1 support vectors but also allow us to define a working set. Integrating this working set, a fast alternating direction method of multipliers (ADMM) is then proposed whose limit point is a locally optimal solution to the L0/1-SVM. Finally, numerical experiments demonstrate that the proposed method outperforms some leading classification solvers from the SVM community in terms of computational speed and number of support vectors; the larger the data size, the more evident its advantage.
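    The L0/1 soft-margin loss counts margin violations directly, while the common soft-margin losses act as its surrogates. Below is a minimal sketch of that relationship in Python; the function names and the margin-violation variable z = 1 - y*f(x) are illustrative assumptions for this listing, not notation taken from the paper.

    import numpy as np

    def l01_loss(z):
        # L0/1 soft-margin loss: 1 whenever the margin is violated (z > 0),
        # else 0, where z = 1 - y * f(x) for label y in {-1, +1}.
        return (z > 0).astype(float)

    def hinge_loss(z):
        # Convex surrogate used by the classical soft-margin SVM.
        return np.maximum(0.0, z)

    def squared_hinge_loss(z):
        # Smooth convex surrogate, another common stand-in for the L0/1 loss.
        return np.maximum(0.0, z) ** 2

    z = np.linspace(-2.0, 2.0, 9)
    for name, loss in [("L0/1", l01_loss),
                       ("hinge", hinge_loss),
                       ("squared hinge", squared_hinge_loss)]:
        print(f"{name:>13s}:", np.round(loss(z), 2))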

    Fixed-size Pegasos for Hinge and Pinball Loss SVM

    Pegasos has become a widely acknowledged algorithm for learning linear Support Vector Machines. It exploits properties of the hinge loss and the theory of strongly convex optimization problems to achieve fast convergence rates and low computational and memory costs. In this paper we adopt the recently proposed pinball loss for the Pegasos algorithm and show some advantages of using it in a variety of classification problems. First, we present the newly derived Pegasos optimization objective with respect to the pinball loss and analyze its properties and convergence rates. Additionally, we present extensions of the Pegasos algorithm to kernel-induced and Nyström-approximated feature maps, which introduce non-linearity in the input space; this is done using a fixed-size kernel method approach. Second, we give experimental results on publicly available UCI datasets to justify the advantages and importance of the pinball loss for achieving better classification accuracy and greater numerical stability in the partially or fully stochastic setting. Finally, we conclude with a brief discussion of the applicability of the pinball loss to real-life problems. © 2013 IEEE.
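    As a rough illustration of the stochastic update the abstract builds on, here is a minimal Pegasos-style sketch with the pinball loss. It is a sketch under stated assumptions, not the authors' implementation: the step size 1/(lam*t), the shrinkage factor, and the projection step follow the standard Pegasos scheme, and all variable names are ours.

    import numpy as np

    def pegasos_pinball(X, y, lam=0.01, tau=0.5, n_iter=10000, seed=0):
        # Stochastic Pegasos-style training with the pinball loss
        # L_tau(u) = u if u >= 0 else -tau * u, where u = 1 - y * (w @ x).
        # X: (n, d) feature matrix; y: (n,) labels in {-1, +1}.
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for t in range(1, n_iter + 1):
            i = rng.integers(n)
            eta = 1.0 / (lam * t)            # standard Pegasos step size
            u = 1.0 - y[i] * (X[i] @ w)      # margin violation of sample i
            w *= 1.0 - 1.0 / t               # shrinkage from the L2 regularizer
            if u > 0.0:                      # hinge-like side of the pinball loss
                w += eta * y[i] * X[i]
            else:                            # pinball also penalizes large margins
                w -= eta * tau * y[i] * X[i]
            # optional projection onto the ball of radius 1/sqrt(lam),
            # as in the original Pegasos analysis
            norm = np.linalg.norm(w)
            radius = 1.0 / np.sqrt(lam)
            if norm > radius:
                w *= radius / norm
        return w

    With tau = 0 the else-branch vanishes and the sketch reduces to the standard hinge-loss Pegasos update; a positive tau also penalizes points classified well beyond the margin, which is the mechanism behind the stability the abstract attributes to the pinball loss in the stochastic setting.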