58 research outputs found

    Nonlinear support vector machines through iterative majorization and I-splines

    To minimize the primal support vector machine (SVM) problem, we propose to use iterative majorization. To allow for nonlinearity of the predictors, we use (non)monotone spline transformations. An advantage over the usual kernel approach in the dual problem is that the variables can be easily interpreted. We illustrate this with an example from the literature.
    Keywords: iterative majorization; support vector machines; I-splines
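    As a rough illustration of the I-spline idea: an I-spline is the normalized integral of an M-spline (itself a rescaled B-spline), so each basis function rises monotonically from 0 to 1. Below is a minimal Python sketch built on SciPy; the function name, knot handling, and degree are illustrative assumptions, not the authors' code.

        import numpy as np
        from scipy.interpolate import BSpline

        def ispline_basis(x, knots, degree=2):
            # Clamp the knot vector so the basis covers [knots[0], knots[-1]].
            t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
            n_basis = len(t) - degree - 1
            out = np.empty((len(x), n_basis))
            for i in range(n_basis):
                coefs = np.zeros(n_basis)
                coefs[i] = 1.0
                # Integrate the i-th B-spline; dividing by its total mass
                # turns it into an I-spline that runs from 0 up to 1.
                integral = BSpline(t, coefs, degree).antiderivative()
                out[:, i] = integral(x) / integral(t[-1])
            return out

    Because every column is monotone non-decreasing, restricting its coefficient to be non-negative yields a monotone transformation of the predictor, which is what keeps the fitted variables easy to interpret.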

    SVM-Maj: a majorization approach to linear support vector machines with different hinge errors

    Support vector machines (SVMs) are becoming increasingly popular for the prediction of a binary dependent variable, and they perform very well relative to competing techniques. Often, the solution of an SVM is obtained by switching to the dual. In this paper, we stick to the primal SVM problem, study its effective aspects, and propose varieties of convex loss functions: the standard absolute hinge error of the SVM as well as the quadratic hinge and Huber hinge errors. We present an iterative majorization algorithm that minimizes each of these adaptations. In addition, we show that many of the features of an SVM are also obtained by an optimal scaling approach to regression. We illustrate this with an example from the literature and compare the different methods on several empirical data sets.
    Keywords: iterative majorization; I-splines; absolute hinge error; Huber hinge error; optimal scaling; quadratic hinge error; support vector machines
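    For concreteness, here is a small Python sketch of the three hinge errors named above, written in terms of the margin q = y(w'x + b); the Huber hinge uses one common smoothing parametrization, which may differ in detail from the paper's.

        import numpy as np

        def absolute_hinge(q):
            # Standard SVM hinge: linear penalty once q falls below 1.
            return np.maximum(0.0, 1.0 - q)

        def quadratic_hinge(q):
            # Squared hinge: smooth, grows quadratically with the violation.
            return np.maximum(0.0, 1.0 - q) ** 2

        def huber_hinge(q, delta=1.0):
            # Quadratic near the margin, linear for badly misclassified
            # points; the two pieces join smoothly at q = 1 - delta.
            q = np.asarray(q, dtype=float)
            out = np.zeros_like(q)
            mid = (q > 1.0 - delta) & (q <= 1.0)
            low = q <= 1.0 - delta
            out[mid] = (1.0 - q[mid]) ** 2 / (2.0 * delta)
            out[low] = 1.0 - q[low] - delta / 2.0
            return out

    Each of these losses is convex and can be bounded above by a quadratic that touches it at the current estimate, which is what lets iterative majorization reduce every update to a weighted least-squares problem.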

    The MM Alternative to EM

    The EM algorithm is a special case of a more general algorithm called the MM algorithm. Specific MM algorithms often have nothing to do with missing data. The first M step of an MM algorithm creates a surrogate function that is optimized in the second M step. In minimization, MM stands for majorize-minimize; in maximization, it stands for minorize-maximize. This two-step process always drives the objective function in the right direction. Construction of MM algorithms relies on recognizing and manipulating inequalities rather than calculating conditional expectations. This survey walks the reader through the construction of several specific MM algorithms. The potential of the MM algorithm in solving high-dimensional optimization and estimation problems is its most attractive feature. Our applications to random graph models, discriminant analysis, and image restoration showcase this ability.
    Comment: Published at http://dx.doi.org/10.1214/08-STS264 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
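    A tiny worked instance of the recipe, assuming nothing beyond NumPy: to minimize f(x) = sum_i |x - y_i| (a median problem), majorize each |x - y_i| at the current point x0 by the quadratic (|x0 - y_i| + (x - y_i)^2 / |x0 - y_i|) / 2, which equals the term at x0 and lies above it elsewhere; the surrogate is then minimized by a weighted mean.

        import numpy as np

        def mm_median(y, n_iter=100, eps=1e-9):
            # Majorize-minimize for sum_i |x - y_i|: each sweep minimizes
            # the quadratic surrogate, so the objective can never increase.
            x = float(np.mean(y))
            for _ in range(n_iter):
                w = 1.0 / np.maximum(np.abs(x - y), eps)  # surrogate weights
                x = float(np.sum(w * y) / np.sum(w))      # surrogate minimizer
            return x

        print(mm_median(np.array([1.0, 2.0, 4.0, 9.0, 16.0])))  # converges toward 4.0, the median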

    GenSVM: a generalized multiclass support vector machine

    Traditional extensions of the binary support vector machine (SVM) to multiclass problems are either heuristics or require solving a large dual optimization problem. Here, a generalized multiclass SVM, called GenSVM, is proposed. In this method, classification boundaries for a K-class problem are constructed in a (K - 1)-dimensional space using a simplex encoding. Additionally, several different weightings of the misclassification errors are incorporated in the loss function, such that it generalizes three existing multiclass SVMs through a single optimization problem. An iterative majorization algorithm is derived that solves the optimization problem without the need for a dual formulation. This algorithm has the advantage that it can use warm starts during cross-validation and during a grid search, which significantly speeds up the training phase. Rigorous numerical experiments compare linear GenSVM with seven existing multiclass SVMs on both small and large data sets. These comparisons show that the proposed method is competitive with existing methods in both predictive accuracy and training time, and that it significantly outperforms several existing methods on these criteria.
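    A sketch of the simplex-encoding idea in Python: the paper specifies a particular closed-form encoding matrix, so the construction below is only an illustrative stand-in that builds some regular simplex with unit edge lengths, enough to see why K classes fit in a (K - 1)-dimensional space.

        import numpy as np

        def simplex_encoding(K):
            # K centered unit vectors in R^K span a (K - 1)-dimensional
            # subspace; project onto it to get the K simplex vertices.
            E = np.eye(K) - np.full((K, K), 1.0 / K)
            _, _, Vt = np.linalg.svd(E)
            V = E @ Vt[:K - 1].T
            return V / np.sqrt(2.0)  # rescale: every pairwise distance is 1

        U = simplex_encoding(4)
        d = np.linalg.norm(U[:, None, :] - U[None, :, :], axis=-1)
        print(np.round(d, 6))  # off-diagonal entries are all 1.0

    Each class is assigned one vertex, and an object is classified to the class of its nearest vertex in the low-dimensional space, so all K boundaries are expressed through a single (K - 1)-dimensional projection.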

    Algorithms for Multiclass Classification and Regularized Regression

