
Parallel Support Vector Machines

Abstract

The Support Vector Machine (SVM) is a supervised algorithm for solving classification and regression problems. SVMs have gained widespread use in recent years because of successful applications such as character recognition and because of their profound theoretical underpinnings concerning generalization performance. Yet one of the remaining drawbacks of the SVM algorithm is its high computational demand during the training and testing phases. This article describes how to efficiently parallelize SVM training in order to cut down execution times. The parallelization technique employed is based on a decomposition approach, where the inner quadratic program (QP) is solved using Sequential Minimal Optimization (SMO). Thus all types of SVM formulations can be solved in parallel, including C-SVC and nu-SVC for classification as well as epsilon-SVR and nu-SVR for regression. Practical results show that linear or even superlinear speedups can be attained on most problems.
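To illustrate the idea of the decomposition approach described above, the following is a minimal sketch, not the authors' implementation: an outer loop selects a small working set, the kernel rows for that set (the dominant cost) are computed in parallel across worker processes, and an SMO-like coordinate update is applied to the working set. All names (rbf_kernel_row, train_svm_parallel) and the simplifications (C-SVC dual without the bias/equality constraint) are assumptions for illustration only.

```python
import numpy as np
from multiprocessing import Pool

def rbf_kernel_row(args):
    """Compute one row of the kernel matrix K(x_i, X) with an RBF kernel."""
    x_i, X, gamma = args
    d = X - x_i
    return np.exp(-gamma * np.einsum('ij,ij->i', d, d))

def train_svm_parallel(X, y, C=1.0, gamma=0.5, iters=50, q=20, workers=4):
    """Simplified decomposition-style C-SVC training (no bias term),
    with kernel rows for each working set evaluated in parallel."""
    n = len(y)
    alpha = np.zeros(n)
    grad = -np.ones(n)  # gradient of the dual objective 0.5*a'Qa - e'a at a = 0
    with Pool(workers) as pool:
        for _ in range(iters):
            # Working-set selection: pick the q coordinates with largest |gradient|.
            ws = np.argsort(np.abs(grad))[-q:]
            # Kernel rows for the working set, computed in parallel.
            rows = pool.map(rbf_kernel_row, [(X[i], X, gamma) for i in ws])
            K_ws = np.vstack(rows)
            # Inner solver: clipped Newton (SMO-like) updates on the working set.
            for k, i in enumerate(ws):
                new_a = np.clip(alpha[i] - grad[i] / K_ws[k, i], 0.0, C)
                delta = new_a - alpha[i]
                if delta != 0.0:
                    grad += delta * y[i] * y * K_ws[k]  # keep gradient consistent
                    alpha[i] = new_a
    return alpha
```

Calling train_svm_parallel on a labeled data set (y in {-1, +1}) returns the dual coefficients; the point of the sketch is that the per-iteration kernel evaluations, which dominate the run time, parallelize naturally, which is consistent with the linear to superlinear speedups reported in the article.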
