Data Selection for Support Vector Machine Classifiers

Abstract

The problem of extracting a minimal number of data points from a large dataset, in order to generate a support vector machine (SVM) classifier, is formulated as a concave minimization problem and solved by a finite number of linear programs. This minimal set of data points, which is the smallest number of support vectors that completely characterize a separating plane classifier, is considerably smaller than that required by a standard 1-norm support vector machine with or without feature selection. The proposed approach also incorporates a feature selection procedure that results in a minimal number of input features used by the classifier. Tenfold cross validation gives as good or better test results using the proposed minimal support vector machine (MSVM) classifier based on the smaller set of data points compared to a standard 1-norm support vector machine classifier. The reduction in data points used by an MSVM classifier over those used by a 1-norm SVM classifier averaged 66% on seven public datasets and was as high as 81%. This makes MSVM a useful incremental classification tool which maintains only a small fraction of a large dataset before merging and processing it with new incoming data.
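The baseline that MSVM is measured against is the standard 1-norm SVM, which trains a separating plane by solving a single linear program. The following sketch shows that linear program solved with `scipy.optimize.linprog`; it is illustrative only and omits MSVM's concave data-suppression term and the successive linearization that solves it. The parameter `nu` and the toy dataset are assumptions for demonstration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog


def svm_1norm(A, d, nu=1.0):
    """Train a linear 1-norm SVM classifier by solving one linear program.

    minimize   ||w||_1 + nu * sum(y)
    subject to d_i * (a_i . w - gamma) + y_i >= 1,  y >= 0

    The 1-norm of w is linearized via the split w = wp - wm with wp, wm >= 0.
    A is the m x n data matrix, d the vector of +/-1 class labels.
    """
    m, n = A.shape
    DA = d[:, None] * A  # each row of A scaled by its class label
    # Decision vector layout: [wp (n), wm (n), gamma (1), y (m)]
    c = np.concatenate([np.ones(2 * n), [0.0], nu * np.ones(m)])
    # Feasibility constraint rewritten in A_ub @ x <= b_ub form:
    #   -d_i (a_i . w) + d_i * gamma - y_i <= -1
    A_ub = np.hstack([-DA, DA, d[:, None], -np.eye(m)])
    b_ub = -np.ones(m)
    bounds = [(0, None)] * (2 * n) + [(None, None)] + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    x = res.x
    w = x[:n] - x[n:2 * n]
    gamma = x[2 * n]
    return w, gamma


# Toy separable data (illustrative): two classes along the diagonal.
A = np.array([[2.0, 2.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -2.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])
w, gamma = svm_1norm(A, d, nu=1.0)
pred = np.sign(A @ w - gamma)  # classify with the separating plane
```

MSVM would add to this objective a concave penalty suppressing the multipliers of individual data points, and minimize it by repeatedly linearizing the penalty and re-solving a linear program of this form until the iterates converge.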

This paper was published in Minds@University of Wisconsin.