Feature Selection through Minimization of the VC dimension
Feature selection involves identifying the most relevant subset of input
features, with a view to improving generalization of predictive models by
reducing overfitting. Directly searching for the most relevant combination of
attributes is NP-hard. Variable selection is of critical importance in many
applications, such as micro-array data analysis, where selecting a small number
of discriminative features is crucial to developing useful models of disease
mechanisms, as well as for prioritizing targets for drug discovery. The
recently proposed Minimal Complexity Machine (MCM) provides a way to learn a
hyperplane classifier by minimizing an exact (Θ) bound on its
VC dimension. It is well known that a lower VC dimension contributes to good
generalization. For a linear hyperplane classifier in the input space, the VC
dimension is upper bounded by the number of features; hence, a linear
classifier with a small VC dimension is parsimonious in the set of features it
employs. In this paper, we use the linear MCM to learn a classifier in which a
large number of weights are zero; features with non-zero weights are the ones
that are chosen. Selected features are used to learn a kernel SVM classifier.
On a number of benchmark datasets, the features chosen by the linear MCM yield
test set accuracies comparable to or better than those obtained with methods
such as ReliefF and FCBF. The linear MCM typically chooses one-tenth the
number of attributes chosen by the other methods; on some very high-dimensional
datasets, ReliefF and FCBF choose 70 to 140 times more features than the MCM
does, thus demonstrating that
minimizing the VC dimension may provide a new and very effective route for
feature selection and for learning sparse representations.
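
To make the pipeline concrete, the following is a minimal sketch (not the
authors' code) of MCM-based feature selection. It assumes the soft-margin
linear MCM posed as the linear program min h + C*sum(q_i) subject to
1 <= y_i(w.x_i + b) + q_i <= h, q_i >= 0, with labels y_i in {-1, +1};
features whose learned weights exceed a small tolerance are kept and passed
to a kernel SVM. The value of C and the tolerance are illustrative choices,
not values from the paper.

import numpy as np
from scipy.optimize import linprog
from sklearn.svm import SVC

def mcm_select_features(X, y, C=1.0, tol=1e-6):
    """Solve a soft-margin linear MCM as an LP; return indices of features
    with non-zero weights. Labels y must be +/-1."""
    n, d = X.shape
    # Decision vector z = [w (d), b, h, q (n)].
    c = np.zeros(d + 2 + n)
    c[d + 1] = 1.0              # minimize h ...
    c[d + 2:] = C               # ... plus C * sum(q_i)
    Yx = y[:, None] * X         # rows y_i * x_i
    # y_i(w.x_i + b) + q_i - h <= 0   (upper constraint)
    A1 = np.hstack([Yx, y[:, None], -np.ones((n, 1)), np.eye(n)])
    # -(y_i(w.x_i + b) + q_i) <= -1   (lower constraint)
    A2 = np.hstack([-Yx, -y[:, None], np.zeros((n, 1)), -np.eye(n)])
    bounds = ([(None, None)] * (d + 1)   # w, b free
              + [(1.0, None)]            # h >= 1
              + [(0.0, None)] * n)       # slacks q_i >= 0
    res = linprog(c, A_ub=np.vstack([A1, A2]),
                  b_ub=np.concatenate([np.zeros(n), -np.ones(n)]),
                  bounds=bounds, method="highs")
    return np.flatnonzero(np.abs(res.x[:d]) > tol)

# Usage: keep only the MCM-selected columns, then train a kernel SVM on them.
# selected = mcm_select_features(X_train, y_train)
# clf = SVC(kernel="rbf").fit(X_train[:, selected], y_train)
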
A Neurodynamical System for finding a Minimal VC Dimension Classifier
The recently proposed Minimal Complexity Machine (MCM) finds a hyperplane
classifier by minimizing an exact bound on the Vapnik-Chervonenkis (VC)
dimension. The VC dimension measures the capacity of a learning machine, and a
smaller VC dimension leads to improved generalization. On many benchmark
datasets, the MCM generalizes better than SVMs while using far fewer support
vectors. In this paper, we describe a neural network, based on a linear
dynamical system, that converges to the MCM solution.
The proposed MCM dynamical system lends itself to analogue circuit
implementation on a chip or to simulation using Ordinary Differential Equation
(ODE) solvers. Numerical experiments on benchmark datasets from the UCI
repository show that the proposed approach is scalable and accurate, as we
obtain improved accuracies and fewer support vectors (up to a 74.3%
reduction) with the MCM dynamical system.
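
As an illustration of the ODE-solver route, the following is a minimal
sketch, not the dynamical system derived in the paper: a gradient flow
dz/dt = -grad E(z) on a quadratic-penalty version of the linear MCM problem
(minimize h subject to 1 <= y_i(w.x_i + b) <= h), integrated with an
off-the-shelf solver. The penalty weight mu, the horizon t_final, and the
RK45 method are assumptions made for the sketch.

import numpy as np
from scipy.integrate import solve_ivp

def mcm_gradient_flow(X, y, mu=10.0, t_final=50.0):
    """Integrate dz/dt = -grad E(z) for the penalized MCM objective
    E = h + (mu/2) * sum(max(0, f_i - h)^2 + max(0, 1 - f_i)^2),
    where f_i = y_i (w.x_i + b). Labels y must be +/-1."""
    n, d = X.shape

    def rhs(t, z):
        w, b, h = z[:d], z[d], z[d + 1]
        f = y * (X @ w + b)              # signed margins y_i(w.x_i + b)
        a = np.maximum(0.0, f - h)       # violation of f_i <= h
        v = np.maximum(0.0, 1.0 - f)     # violation of f_i >= 1
        gw = mu * (X.T @ (y * (a - v)))  # dE/dw
        gb = mu * np.sum(y * (a - v))    # dE/db
        gh = 1.0 - mu * np.sum(a)        # dE/dh
        return -np.concatenate([gw, [gb, gh]])

    z0 = np.zeros(d + 2)
    z0[d + 1] = 1.0                      # start with h = 1
    sol = solve_ivp(rhs, (0.0, t_final), z0, method="RK45")
    w, b, h = sol.y[:d, -1], sol.y[d, -1], sol.y[d + 1, -1]
    return w, b, h
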