Fast SVM training using approximate extreme points
Applications of non-linear kernel Support Vector Machines (SVMs) to large
datasets are seriously hampered by their excessive training time. We propose a
modification, called the approximate extreme points support vector machine
(AESVM), that is aimed at overcoming this burden. Our approach relies on
conducting the SVM optimization over a carefully selected subset, called the
representative set, of the training dataset. We present analytical results that
indicate the similarity of AESVM and SVM solutions. A linear time algorithm
based on convex hulls and extreme points is used to compute the representative
set in kernel space. Extensive computational experiments on nine datasets
compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05}, BVM
\citep{Tsang07}, LASVM \citep{Bordes05},
\citep{Joachims09}, and the random features method \citep{rahimi07}. Our AESVM
implementation was found to train much faster than the other methods, while its
classification accuracy was similar to that of LIBSVM in all cases. In
particular, for a seizure detection dataset, AESVM training was almost
times faster than LIBSVM and LASVM and more than forty times faster than CVM
and BVM. Additionally, AESVM also gave competitively fast classification times.

Comment: The manuscript in revised form has been submitted to J. Machine
Learning Research.
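The representative set in AESVM is built from approximate extreme points in kernel space. As a toy illustration of the underlying geometric notion only (not the AESVM algorithm itself, which the paper defines for kernel space), the exact extreme points of a 2-D point set are the vertices of its convex hull, which Andrew's monotone chain computes after an O(n log n) sort:

```python
# Toy illustration: extreme points of a 2-D set via Andrew's monotone chain.
# AESVM itself uses *approximate* extreme points in kernel space; this
# sketch only shows the plain geometric notion of an extreme point.

def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def extreme_points(points):
    """Return the convex-hull vertices (extreme points) in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Interior points are discarded: only the four corners are extreme.
data = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (0.5, 1.5)]
print(sorted(extreme_points(data)))  # [(0, 0), (0, 2), (2, 0), (2, 2)]
```

In the same spirit, AESVM then runs the SVM optimization only over the selected representatives rather than the full training set.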
Similarity Learning for High-Dimensional Sparse Data
A good measure of similarity between data points is crucial to many tasks in
machine learning. Similarity and metric learning methods learn such measures
automatically from data, but they do not scale well with respect to the
dimensionality of the data. In this paper, we propose a method that can
learn a similarity measure efficiently from high-dimensional sparse data. The core idea
is to parameterize the similarity measure as a convex combination of rank-one
matrices with specific sparsity structures. The parameters are then optimized
with an approximate Frank-Wolfe procedure to maximally satisfy relative
similarity constraints on the training data. Our algorithm greedily
incorporates one pair of features at a time into the similarity measure,
providing an efficient way to control the number of active features and thus
reduce overfitting. It enjoys very appealing convergence guarantees and its
time and memory complexity depends on the sparsity of the data instead of the
dimension of the feature space. Our experiments on real-world high-dimensional
datasets demonstrate its potential for classification, dimensionality reduction
and data exploration.

Comment: 14 pages. Proceedings of the 18th International Conference on
Artificial Intelligence and Statistics (AISTATS 2015). Matlab code:
https://github.com/bellet/HDS
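The learner above optimizes a convex combination of rank-one sparse bases with an approximate Frank-Wolfe procedure. The paper's actual objective and basis set are not reproduced here; as a generic, hedged illustration of the Frank-Wolfe update only, the sketch below minimizes a simple quadratic over the probability simplex, where every iterate stays a convex combination of the vertices (atoms) selected so far:

```python
# Generic Frank-Wolfe sketch (NOT the paper's objective or basis set):
# minimize f(x) = ||x - t||^2 over the probability simplex.
# Each step solves a linear subproblem (pick the best vertex, i.e. the
# coordinate with the smallest gradient) and takes a convex-combination
# step, mirroring how the paper greedily adds one rank-one basis at a time.

def frank_wolfe(target, iters=2000):
    d = len(target)
    x = [0.0] * d
    x[0] = 1.0                       # start at a simplex vertex
    for k in range(iters):
        grad = [2.0 * (x[i] - target[i]) for i in range(d)]
        i_best = min(range(d), key=lambda i: grad[i])  # linear minimizer: vertex e_i
        gamma = 2.0 / (k + 2.0)      # classic Frank-Wolfe step size
        x = [(1.0 - gamma) * xi for xi in x]
        x[i_best] += gamma           # convex combination with the new atom
    return x

# target already lies in the simplex, so the iterates converge toward it
t = [0.2, 0.3, 0.5]
x = frank_wolfe(t)
print(max(abs(xi - ti) for xi, ti in zip(x, t)))  # small, e.g. < 0.1
```

Because each update is a convex combination, the iterate remains in the feasible set at all times, and the number of active atoms grows by at most one per iteration, which is what gives the paper's method its control over the number of active feature pairs.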