Training Support Vector Machines Using Frank-Wolfe Optimization Methods
Training a Support Vector Machine (SVM) requires the solution of a quadratic
programming problem (QP) whose computational complexity becomes prohibitively
expensive for large scale datasets. Traditional optimization methods cannot be
directly applied in these cases, mainly due to memory restrictions.
By adopting a slightly different objective function and under mild conditions
on the kernel used within the model, efficient algorithms to train SVMs have
been devised under the name of Core Vector Machines (CVMs). This framework
exploits the equivalence between the resulting learning problem and the task
of building a Minimal Enclosing Ball (MEB) in a feature space, where the data
are implicitly embedded by a kernel function.
In this paper, we improve on the CVM approach by proposing two novel methods
to build SVMs based on the Frank-Wolfe algorithm, recently revisited as a fast
method to approximate the solution of a MEB problem. In contrast to CVMs, our
algorithms do not require solving a sequence of increasingly complex QPs and
are defined using only analytic optimization steps. Experiments on a large
collection of datasets show that our methods scale better than CVMs in most
cases, sometimes at the price of a slightly lower accuracy. Like CVMs, the
proposed methods can be easily extended to machine learning problems other
than binary classification. Unlike CVMs, however, they also yield effective
classifiers with kernels that do not satisfy the CVM condition, and can thus
be applied to a wider set of problems.
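To make the "analytic optimization steps" concrete, here is a minimal sketch of the classic Frank-Wolfe iteration applied to the MEB dual over the unit simplex, assuming an RBF kernel; the function names and the fixed 2/(k+2) step size are illustrative choices, not the paper's exact procedure.

```python
# Hedged sketch: Frank-Wolfe on the MEB dual, maximizing
# f(a) = a'diag(K) - a'Ka over the unit simplex. Illustrative only.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def fw_meb(K, n_iters=200):
    n = K.shape[0]
    a = np.full(n, 1.0 / n)           # start at the simplex barycenter
    d = np.diag(K)
    for k in range(n_iters):
        grad = d - 2.0 * (K @ a)      # gradient of the concave objective
        i_star = int(np.argmax(grad)) # linear maximizer = a simplex vertex
        step = 2.0 / (k + 2.0)        # standard FW step size
        a *= (1.0 - step)
        a[i_star] += step             # analytic update, no QP subproblem
    radius2 = float(a @ d - a @ K @ a)
    return a, radius2

X = np.random.default_rng(0).normal(size=(100, 5))
a, r2 = fw_meb(rbf_kernel(X))
print(f"squared MEB radius in feature space: {r2:.4f}")
```

Each iteration needs only one gradient evaluation and one coordinate update, which is what lets such methods avoid the sequence of QP subproblems solved by CVMs.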
A Novel Frank-Wolfe Algorithm. Analysis and Applications to Large-Scale SVM Training
Recently, there has been a renewed interest in the machine learning community
for variants of a sparse greedy approximation procedure for concave
optimization known as the Frank-Wolfe (FW) method. In particular, this
procedure has been successfully applied to train large-scale instances of
non-linear Support Vector Machines (SVMs). Specializing FW to SVM training
has yielded not only efficient algorithms but also important theoretical
results, including convergence analyses of training algorithms and new
characterizations of model sparsity.
In this paper, we present and analyze a novel variant of the FW method based
on a new way to perform away steps, a classic strategy used to accelerate the
convergence of the basic FW procedure. Our formulation and analysis are
focused on a general concave maximization problem over the simplex. However,
the
specialization of our algorithm to quadratic forms is strongly related to some
classic methods in computational geometry, namely the Gilbert and MDM
algorithms.
On the theoretical side, we demonstrate that the method matches the
guarantees in terms of convergence rate and number of iterations obtained by
using classic away steps. In particular, the method enjoys a linear rate of
convergence, a result that has been recently proved for MDM on quadratic forms.
On the practical side, we provide experiments on several classification
datasets, and evaluate the results using statistical tests. Experiments show
that our method is faster than the FW method with classic away steps, and works
well even in the cases in which classic away steps slow down the algorithm.
Furthermore, these improvements are obtained without sacrificing the predictive
accuracy of the obtained SVM model.
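For contrast with the paper's novel variant, the following is a hedged sketch of FW with classic away steps on the same simplex-constrained quadratic dual; the names are illustrative, and the exact line search assumes the quadratic form f(a) = a'b - a'Ka.

```python
# Hedged sketch: Frank-Wolfe with classic away steps (not the paper's
# new away-step rule) for maximizing f(a) = a'b - a'Ka over the simplex.
import numpy as np

def fw_away(K, b, n_iters=500, tol=1e-8):
    n = K.shape[0]
    a = np.zeros(n)
    a[0] = 1.0                                   # start at a vertex
    for _ in range(n_iters):
        grad = b - 2.0 * (K @ a)
        d_fw = -a.copy()
        d_fw[int(np.argmax(grad))] += 1.0        # FW direction e_s - a
        if grad @ d_fw <= tol:                   # FW gap bounds suboptimality
            break
        active = np.flatnonzero(a > 0)
        v = active[int(np.argmin(grad[active]))] # away vertex
        d_aw = a.copy()
        d_aw[v] -= 1.0                           # away direction a - e_v
        if grad @ d_fw >= grad @ d_aw:
            d, g_max = d_fw, 1.0                 # ordinary FW step
        else:
            d, g_max = d_aw, a[v] / (1.0 - a[v]) # away step, keeps a >= 0
        curv = d @ K @ d                         # exact quadratic line search
        step = g_max if curv <= 0 else min(g_max, (grad @ d) / (2.0 * curv))
        a += step * d
        np.clip(a, 0.0, None, out=a)             # guard tiny round-off
    return a

K = np.eye(5) * 2.0 + 1.0                        # toy PSD kernel matrix
a = fw_away(K, b=np.diag(K).copy())
```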
Stability Yields Sublinear Time Algorithms for Geometric Optimization in Machine Learning
In this paper, we study several important geometric optimization problems arising in machine learning. First, we revisit the Minimum Enclosing Ball (MEB) problem in Euclidean space ℝ^d. The problem has been extensively studied before, but real-world machine learning tasks often need to handle such large-scale datasets that we cannot even afford linear time algorithms. Motivated by recent developments in beyond-worst-case analysis, we introduce a notion of stability for MEB which is natural and easy to understand: roughly speaking, an instance of MEB is stable if the radius of the resulting ball cannot be significantly reduced by removing a small fraction of the input points.
Under the stability assumption, we present two sampling algorithms for computing a radius-approximate MEB with sample complexities independent of the number of input points n; the second algorithm has a sample complexity independent even of the dimensionality d. We also consider the general case without the stability assumption, and present a hybrid algorithm that outputs either a radius-approximate MEB or a covering-approximate MEB, improving the running time and the number of passes of previous sublinear MEB algorithms.
Further, we extend the proposed notion of stability and design sublinear time algorithms for other geometric optimization problems, including MEB with outliers, polytope distance, and one-class and two-class linear SVMs (with or without outliers). Our proposed algorithms also work well with kernels.
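As background for the radius-approximation guarantee, here is a minimal sketch of the classic Badoiu-Clarkson core-set scheme for a (1+eps)-approximate MEB; it is the standard linear-time baseline that such sublinear algorithms improve upon, not the paper's sampling method.

```python
# Hedged sketch: Badoiu-Clarkson core-set scheme for a (1+eps)-approximate
# MEB in R^d. About 1/eps^2 iterations suffice; names are illustrative.
import numpy as np

def approx_meb(P, eps=0.1):
    c = P[0].copy()                                        # arbitrary start
    for i in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        far = P[np.argmax(np.linalg.norm(P - c, axis=1))]  # farthest point
        c += (far - c) / (i + 1)                           # shift center toward it
    r = np.linalg.norm(P - c, axis=1).max()
    return c, r

P = np.random.default_rng(1).normal(size=(10_000, 3))
c, r = approx_meb(P, eps=0.05)
print(f"approximate radius: {r:.3f}")
```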
Near-Optimal Evasion of Convex-Inducing Classifiers
Classifiers are often used to detect miscreant activities. We study how an
adversary can efficiently query a classifier to elicit information that allows
the adversary to evade detection at near-minimal cost. We generalize results of
Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that
construct undetected instances of near-minimal cost using only polynomially
many queries in the dimension of the space and without reverse engineering the
decision boundary.
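As a rough illustration of the query model (not the paper's algorithm), the sketch below shows the basic primitive such attacks build on: a binary search along a line segment that locates a negative-classified point near the decision boundary using only membership queries. The toy classifier and all names here are our assumptions.

```python
# Hedged sketch: membership-query binary search toward a target instance,
# the basic primitive behind Lowd-Meek-style evasion. Illustrative only.
import numpy as np

def boundary_search(classify, x_target, x_neg, tol=1e-6, max_queries=60):
    """classify(x) -> True if detected; assumes x_neg currently evades."""
    lo, hi = 0.0, 1.0  # fraction of the way from x_neg toward x_target
    for _ in range(max_queries):
        if hi - lo < tol:
            break
        mid = 0.5 * (lo + hi)
        x = (1.0 - mid) * x_neg + mid * x_target
        if classify(x):
            hi = mid       # detected: back off toward the known-safe point
        else:
            lo = mid       # still evading: move closer to the target
    return (1.0 - lo) * x_neg + lo * x_target

# toy convex-inducing classifier: "positive" inside a unit ball
classify = lambda x: np.linalg.norm(x) < 1.0
x_evade = boundary_search(classify, x_target=np.zeros(3), x_neg=np.full(3, 2.0))
print(np.linalg.norm(x_evade))  # ~1.0: just outside the detected region
```

Each probe costs one query, so refining the evading point to precision tol along one direction takes only O(log(1/tol)) queries, which is the kind of budget the polynomial-query guarantees are built from.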
A PARTAN-Accelerated Frank-Wolfe Algorithm for Large-Scale SVM Classification
Frank-Wolfe algorithms have recently regained the attention of the Machine
Learning community. Their solid theoretical properties and sparsity guarantees
make them a suitable choice for a wide range of problems in this field. In
addition, several variants of the basic procedure exist that improve its
theoretical properties and practical performance. In this paper, we investigate
the application of some of these techniques to Machine Learning, focusing in
particular on a Parallel Tangent (PARTAN) variant of the FW algorithm that has
not been previously suggested or studied for this type of problem. We provide
experiments both in a standard setting and using a stochastic speed-up
technique, showing that the considered algorithms obtain promising results on
several medium and large-scale benchmark datasets for SVM classification.
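The following hedged sketch shows the general PARTAN pattern, an ordinary FW step followed by an extrapolation through the iterate from the previous round, applied to the simplex-constrained quadratic dual used in SVM training; the step rules and names are illustrative, not the paper's implementation.

```python
# Hedged sketch: PARTAN-style (parallel tangent) acceleration of FW for
# maximizing f(a) = a'b - a'Ka over the unit simplex. Illustrative only.
import numpy as np

def line_max(K, grad, d, g_max):
    """Exact maximizing step for the quadratic objective along direction d."""
    curv = d @ K @ d
    if curv <= 0:
        return g_max
    return float(np.clip((grad @ d) / (2.0 * curv), 0.0, g_max))

def partan_fw(K, b, n_iters=300):
    n = K.shape[0]
    a = np.full(n, 1.0 / n)                   # feasible starting point
    a_prev = a.copy()
    for _ in range(n_iters):
        grad = b - 2.0 * (K @ a)
        d = -a.copy()
        d[int(np.argmax(grad))] += 1.0        # FW direction e_s - a
        z = a + line_max(K, grad, d, 1.0) * d # ordinary FW step
        d2 = z - a_prev                       # PARTAN extrapolation direction
        neg = d2 < 0                          # largest step keeping a >= 0
        mu_max = float(np.min(z[neg] / -d2[neg])) if neg.any() else 1.0
        grad_z = b - 2.0 * (K @ z)
        a_prev, a = a, z + line_max(K, grad_z, d2, mu_max) * d2
    return a
```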
Fast SVM training using approximate extreme points
The application of non-linear kernel Support Vector Machines (SVMs) to large
datasets is seriously hampered by excessive training time. We propose a
modification, called the approximate extreme points support vector machine
(AESVM), that is aimed at overcoming this burden. Our approach relies on
conducting the SVM optimization over a carefully selected subset, called the
representative set, of the training dataset. We present analytical results that
indicate the similarity of AESVM and SVM solutions. A linear time algorithm
based on convex hulls and extreme points is used to compute the representative
set in kernel space. Extensive computational experiments on nine datasets
compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05}, BVM
\citep{Tsang07}, LASVM \citep{Bordes05},
\citep{Joachims09}, and the random features method \citep{rahimi07}. Our AESVM
implementation was found to train much faster than the other methods, while its
classification accuracy was similar to that of LIBSVM in all cases. In
particular, for a seizure detection dataset, AESVM training was almost
times faster than LIBSVM and LASVM and more than forty times faster than CVM
and BVM. Additionally, AESVM also gave competitively fast classification times.
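As a loose illustration of the subset-then-train idea (not the AESVM selection algorithm), the sketch below picks a "representative set" by greedy farthest-point sampling in the RBF feature space and trains a standard SVM on it; the selection rule, sizes, and names are all our assumptions.

```python
# Hedged sketch: select a small subset in kernel space, then train an SVM
# on it. A stand-in for AESVM's extreme-point computation, not the real one.
import numpy as np
from sklearn.svm import SVC

def farthest_point_subset(X, m, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    # squared RBF feature-space distance to the chosen set: 2 - 2*k(x, y)
    d2 = 2.0 - 2.0 * np.exp(-gamma * np.sum((X - X[idx[0]])**2, axis=1))
    for _ in range(m - 1):
        i = int(np.argmax(d2))                 # point farthest from the set
        idx.append(i)
        d2_new = 2.0 - 2.0 * np.exp(-gamma * np.sum((X - X[i])**2, axis=1))
        d2 = np.minimum(d2, d2_new)
    return np.array(idx)

rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
sub = farthest_point_subset(X, m=1_000)
clf = SVC(kernel="rbf", gamma=1.0).fit(X[sub], y[sub])  # train on subset only
print(f"subset-trained accuracy on full data: {clf.score(X, y):.3f}")
```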