A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing
The past years have witnessed many dedicated open-source projects that built
and maintain implementations of Support Vector Machines (SVM), parallelized for
GPU, multi-core CPUs and distributed systems. Up to this point, no comparable
effort has been made to parallelize the Elastic Net, despite its popularity in
many high impact applications, including genetics, neuroscience and systems
biology. The first contribution of this paper is theoretical. We
establish a tight link between two seemingly different algorithms and prove
that Elastic Net regression can be reduced to SVM with squared hinge loss
classification. Our second contribution is to derive a practical algorithm
based on this reduction. The reduction enables us to utilize prior efforts in
speeding up and parallelizing SVMs to obtain a highly optimized and parallel
solver for the Elastic Net and Lasso. With a simple wrapper, consisting of only
11 lines of MATLAB code, we obtain an Elastic Net implementation that naturally
utilizes GPUs and multi-core CPUs. We demonstrate on twelve real-world data
sets that our algorithm yields results identical to those of the popular (and
highly optimized) glmnet implementation, but is one to several orders of
magnitude faster.
Comment: 10 pages
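For reference, the Elastic Net objective being reduced can also be solved directly by proximal gradient descent (ISTA); a minimal numpy sketch of that generic approach, not the authors' SVM-based reduction or the MATLAB wrapper, with illustrative data and regularization weights:

```python
import numpy as np

def elastic_net_ista(X, y, lam1, lam2, iters=2000):
    """Proximal-gradient (ISTA) solver for the Elastic Net
    min_w 0.5*||X w - y||^2 + lam1*||w||_1 + lam2*||w||^2."""
    L = np.linalg.norm(X, 2) ** 2 + 2 * lam2      # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y) + 2 * lam2 * w   # gradient of the smooth terms
        z = w - grad / L
        # soft-thresholding prox of the L1 term produces exact zeros
        w = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [1.5, -2.0, 1.0]                     # sparse ground truth
y = X @ w_true + 0.01 * rng.standard_normal(100)
w = elastic_net_ista(X, y, lam1=1.0, lam2=0.1)
```

The L1 term zeroes out the irrelevant coefficients, illustrating the sparsity that makes the Elastic Net attractive in the genetics and neuroscience applications the abstract mentions.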
A PARTAN-Accelerated Frank-Wolfe Algorithm for Large-Scale SVM Classification
Frank-Wolfe algorithms have recently regained the attention of the Machine
Learning community. Their solid theoretical properties and sparsity guarantees
make them a suitable choice for a wide range of problems in this field. In
addition, several variants of the basic procedure exist that improve its
theoretical properties and practical performance. In this paper, we investigate
the application of some of these techniques to Machine Learning, focusing in
particular on a Parallel Tangent (PARTAN) variant of the FW algorithm that has
not previously been suggested or studied for this type of problem. We provide
experiments both in a standard setting and using a stochastic speed-up
technique, showing that the considered algorithms obtain promising results on
several medium- and large-scale benchmark datasets for SVM classification.
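The basic Frank-Wolfe procedure that PARTAN accelerates is simple to state: repeatedly minimize the linearized objective over the feasible set and move toward the resulting vertex. A minimal sketch on a toy problem (a quadratic over the probability simplex with the standard 2/(k+2) step size; the problem and data are illustrative, not the SVM formulation from the paper):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=500):
    """Frank-Wolfe over the probability simplex: the linear subproblem
    argmin_{s in simplex} <grad(x), s> is solved by a single vertex."""
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)          # standard diminishing step size
        x = x + gamma * (s - x)          # iterate stays a convex combination
    return x

# minimize f(x) = 0.5*||x - b||^2; the optimum lies inside the simplex at b
b = np.array([0.2, 0.3, 0.5])
x = frank_wolfe_simplex(lambda x: x - b, np.array([1.0, 0.0, 0.0]))
```

Because each update mixes in a single vertex, the iterate is always a sparse convex combination of vertices, which is the sparsity guarantee the abstract alludes to.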
Large Margin Object Tracking with Circulant Feature Maps
Structured output support vector machine (SVM) based tracking algorithms have
shown favorable performance recently. Nonetheless, the time-consuming candidate
sampling and complex optimization limit their real-time applications. In this
paper, we propose a novel large margin object tracking method which absorbs the
strong discriminative ability of structured output SVM and achieves a
significant speed-up via the correlation filter algorithm. Second, a multimodal
target detection technique is proposed to improve target localization precision
and prevent the model drift introduced by similar objects or background noise.
Third, we exploit the feedback from high-confidence tracking results to avoid
the model corruption problem. We implement two versions of the proposed tracker
with representations from both conventional hand-crafted features and deep
convolutional neural network (CNN) features, to validate the strong
compatibility of the algorithm. The experimental results demonstrate that the
proposed tracker outperforms several state-of-the-art algorithms on challenging
benchmark sequences while running at speeds in excess of 80 frames per second.
The source code and experimental results will be made publicly available.
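The speed of correlation-filter trackers (and the circulant feature maps in the title) rests on one property: correlating a template against every cyclic shift of a search patch can be evaluated all at once in the Fourier domain. A minimal sketch of that trick on a 1-D signal for brevity (the tracker itself learns multi-channel 2-D filters; names and data here are illustrative):

```python
import numpy as np

def correlation_response(template, patch):
    """Response of `template` correlated with every cyclic shift of `patch`,
    computed in O(n log n) via the correlation theorem."""
    return np.fft.ifft(np.fft.fft(patch) * np.conj(np.fft.fft(template))).real

rng = np.random.default_rng(1)
template = rng.standard_normal(64)
shift = 17
patch = np.roll(template, shift)            # target moved by 17 samples
response = correlation_response(template, patch)
estimated_shift = int(np.argmax(response))  # peak location recovers the motion
```

One FFT pair replaces the dense candidate sampling that the abstract identifies as the bottleneck of structured output SVM trackers.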
Block-Coordinate Frank-Wolfe Optimization for Structural SVMs
We propose a randomized block-coordinate variant of the classic Frank-Wolfe
algorithm for convex optimization with block-separable constraints. Despite its
lower iteration cost, we show that it achieves a similar convergence rate in
duality gap as the full Frank-Wolfe algorithm. We also show that, when applied
to the dual structural support vector machine (SVM) objective, this yields an
online algorithm that has the same low iteration complexity as primal
stochastic subgradient methods. However, unlike stochastic subgradient methods,
the block-coordinate Frank-Wolfe algorithm allows us to compute the optimal
step-size and yields a computable duality gap guarantee. Our experiments
indicate that this simple algorithm outperforms competing structural SVM
solvers.
Comment: Appears in Proceedings of the 30th International Conference on
Machine Learning (ICML 2013). 9 pages main text + 22 pages appendix.
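The computable duality gap mentioned in the abstract is a general feature of Frank-Wolfe methods: g(x) = ∇f(x)ᵀ(x − s) is always nonnegative and upper-bounds f(x) − f(x*), and for a quadratic the optimal step size has a closed form. A minimal sketch on a toy simplex-constrained least-squares problem (not the structural SVM dual itself; data and dimensions are illustrative):

```python
import numpy as np

def fw_with_gap(A, b, iters=200):
    """Frank-Wolfe on min_x 0.5*||A x - b||^2 over the probability simplex,
    with exact line search and a duality-gap certificate."""
    x = np.ones(A.shape[1]) / A.shape[1]
    gap = np.inf
    for _ in range(iters):
        g = A.T @ (A @ x - b)                 # gradient
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0                 # linear-minimization oracle
        d = s - x
        gap = -g @ d                          # g(x) = grad . (x - s) >= f(x) - f*
        denom = np.dot(A @ d, A @ d)
        # optimal step for a quadratic, clipped to keep x feasible
        gamma = 1.0 if denom == 0 else min(1.0, max(0.0, -(g @ d) / denom))
        x = x + gamma * d
    return x, gap

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_star = np.zeros(10)
x_star[0] = 1.0                               # optimum at a simplex vertex
b = A @ x_star                                # so f* = 0 by construction
x, gap = fw_with_gap(A, b)
```

The gap is a free by-product of the linear-minimization step, which is why the paper can report it as a stopping certificate at no extra cost, in contrast to stochastic subgradient methods.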