Training Support Vector Machines Using Frank-Wolfe Optimization Methods
Training a Support Vector Machine (SVM) requires the solution of a quadratic
programming problem (QP) whose computational complexity becomes prohibitively
expensive for large scale datasets. Traditional optimization methods cannot be
directly applied in these cases, mainly due to memory restrictions.
By adopting a slightly different objective function and under mild conditions
on the kernel used within the model, efficient algorithms to train SVMs have
been devised under the name of Core Vector Machines (CVMs). This framework
exploits the equivalence of the resulting learning problem with the task of
solving a Minimal Enclosing Ball (MEB) problem in a feature space, where data
are implicitly embedded by a kernel function.
In this paper, we improve on the CVM approach by proposing two novel methods
to build SVMs based on the Frank-Wolfe algorithm, recently revisited as a fast
method to approximate the solution of a MEB problem. In contrast to CVMs, our
algorithms do not require computing the solutions of a sequence of
increasingly complex QPs and are defined using only analytic optimization
steps. Experiments on a large collection of datasets show that our methods
scale better than CVMs in most cases, sometimes at the price of a slightly
lower accuracy. Like CVMs, the proposed methods can be easily extended to
machine learning problems other than binary classification. Moreover, effective
classifiers are obtained even with kernels that do not satisfy the condition
required by CVMs, so the proposed methods apply to a wider set of problems.
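The analytic optimization steps described above can be illustrated with a minimal Euclidean sketch of a Frank-Wolfe / Badoiu-Clarkson-style MEB iteration: each step moves the center toward the farthest point with a closed-form step size, with no QP solve. This is an illustration of the underlying idea only, not the paper's kernelized training algorithms; the function name is ours.

```python
import numpy as np

def meb_frank_wolfe(points, num_iters=1000):
    """Approximate the minimum enclosing ball of `points` (an n x d array)
    with analytic Frank-Wolfe steps: at iteration k, move the current
    center toward the farthest point with step size 1/(k+1).
    Returns (center, radius)."""
    c = points[0].astype(float).copy()
    for k in range(1, num_iters + 1):
        dists = np.linalg.norm(points - c, axis=1)
        far = points[np.argmax(dists)]       # Frank-Wolfe linear-oracle step
        c += (far - c) / (k + 1)             # closed-form step, no QP solve
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius
```

For points sampled on the unit circle, the iterates drive the center toward the origin and the radius toward 1, mirroring the known sublinear convergence of this scheme.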
Information Geometric Security Analysis of Differential Phase Shift Quantum Key Distribution Protocol
This paper analyzes the information-theoretical security of the Differential
Phase Shift (DPS) Quantum Key Distribution (QKD) protocol, using efficient
computational information geometric algorithms. The DPS QKD protocol was
introduced for practical reasons, since the earlier QKD schemes were too
complicated to implement in practice. The DPS QKD protocol can be an integrated
part of current network security applications, hence its practical
implementation is much easier with the current optical devices and optical
networks. The proposed algorithm could be a very valuable tool to answer the
still open questions related to the security bounds of the DPS QKD protocol.

Comment: 42 pages, 34 figures. Journal-ref: Security and Communication
Networks (John Wiley & Sons, 2012); presented in part at the IEEE Int.
Conference on Network and Service Security (IEEE N2S 2009).
On Approximating the Riemannian 1-Center
In this paper, we generalize the simple Euclidean 1-center approximation algorithm of Badoiu and Clarkson (2003) to Riemannian geometries and study the corresponding convergence rate. We then show how to instantiate this generic algorithm in two particular cases: (1) hyperbolic geometry, and (2) the Riemannian manifold of symmetric positive definite matrices.
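The generalization has a simple shape: the Euclidean update "move the center a fraction 1/(k+1) of the way toward the farthest point" becomes "move along the geodesic toward the farthest point". A minimal sketch with the geodesic and metric left pluggable, instantiated here in the Euclidean case for concreteness (the function names are ours, not the paper's):

```python
import numpy as np

def one_center(points, geodesic, dist, num_iters=500):
    """Generic Badoiu-Clarkson-style 1-center iteration.
    `geodesic(x, y, t)` returns the point a fraction t along the geodesic
    from x to y; `dist(x, y)` is the metric. In Euclidean space this
    recovers the 2003 algorithm; plugging in hyperbolic or SPD-matrix
    geodesics gives the Riemannian variants studied in the paper."""
    c = points[0]
    for k in range(1, num_iters + 1):
        far = max(points, key=lambda p: dist(c, p))  # farthest-point oracle
        c = geodesic(c, far, 1.0 / (k + 1))          # geodesic walk step
    return c

# Euclidean instantiation: straight-line geodesics, L2 distance.
euclid_geo = lambda x, y, t: x + t * (y - x)
euclid_dist = lambda x, y: np.linalg.norm(y - x)
```

For two points, the iterates converge to the geodesic midpoint, i.e. the exact 1-center.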
Optimization Algorithms for Faster Computational Geometry
We study two fundamental problems in computational geometry: finding the
maximum inscribed ball (MaxIB) inside a bounded polyhedron defined by $m$
hyperplanes, and the minimum enclosing ball (MinEB) of a set of $n$ points,
both in $d$-dimensional space. We improve the running time of iterative
algorithms on
MaxIB from $\tilde{O}(m d \alpha^3 / \varepsilon^3)$ to $\tilde{O}(m d + m \sqrt{d}\,\alpha / \varepsilon)$, a speed-up up to $\tilde{O}(\sqrt{d}\,\alpha^2 / \varepsilon^2)$, and
MinEB from $\tilde{O}(n d / \sqrt{\varepsilon})$ to $\tilde{O}(n d + n \sqrt{d} / \sqrt{\varepsilon})$, a speed-up up to $\tilde{O}(\sqrt{d})$, where $\varepsilon$ is the approximation parameter and $\alpha$ the aspect ratio of the polyhedron.
Our improvements are based on a novel saddle-point optimization framework. We
propose a new algorithm for solving a class of
regularized saddle-point problems, and apply a randomized Hadamard space
rotation which is a technique borrowed from compressive sensing. Interestingly,
the motivation of using Hadamard rotation solely comes from our optimization
view but not the original geometry problem: indeed, it is not immediately clear
why MaxIB or MinEB, as a geometric problem, should be easier to solve if we
rotate the space by a unitary matrix. We hope that our optimization perspective
sheds light on solving other geometric problems as well.

Comment: An abstract of this paper is going to appear in the conference
proceedings of ICALP 201
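Why a Hadamard rotation can help is easy to see in miniature: the map $x \mapsto \frac{1}{\sqrt{d}} H D x$ (random sign flips $D$ followed by a Walsh-Hadamard transform $H$) is orthogonal, so it preserves all Euclidean norms, and hence MinEB/MaxIB geometry, exactly, while spreading out any coordinate-aligned structure. A small self-contained illustration of this property (not the paper's solver):

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized); len(x) must be a
    power of 2. Returns a new array."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b       # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x

def randomized_hadamard(x, signs):
    """Apply x -> (1/sqrt(d)) * H * D * x, with D = diag(signs).
    Orthogonal, so norms are preserved; a spike vector is smeared
    evenly across all coordinates."""
    return fwht(signs * x) / np.sqrt(len(x))
```

Applying it to a standard basis vector spreads the unit mass into entries of magnitude $1/\sqrt{d}$, while the norm of any input is unchanged up to floating-point error.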
Fast SVM training using approximate extreme points
The application of non-linear kernel Support Vector Machines (SVMs) to large
datasets is seriously hampered by their excessive training time. We propose a
modification, called the approximate extreme points support vector machine
(AESVM), that is aimed at overcoming this burden. Our approach relies on
conducting the SVM optimization over a carefully selected subset, called the
representative set, of the training dataset. We present analytical results that
indicate the similarity of AESVM and SVM solutions. A linear time algorithm
based on convex hulls and extreme points is used to compute the representative
set in kernel space. Extensive computational experiments on nine datasets
compared AESVM to LIBSVM \citep{LIBSVM}, CVM \citep{Tsang05}, BVM
\citep{Tsang07}, LASVM \citep{Bordes05},
\citep{Joachims09}, and the random features method \citep{rahimi07}. Our AESVM
implementation was found to train much faster than the other methods, while its
classification accuracy was similar to that of LIBSVM in all cases. In
particular, for a seizure detection dataset, AESVM training was almost
times faster than LIBSVM and LASVM and more than forty times faster than CVM
and BVM. Additionally, AESVM also gave competitively fast classification times.

Comment: The manuscript in revised form has been submitted to J. Machine
Learning Research
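The intuition behind a representative set built from extreme points can be sketched in a few lines: extreme points of the convex hull are exactly the maximizers of linear functionals, so sampling random directions and keeping the argmax point per direction yields a small subset of hull vertices on which to run the optimization. This is only an illustration of that intuition, not AESVM's actual linear-time kernel-space algorithm; the function name is ours.

```python
import numpy as np

def approx_extreme_points(X, num_directions=50, seed=0):
    """Illustrative selection of a small representative subset of the
    rows of X (an n x d array): for each random direction w, keep the
    point maximizing <x, w>. Interior points are never selected, since
    they never maximize a linear functional."""
    rng = np.random.default_rng(seed)
    idx = set()
    for _ in range(num_directions):
        w = rng.normal(size=X.shape[1])
        idx.add(int(np.argmax(X @ w)))   # argmax over points is a hull vertex
    return sorted(idx)
```

On the four corners of a unit square plus its center, the corners are recovered and the interior center point is always discarded, which is the kind of pruning that lets the downstream SVM train on far fewer points.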