6,551 research outputs found
Model Selection for Support Vector Machine Classification
We address the problem of model selection for Support Vector Machine (SVM)
classification. For fixed functional form of the kernel, model selection
amounts to tuning kernel parameters and the slack penalty coefficient C. We
begin by reviewing a recently developed probabilistic framework for SVM
classification. An extension to the case of SVMs with quadratic slack penalties
is given and a simple approximation for the evidence is derived, which can be
used as a criterion for model selection. We also derive the exact gradients of
the evidence in terms of posterior averages and describe how they can be
estimated numerically using Hybrid Monte Carlo techniques. Though
computationally demanding, the resulting gradient ascent algorithm is a useful
baseline tool for probabilistic SVM model selection, since it can locate maxima
of the exact (unapproximated) evidence. We then perform extensive experiments
on several benchmark data sets. The aim of these experiments is to compare the
performance of probabilistic model selection criteria with alternatives based
on estimates of the test error, namely the so-called ``span estimate'' and
Wahba's Generalized Approximate Cross-Validation (GACV) error. We find that all
the ``simple'' model criteria (Laplace evidence approximations, and the Span
and GACV error estimates) exhibit multiple local optima with respect to the
hyperparameters. While some of these give performance that is competitive with
results from other approaches in the literature, a significant fraction lead to
rather higher test errors. The results for the evidence gradient ascent method
show that also the exact evidence exhibits local optima, but these give test
errors which are much less variable and also consistently lower than for the
simpler model selection criteria.
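Since the abstract includes no code, the following minimal sketch only illustrates the hyperparameter space being searched (the RBF kernel width gamma and the slack penalty C). It uses plain cross-validation error as a stand-in criterion rather than the evidence, span, or GACV estimates compared above, and the dataset and parameter grids are illustrative choices.

```python
# Minimal sketch of SVM model selection over (C, gamma), assuming an RBF kernel.
# Cross-validation error stands in for the selection criterion; the paper instead
# compares evidence approximations, the span estimate, and Wahba's GACV error.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

best = None
for C in np.logspace(-2, 3, 6):          # slack penalty coefficient C
    for gamma in np.logspace(-3, 1, 5):  # RBF kernel width parameter
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
        if best is None or score > best[0]:
            best = (score, C, gamma)

print("best CV accuracy %.3f at C=%.3g, gamma=%.3g" % best)
```

A grid search of this kind sidesteps the local-optima issue noted above only by brute force; the criteria studied in the paper are instead evaluated (or ascended by gradient) over the same (C, gamma) space.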
Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
We establish optimal convergence rates for a decomposition-based scalable
approach to kernel ridge regression. The method is simple to describe: it
randomly partitions a dataset of size N into m subsets of equal size, computes
an independent kernel ridge regression estimator for each subset, then averages
the local solutions into a global predictor. This partitioning leads to a
substantial reduction in computation time versus the standard approach of
performing kernel ridge regression on all N samples. Our two main theorems
establish that despite the computational speed-up, statistical optimality is
retained: as long as m is not too large, the partition-based estimator achieves
the statistical minimax rate over all estimators using the set of N samples. As
concrete examples, our theory guarantees that the number of processors m may
grow nearly linearly for finite-rank kernels and Gaussian kernels and
polynomially in N for Sobolev spaces, which in turn allows for substantial
reductions in computational cost. We conclude with experiments on both
simulated data and a music-prediction task that complement our theoretical
results, exhibiting the computational and statistical benefits of our approach.
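The partition-and-average estimator described above is simple enough to sketch directly. The following is a minimal illustration assuming a Gaussian kernel; the function names, the ridge parameter lam, and the bandwidth are illustrative choices, not part of the paper.

```python
# Minimal sketch of divide-and-conquer kernel ridge regression:
# split the N samples into m equal parts, fit an independent KRR estimator
# on each part, and average the local predictions into a global predictor.
import numpy as np

def gaussian_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def dc_krr_predict(X, y, X_test, m=4, lam=1e-2, bandwidth=1.0):
    idx = np.random.permutation(len(X))            # random partition of the data
    preds = []
    for part in np.array_split(idx, m):
        Xp, yp = X[part], y[part]
        K = gaussian_kernel(Xp, Xp, bandwidth)
        alpha = np.linalg.solve(K + lam * len(Xp) * np.eye(len(Xp)), yp)  # local KRR fit
        preds.append(gaussian_kernel(X_test, Xp, bandwidth) @ alpha)      # local prediction
    return np.mean(preds, axis=0)                   # average of the local solutions
```

Each local solve costs roughly O((N/m)^3) instead of O(N^3) for the full problem, which is the source of the computational speed-up; the theorems referenced above bound how large m may grow before the averaged predictor loses statistical optimality.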
Algorithmic patterns for $\mathcal{H}^2$-matrices on many-core processors
In this work, we consider the reformulation of hierarchical ($\mathcal{H}^2$)
matrix algorithms for many-core processors with a model implementation on
graphics processing units (GPUs). $\mathcal{H}^2$ matrices approximate specific
dense matrices, e.g., from discretized integral equations or kernel ridge
regression, leading to log-linear time complexity in dense matrix-vector
products. The parallelization of $\mathcal{H}^2$ matrix operations on many-core
processors is difficult due to the complex nature of the underlying algorithms.
While previous algorithmic advances for many-core hardware focused on
accelerating existing $\mathcal{H}^2$ matrix CPU implementations with many-core
processors, we here aim at relying entirely on that processor type. As our main
contribution, we introduce the parallel algorithmic patterns needed to map the
full $\mathcal{H}^2$ matrix construction and the fast matrix-vector product to
many-core hardware. The crucial ingredients are space-filling curves, parallel
tree traversal, and batching of linear algebra operations. The resulting model
GPU implementation, hmglib, is, to the best of the authors' knowledge, the
first entirely GPU-based open-source $\mathcal{H}^2$ matrix library of this
kind. We conclude this work with an in-depth performance analysis and a
comparative performance study against a standard $\mathcal{H}^2$ matrix
library, highlighting profound speedups of our many-core parallel approach.
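One of the ingredients named above, space-filling curves, can be illustrated in isolation: ordering points by Morton (Z-order) codes keeps spatially nearby points close in the resulting order, which supports the hierarchical clustering and batched tree traversal. The sketch below is a generic 2D Morton ordering, not code from hmglib; the 16-bit quantization and the unit-square assumption are illustrative choices.

```python
# Minimal sketch of a 2D Morton (Z-order) space-filling curve ordering,
# as used to cluster points before building a hierarchical matrix tree.
import numpy as np

def _spread_bits(v):
    # Interleave zero bits between the bits of a 16-bit integer.
    v = (v | (v << 8)) & 0x00FF00FF
    v = (v | (v << 4)) & 0x0F0F0F0F
    v = (v | (v << 2)) & 0x33333333
    v = (v | (v << 1)) & 0x55555555
    return v

def morton_order(points):
    # Quantize points in [0,1)^2 onto a 2^16 grid and sort by interleaved bits,
    # so that spatially nearby points end up close together in the ordering.
    q = np.clip((points * (1 << 16)).astype(np.uint32), 0, (1 << 16) - 1)
    codes = (_spread_bits(q[:, 1]) << 1) | _spread_bits(q[:, 0])
    return np.argsort(codes)

pts = np.random.rand(1000, 2)
ordering = morton_order(pts)  # index permutation along the Z-order curve
```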
Online Learning of Noisy Data with Kernels
We study online learning when individual instances are corrupted by
adversarially chosen random noise. We assume the noise distribution is unknown,
and may change over time with no restriction other than having zero mean and
bounded variance. Our technique relies on a family of unbiased estimators for
non-linear functions, which may be of independent interest. We show that a
variant of online gradient descent can learn functions in any dot-product
(e.g., polynomial) or Gaussian kernel space with any analytic convex loss
function. Our variant uses randomized estimates that need to query a random
number of noisy copies of each instance, where with high probability this
number is upper bounded by a constant. Allowing such multiple queries cannot be
avoided: Indeed, we show that online learning is in general impossible when
only one noisy copy of each instance can be accessed.
Comment: This is a full version of the paper appearing in the 23rd
International Conference on Learning Theory (COLT 2010).
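The key device described above, an unbiased estimator built from a random number of noisy copies of each instance, can be illustrated for a finite-degree polynomial kernel. The sketch below is a simplified toy version of that idea (uniformly random degree, Gaussian noise, inner products in the input space); it is not the paper's construction for general dot-product or Gaussian kernels.

```python
# Toy illustration of the unbiased-estimator idea: with independent zero-mean
# noise on each queried copy of x, the product of n noisy inner products is an
# unbiased estimate of (x . w)^n, and randomizing the degree gives an unbiased
# estimate of a polynomial of x . w using a random number of queries.
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def noisy_copy(x, sigma=0.1):
    # One query: the true instance corrupted by zero-mean noise.
    return x + rng.normal(0.0, sigma, size=x.shape)

def unbiased_poly_estimate(w, x, d=3):
    # Unbiased estimate of (1 + x.w)^d = sum_n C(d,n) (x.w)^n, using a
    # uniformly chosen degree n and n independent noisy copies of x.
    n = rng.integers(0, d + 1)                               # random number of queries
    prod = np.prod([noisy_copy(x) @ w for _ in range(n)])    # unbiased for (x.w)^n
    return (d + 1) * comb(d, n) * prod                        # importance weight 1/q_n = d+1

# Sanity check: the average over many draws approaches (1 + x.w)^d.
x, w = rng.normal(size=5), rng.normal(size=5)
est = np.mean([unbiased_poly_estimate(w, x) for _ in range(200000)])
print(est, (1 + x @ w) ** 3)
```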