Asymptotic learning curves of kernel methods: empirical data v.s. Teacher-Student paradigm
How many training data are needed to learn a supervised task? It is often observed that the generalization error decreases as a power law $\epsilon \sim n^{-\beta}$, where $n$ is the number of training examples and $\beta$ an exponent that depends on both data and algorithm. In this work we measure $\beta$ when applying kernel methods to real datasets (MNIST and CIFAR10), for both regression and classification tasks and for Gaussian or Laplace kernels. To rationalize the existence of non-trivial
exponents that can be independent of the specific kernel used, we study the
Teacher-Student framework for kernels. In this scheme, a Teacher generates data
according to a Gaussian random field, and a Student learns them via kernel
regression. With a simplifying assumption -- namely that the data are sampled
from a regular lattice -- we derive $\beta$ analytically for translation-invariant kernels, using previous results from the kriging literature. Provided that the Student is not too sensitive to high frequencies, $\beta$ depends only on the smoothness and dimension of the training data. We confirm numerically
that these predictions hold when the training points are sampled at random on a
hypersphere. Overall, the test error is found to be controlled by the magnitude
of the projection of the true function on the kernel eigenvectors whose rank is
larger than $n$. Using this idea, we relate the exponent $\beta$ to a second exponent describing how the coefficients of the true function in the eigenbasis of the kernel decay with rank. We extract this decay exponent from real data by performing kernel PCA, leading to predictions for $\beta$ for both MNIST and CIFAR10 that are in good agreement with observations. We argue
that these rather large exponents are possible due to the small effective
dimension of the data.
Comment: We added (i) the prediction of the exponent $\beta$ for real data using kernel PCA; (ii) the generalization of our results to non-Gaussian data from reference [11] (Bordelon et al., "Spectrum Dependent Learning Curves in Kernel Regression and Wide Neural Networks").
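As a rough illustration of the measurement described in this abstract, the following minimal sketch (assuming scikit-learn and NumPy; the helper name estimate_beta and all sizes are hypothetical) fits the test error of kernel ridge regression against the training-set size on a log-log scale and reads off the slope as $-\beta$. It is not the authors' code, only one plausible way to set up such an experiment.

```python
# Hedged sketch: estimate a learning-curve exponent beta by fitting
# log(test error) vs. log(training-set size) for kernel ridge regression
# with a Laplace kernel (one of the kernels mentioned in the abstract).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import laplacian_kernel

def estimate_beta(X, y, X_test, y_test, sizes=(128, 256, 512, 1024, 2048), seed=0):
    rng = np.random.default_rng(seed)
    errors = []
    for n in sizes:
        idx = rng.choice(len(X), size=n, replace=False)
        Xn, yn = X[idx], y[idx]
        K_train = laplacian_kernel(Xn, Xn)          # n x n Gram matrix
        K_test = laplacian_kernel(X_test, Xn)       # test similarities to training set
        model = KernelRidge(alpha=1e-8, kernel="precomputed").fit(K_train, yn)
        errors.append(np.mean((model.predict(K_test) - y_test) ** 2))
    # Fit log(error) = -beta * log(n) + const; the slope gives -beta.
    slope, _ = np.polyfit(np.log(sizes), np.log(errors), deg=1)
    return -slope
```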
Kernel-Based Just-In-Time Learning for Passing Expectation Propagation Messages
We propose an efficient nonparametric strategy for learning a message
operator in expectation propagation (EP), which takes as input the set of
incoming messages to a factor node, and produces an outgoing message as output.
This learned operator replaces the multivariate integral required in classical
EP, which may not have an analytic expression. We use kernel-based regression,
which is trained on a set of probability distributions representing the
incoming messages, and the associated outgoing messages. The kernel approach
has two main advantages: first, it is fast, as it is implemented using a novel
two-layer random feature representation of the input message distributions;
second, it has principled uncertainty estimates, and can be cheaply updated
online, meaning it can request and incorporate new training data when it
encounters inputs on which it is uncertain. In experiments, our approach is
able to solve learning problems where a single message operator is required for
multiple, substantially different data sets (logistic regression for a variety
of classification problems), where it is essential to accurately assess
uncertainty and to efficiently and robustly update the message operator.
Comment: Accepted to UAI 2015. Corrected typos. Added more content to the appendix. Main results unchanged.
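A minimal sketch of the flavor of this approach, assuming random Fourier features for an RBF kernel as a stand-in for the paper's two-layer random feature representation: Bayesian linear regression on the features returns both a predicted outgoing-message parameter vector and a per-input predictive variance that could flag uncertain inputs for which new training data should be requested. The class name RandomFeatureOperator and every hyperparameter are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: random-feature regression with uncertainty estimates,
# loosely mirroring a learned EP message operator.
import numpy as np

class RandomFeatureOperator:
    def __init__(self, dim_in, dim_out, n_features=500, lengthscale=1.0, noise=1e-2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=1.0 / lengthscale, size=(dim_in, n_features))
        self.b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
        self.noise = noise
        self.A_inv = np.eye(n_features)                 # posterior covariance of weights
        self.mean = np.zeros((n_features, dim_out))     # posterior mean of weights

    def _phi(self, X):
        # Random Fourier features approximating an RBF kernel.
        return np.sqrt(2.0 / self.W.shape[1]) * np.cos(X @ self.W + self.b)

    def fit(self, X, Y):
        # Bayesian linear regression: N(0, I) prior on weights, Gaussian noise.
        Phi = self._phi(X)
        A = Phi.T @ Phi / self.noise + np.eye(Phi.shape[1])
        self.A_inv = np.linalg.inv(A)
        self.mean = self.A_inv @ Phi.T @ Y / self.noise

    def predict(self, X):
        Phi = self._phi(X)
        mu = Phi @ self.mean
        # Predictive variance per input; large values flag inputs on which the
        # operator is uncertain and could request fresh training data.
        var = np.einsum("ij,jk,ik->i", Phi, self.A_inv, Phi) + self.noise
        return mu, var
```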
Learning with Algebraic Invariances, and the Invariant Kernel Trick
When solving data analysis problems it is important to integrate prior
knowledge and/or structural invariances. This paper contributes a novel
framework for incorporating algebraic invariance structure into kernels. In
particular, we show that algebraic properties such as sign symmetries in data,
phase independence, scaling, etc. can be included easily by essentially
performing the kernel trick twice. We demonstrate the usefulness of our theory
in simulations on selected applications such as sign-invariant spectral
clustering and underdetermined ICA.
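One simple way to encode such a symmetry, sketched below under the assumption of an RBF base kernel, is to average the kernel over the sign-flip group, which keeps it positive semidefinite and makes it invariant to sign changes of either argument; it can then be plugged into, e.g., spectral clustering as a precomputed affinity. This is only an illustration of a sign-invariant kernel, not the paper's more general construction of applying the kernel trick twice, and the helper name sign_invariant_kernel is hypothetical.

```python
# Hedged sketch: a sign-invariant kernel by group averaging, used as a
# precomputed affinity for spectral clustering.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.cluster import SpectralClustering

def sign_invariant_kernel(X, Y, gamma=1.0):
    # Averaging over the sign-flip orbit gives k(x, y) = k(x, -y) = k(-x, y)
    # and preserves positive semidefiniteness.
    return 0.5 * (rbf_kernel(X, Y, gamma=gamma) + rbf_kernel(X, -Y, gamma=gamma))

# Toy usage: sign-invariant spectral clustering on random data.
X = np.random.default_rng(0).normal(size=(200, 5))
K = sign_invariant_kernel(X, X)
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(K)
```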
Kernelized Hashcode Representations for Relation Extraction
Kernel methods have produced state-of-the-art results for a number of NLP
tasks such as relation extraction, but suffer from poor scalability due to the
high cost of computing kernel similarities between natural language structures.
A recently proposed technique, kernelized locality-sensitive hashing (KLSH),
can significantly reduce the computational cost, but is only applicable to
classifiers operating on kNN graphs. Here we propose to use random subspaces of
KLSH codes for efficiently constructing an explicit representation of NLP
structures suitable for general classification methods. Further, we propose an
approach for optimizing the KLSH model for classification problems by
maximizing an approximation of mutual information between the KLSH codes
(feature vectors) and the class labels. We evaluate the proposed approach on
biomedical relation extraction datasets, and observe significant and robust
improvements in accuracy w.r.t. state-of-the-art classifiers, along with
drastic (orders-of-magnitude) speedup compared to conventional kernel methods.
Comment: To appear in the conference proceedings of AAAI-1
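A simplified, hypothetical stand-in for such hashcode representations: describe each input by its kernel similarities to a small reference set, hash that similarity vector with random hyperplanes into binary codes, and train an off-the-shelf classifier on the codes as explicit features. The function hashcodes and all sizes below are illustrative assumptions; the paper's actual KLSH construction and its mutual-information-based optimization are more involved.

```python
# Hedged sketch: kernel-similarity-based binary codes as explicit features
# for a general-purpose classifier.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.ensemble import RandomForestClassifier

def hashcodes(X, X_ref, n_bits=64, gamma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    K = rbf_kernel(X, X_ref, gamma=gamma)           # similarities to the reference set
    H = rng.normal(size=(X_ref.shape[0], n_bits))   # random hyperplanes in similarity space
    return (K @ H > 0).astype(np.uint8)             # one binary code per input

# Toy usage: hash both splits against the same reference set, then train
# any standard classifier on the binary codes.
rng = np.random.default_rng(1)
X_train, y_train = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)
X_test = rng.normal(size=(100, 20))
X_ref = X_train[:100]
clf = RandomForestClassifier(random_state=0).fit(hashcodes(X_train, X_ref), y_train)
preds = clf.predict(hashcodes(X_test, X_ref))
```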