Geometric Learning with Positively Decomposable Kernels
Kernel methods are powerful tools in machine learning. Classical kernel
methods are based on positive-definite kernels, which map data spaces into
reproducing kernel Hilbert spaces (RKHS). For non-Euclidean data spaces,
positive-definite kernels are difficult to come by. In this case, we propose
the use of reproducing kernel Krein space (RKKS)-based methods, which require
only kernels that admit a positive decomposition. We show that one does not
need to access this decomposition in order to learn in RKKS. We then
investigate the conditions under which a kernel is positively decomposable. We
show that invariant kernels admit a positive decomposition on homogeneous
spaces under tractable regularity assumptions. This makes them much easier to
construct than positive-definite kernels, providing a route for learning with
kernels for non-Euclidean data. By the same token, this provides theoretical
foundations for RKKS-based methods in general.
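A minimal numpy sketch of this setting (an assumed illustration, not the paper's algorithm): a ridge-style regressor is fit directly from a possibly indefinite Gram matrix, while the positive decomposition K = K+ - K- is computed separately, purely to show that the learner never needs it. The toy kernel k(x, z) = <x, z> - c and the regularization constant lam are assumptions.

import numpy as np

def indefinite_krr_fit(K, y, lam=1e-1):
    # Solve (K + lam*I) alpha = y with a possibly indefinite Gram matrix K.
    # Only K itself is used, mirroring the claim that the decomposition
    # K = K+ - K- need not be accessed in order to learn. (For a generic
    # indefinite K and small lam the system is invertible.)
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def positive_decomposition(K):
    # For illustration only: split K into PSD parts with K = K_plus - K_minus,
    # obtained from the eigendecomposition of K.
    w, V = np.linalg.eigh(K)
    K_plus = (V * np.maximum(w, 0.0)) @ V.T
    K_minus = (V * np.maximum(-w, 0.0)) @ V.T
    return K_plus, K_minus

# Toy data with an indefinite kernel k(x, z) = <x, z> - c.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0])
K = X @ X.T - 0.5                        # generally indefinite
alpha = indefinite_krr_fit(K, y)         # learning: no decomposition required
K_plus, K_minus = positive_decomposition(K)
assert np.allclose(K, K_plus - K_minus, atol=1e-8)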
Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning
For supervised and unsupervised learning, positive definite kernels make it
possible to use large and potentially infinite-dimensional feature spaces with a
computational cost that only depends on the number of observations. This is
usually done through the penalization of predictor functions by Euclidean or
Hilbertian norms. In this paper, we explore penalizing by sparsity-inducing
norms such as the l1-norm or the block l1-norm. We assume that the kernel
decomposes into a large sum of individual basis kernels which can be embedded
in a directed acyclic graph; we show that it is then possible to perform kernel
selection through a hierarchical multiple kernel learning framework, in
polynomial time in the number of selected kernels. This framework is naturally
applied to nonlinear variable selection; our extensive simulations on
synthetic datasets and datasets from the UCI repository show that efficiently
exploring the large feature space through sparsity-inducing norms leads to
state-of-the-art predictive performance.
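To convey the computational flavour (this is not the paper's DAG-structured hierarchical algorithm), the following sketch learns a sparse combination of positive semidefinite basis kernels for kernel ridge regression by projected gradient descent on the probability simplex, which acts as an l1-type constraint on the kernel weights; the step size, regularization constant, and toy bandwidths are illustrative assumptions.

import numpy as np

def project_simplex(v):
    # Euclidean projection of v onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def l1_mkl_krr(kernels, y, lam=1e-1, step=0.5, n_iter=200):
    # Minimize the KRR objective J(eta), proportional to
    # y' (K(eta) + lam*I)^{-1} y, over eta in the simplex; the simplex
    # constraint is l1-like, so many kernel weights are driven exactly
    # to zero, performing kernel selection.
    m, n = len(kernels), len(y)
    eta = np.ones(m) / m
    for _ in range(n_iter):
        K = sum(e * Kj for e, Kj in zip(eta, kernels))
        alpha = np.linalg.solve(K + lam * np.eye(n), y)
        grad = -np.array([alpha @ Kj @ alpha for Kj in kernels])
        eta = project_simplex(eta - step * grad)
    return eta

# Toy usage: select among Gaussian basis kernels of different bandwidths.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sin(2 * X[:, 0])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
kernels = [np.exp(-g * d2) for g in (0.1, 1.0, 10.0)]
eta = l1_mkl_krr(kernels, y)    # typically sparse: most weight on one bandwidth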
Regularized Regression Problem in hyper-RKHS for Learning Kernels
This paper generalizes the two-stage kernel learning framework, illustrates
its utility for kernel learning and out-of-sample extensions, and proves
asymptotic convergence results for the introduced kernel learning model.
Algorithmically, we extend target alignment by hyper-kernels in the two-stage
kernel learning framework. The associated kernel learning task is formulated as
a regression problem in a hyper-reproducing kernel Hilbert space (hyper-RKHS),
i.e., learning on the space of kernels itself. To solve this problem, we
present two regression models with bivariate forms in this space, including
kernel ridge regression (KRR) and support vector regression (SVR) in the
hyper-RKHS. This provides significant model flexibility for kernel learning,
with strong performance in real-world applications. Specifically,
our kernel learning framework is general, that is, the learned underlying
kernel can be positive definite or indefinite, which adapts to various
requirements in kernel learning. Theoretically, we study the convergence
behavior of these learning algorithms in the hyper-RKHS and derive the learning
rates. Unlike traditional approximation analysis in an RKHS, our analysis must
account for the non-trivial dependence among pairwise samples and the
characterisation of the hyper-RKHS. To the best of our knowledge, this is the
first work in learning theory to study the approximation performance of
regularized regression in a hyper-RKHS.
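As a rough illustration of regression in a hyper-RKHS (an assumed simplification, not the paper's exact formulation), the sketch below runs kernel ridge regression over pairs (x_i, x_j) against ideal similarities y_i * y_j, using a product-of-Gaussians hyper-kernel; the learned kernel it returns need not be positive definite, matching the flexibility claimed above. The Gram matrix over pairs is n^2 x n^2, so this is feasible only for small n.

import numpy as np

def gauss(A, B, gamma):
    # Gaussian kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def learn_kernel_hyper_krr(X, y, gamma=1.0, lam=1e-2):
    # Stage one of a two-stage scheme: regress target similarities y_i*y_j
    # onto pairs (x_i, x_j) with the hyper-kernel
    # kappa((x_i, x_j), (x_p, x_q)) = G[i, p] * G[j, q].
    n = X.shape[0]
    pairs = np.array([(i, j) for i in range(n) for j in range(n)])
    T = y[pairs[:, 0]] * y[pairs[:, 1]]          # target similarities
    G = gauss(X, X, gamma)
    H = G[pairs[:, 0]][:, pairs[:, 0]] * G[pairs[:, 1]][:, pairs[:, 1]]
    beta = np.linalg.solve(H + lam * np.eye(len(pairs)), T)

    def k_learned(a, b):
        # The learned kernel: an expansion over training pairs; it may be
        # indefinite even though the hyper-kernel itself is positive definite.
        ga = gauss(a[None, :], X, gamma)[0]
        gb = gauss(b[None, :], X, gamma)[0]
        return beta @ (ga[pairs[:, 0]] * gb[pairs[:, 1]])

    return k_learned

# Toy usage on a small sample (the pair system is n^2 x n^2).
rng = np.random.default_rng(0)
X = rng.normal(size=(15, 2))
y = np.sign(X[:, 0])
k = learn_kernel_hyper_krr(X, y)
print(k(X[0], X[1]))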
Hypernode Graphs for Learning from Binary Relations between Groups in Networks
The aim of this paper is to propose methods for learning from interactions between groups in networks. In Ricatte et al. (2014) we introduced hypernode graphs, a formal model able to represent group interactions and to infer individual properties as well. Spectral graph learning algorithms were extended to the case of hypernode graphs. As a proof of concept, we showed how to model multi-player games with hypernode graphs, and that spectral learning algorithms over hypernode graphs obtain results competitive with specialized skill-rating algorithms. In this paper, we explore theoretical issues for hypernode graphs. We show that hypernode graph kernels strictly generalize graph kernels and hypergraph kernels. We show that hypernode graphs correspond to signed graphs such that the matrix D − W is positive semidefinite. It should be noted that homophilic relations between groups may lead to non-homophilic relations between individuals. Moreover, we also present some issues concerning random walks and the resistance distance for hypernode graphs.
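A small numpy sketch of the signed-graph correspondence just described, under assumed conventions: W is a signed weight matrix, D is the diagonal matrix of its row sums, L = D − W is required to be positive semidefinite, and unlabeled node values are inferred with the standard harmonic solution; the toy hyperedge construction at the end is an illustrative reading of the group-interaction model, not code from the paper.

import numpy as np

def hypernode_laplacian(W):
    # L = D - W for a signed weight matrix W, with D = diag(row sums of W).
    # For a valid hypernode graph this matrix is positive semidefinite.
    L = np.diag(W.sum(axis=1)) - W
    assert np.all(np.linalg.eigvalsh(L) > -1e-9), "D - W is not PSD"
    return L

def harmonic_labels(L, labeled, y_labeled, unlabeled):
    # Minimize f' L f with the labeled entries of f held fixed: the usual
    # harmonic solution, here applied to a signed (hypernode) Laplacian.
    Luu = L[np.ix_(unlabeled, unlabeled)]
    Lul = L[np.ix_(unlabeled, labeled)]
    return np.linalg.solve(Luu, -Lul @ y_labeled)

# Toy construction: one hyperedge linking group {0, 1} to group {2, 3}
# (entries of h1 sum to zero), plus a plain edge between nodes 0 and 1.
h1 = np.array([1.0, 1.0, -1.0, -1.0])
h2 = np.array([1.0, -1.0, 0.0, 0.0])
L_true = np.outer(h1, h1) + np.outer(h2, h2)   # PSD by construction
W = np.diag(np.diag(L_true)) - L_true          # signed weights: some negative
L = hypernode_laplacian(W)                     # recovers L_true
f_u = harmonic_labels(L, [0, 3], np.array([1.0, -1.0]), [1, 2])
print(f_u)    # inferred values for nodes 1 and 2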