A theoretical framework for supervised learning from regions
Supervised learning is investigated when the data are represented not only by labeled points but also by labeled regions of the input space. In the limiting
case where such regions degenerate to single points, the proposed approach reduces to the classical learning setting. The adopted framework entails the minimization
of a functional obtained by introducing a loss function that involves such regions. An additive regularization term is expressed via differential operators that model
the smoothness properties of the desired input/output relationship. Representer
theorems are given, proving that the optimization problem associated with learning
from labeled regions has a unique solution, which takes the form of a linear
combination of kernel functions determined by the differential operators together
with the regions themselves. As a relevant situation, the case of regions given
by multi-dimensional intervals (i.e., "boxes") is investigated, which models prior
knowledge expressed by logical propositions.
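The box-shaped regions make the idea easy to prototype. The sketch below is not the paper's construction (which derives kernels from differential operators); it simply approximates a kernel between regions by Monte Carlo averaging of a Gaussian kernel over each box, then fits a regularized kernel expansion to the region labels. All names (`box_kernel`, `predict`) and parameter values are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def box_kernel(box_a, box_b, gamma=1.0, n_mc=300):
    # Monte Carlo estimate of the kernel averaged over two boxes;
    # a degenerate box (lo == hi) recovers the ordinary point kernel.
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    xa = rng.uniform(lo_a, hi_a, size=(n_mc, len(lo_a)))
    xb = rng.uniform(lo_b, hi_b, size=(n_mc, len(lo_b)))
    return float(np.mean([rbf(p, q, gamma) for p, q in zip(xa, xb)]))

# Two labeled boxes in the plane: one positive, one negative.
boxes = [((np.zeros(2), np.ones(2)), +1.0),
         ((3.0 * np.ones(2), 4.0 * np.ones(2)), -1.0)]
K = np.array([[box_kernel(a, b) for b, _ in boxes] for a, _ in boxes])
y = np.array([label for _, label in boxes])
alpha = np.linalg.solve(K + 0.1 * np.eye(len(boxes)), y)  # regularized fit

def predict(x):
    point = (x, x)  # a point is a degenerate box
    return sum(a * box_kernel(point, b) for a, (b, _) in zip(alpha, boxes))
```

A point query is handled as a degenerate box, matching the limiting case described in the abstract.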
Learning Sets with Separating Kernels
We consider the problem of learning a set from random samples. We show how
relevant geometric and topological properties of a set can be studied
analytically using concepts from the theory of reproducing kernel Hilbert
spaces. A new kind of reproducing kernel, that we call separating kernel, plays
a crucial role in our study and is analyzed in detail. We prove a new analytic
characterization of the support of a distribution, which naturally leads to a
family of provably consistent regularized learning algorithms, and we discuss
the stability of these methods with respect to random sampling. Numerical
experiments show that the approach is competitive with, and often better than,
other state-of-the-art techniques.
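To make the support-estimation idea concrete, one member of this family of regularized algorithms can be sketched as follows: score a query point by how well its kernel section is represented by the sampled sections, via a regularized inverse of the Gram matrix, and declare the point inside the set when the score is large. This is a hedged illustration in the spirit of the abstract, not the paper's exact estimator; the Gaussian kernel, the `score` function, and all constants are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw samples from the set to be learned: the unit disc.
cand = rng.uniform(-1.0, 1.0, size=(1000, 2))
X = cand[np.linalg.norm(cand, axis=1) <= 1.0][:200]
n = len(X)

gamma, lam = 5.0, 1e-2
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)                      # Gram matrix of the samples
A = np.linalg.inv(K + n * lam * np.eye(n))   # regularized inverse

def score(x):
    # Large when k_x is well represented by the sampled kernel sections,
    # i.e. when x lies in (or near) the support of the distribution.
    kx = np.exp(-gamma * ((X - x) ** 2).sum(-1))
    return float(kx @ A @ kx)

inside = score(np.zeros(2))        # center of the disc
outside = score(np.array([3.0, 0.0]))  # far from the support
```

Thresholding `score` then yields a set estimate whose stability under random sampling is exactly the question the abstract studies.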
Support vector machine for functional data classification
In many applications, input data are sampled functions taking their values in
infinite-dimensional spaces rather than standard vectors. This has complex
consequences for data analysis algorithms and motivates modifications of them.
Indeed, most traditional data analysis tools for regression,
classification, and clustering have been adapted to functional inputs under the
general name of Functional Data Analysis (FDA). In this paper, we investigate
the use of Support Vector Machines (SVMs) for functional data analysis and we
focus on the problem of curve discrimination. SVMs are large-margin
classifiers based on implicit nonlinear mappings of the considered data into
high-dimensional spaces via kernels. We show how to define simple kernels that
take into account the functional nature of the data and lead to consistent
classification. Experiments conducted on real-world data emphasize the benefit
of taking into account some functional aspects of the problems.
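A minimal illustration of a kernel that respects the functional nature of the data: approximate the squared L2 distance between curves by quadrature on a common sampling grid, and plug it into a Gaussian kernel. For brevity the sketch fits a kernel ridge classifier rather than a full SVM; the two curve classes, the grid, and all parameters are our own toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.linspace(0.0, 2.0 * np.pi, 50)  # common sampling grid
dt = t[1] - t[0]

def make_curves(base, n):
    # n noisy discretized curves around the base function
    return base(t) + 0.2 * rng.normal(size=(n, len(t)))

X = np.vstack([make_curves(np.sin, 20), make_curves(np.cos, 20)])
y = np.array([1.0] * 20 + [-1.0] * 20)

def func_kernel(F, G, gamma=0.5):
    # Gaussian kernel on a quadrature estimate of the squared
    # L2 distance between the underlying curves
    d2 = dt * ((F[:, None, :] - G[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = func_kernel(X, X)
alpha = np.linalg.solve(K + 0.1 * np.eye(len(y)), y)

def classify(curves):
    return np.sign(func_kernel(np.atleast_2d(curves), X) @ alpha)
```

Because the kernel depends only on an L2 distance between functions, the classifier is insensitive to how finely the curves are sampled, which is the kind of functional aspect the abstract advocates exploiting.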
A Unifying Framework in Vector-valued Reproducing Kernel Hilbert Spaces for Manifold Regularization and Co-Regularized Multi-view Learning
This paper presents a general vector-valued reproducing kernel Hilbert space
(RKHS) framework for the problem of learning an unknown functional dependency
between a structured input space and a structured output space. Our formulation
encompasses both Vector-valued Manifold Regularization and Co-regularized
Multi-view Learning, providing in particular a unifying framework linking these
two important learning approaches. In the case of the least-squares loss
function, we provide a closed-form solution, which is obtained by solving a
system of linear equations. In the case of Support Vector Machine (SVM)
classification, our formulation generalizes in particular both the binary
Laplacian SVM to the multi-class, multi-view settings and the multi-class
Simplex Cone SVM to the semi-supervised, multi-view settings. The solution is
obtained by solving a single quadratic optimization problem, as in standard
SVM, via the Sequential Minimal Optimization (SMO) approach. Empirical results
obtained on the task of object recognition, using several challenging datasets,
demonstrate the competitiveness of our algorithms compared with other
state-of-the-art methods.
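In the scalar-valued special case, the closed-form least-squares solution mentioned above reduces to Laplacian-regularized least squares: a single linear system in the kernel expansion coefficients. The sketch below illustrates that system on a toy semi-supervised problem; the kernel, graph weights, and regularization constants are our own assumptions, and the vector-valued, multi-view machinery of the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two Gaussian clusters; only one point in each carries a label.
X = np.vstack([rng.normal([0.0, 0.0], 0.3, size=(15, 2)),
               rng.normal([3.0, 0.0], 0.3, size=(15, 2))])
n = len(X)
y = np.zeros(n)
y[0], y[15] = 1.0, -1.0               # the two labeled points
J = np.diag((y != 0).astype(float))   # mask selecting labeled examples

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                       # Gaussian kernel (gamma = 1)
L = np.diag(K.sum(1)) - K             # unnormalized graph Laplacian
                                      # (self-loops cancel in D - W)

lam_a, lam_i = 1e-2, 1e-2             # ambient / intrinsic regularizers
alpha = np.linalg.solve(J @ K + lam_a * np.eye(n) + lam_i * L @ K, y)
f = K @ alpha                         # predictions at all 30 points
```

The Laplacian term propagates the two labels across each cluster, so unlabeled points inherit the label of their cluster from a single linear solve.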
Recent Results Regarding Affine Quantum Gravity
Recent progress in the quantization of nonrenormalizable scalar fields has
found that a suitable non-classical modification of the ground state wave
function leads to a result that eliminates term-by-term divergences that arise
in a conventional perturbation analysis. After a brief review of both the
scalar field story and the affine quantum gravity program, examination of the
procedures used in the latter surprisingly shows an analogous formulation which
already implies that affine quantum gravity is not plagued by divergences that
arise in a standard perturbation study. Additionally, guided by the projection
operator method to deal with quantum constraints, trial reproducing kernels are
introduced that satisfy the diffeomorphism constraints. Furthermore, it is
argued that the trial reproducing kernels for the diffeomorphism constraints
may satisfy the Hamiltonian constraint as well.
A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization
We present a general approach for collaborative filtering (CF) using spectral
regularization to learn linear operators from "users" to the "objects" they
rate. Recent low-rank type matrix completion approaches to CF are shown to be
special cases. Unlike existing regularization-based CF methods, however, our
approach can also incorporate side information such as attributes of the users
or the objects. We then provide novel representer theorems that we use to develop new
estimation methods. We provide learning algorithms based on low-rank
decompositions, and test them on a standard CF dataset. The experiments
indicate the advantages of generalizing the existing regularization based CF
methods to incorporate related information about users and objects. Finally, we
show that certain multi-task learning methods can also be seen as special cases
of our proposed approach.
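The nuclear norm is one spectral regularizer compatible with the low-rank view described above, and it leads to a simple iterative scheme in the soft-impute style: repeatedly refill the missing ratings from the current estimate and soft-threshold the singular values of the completed matrix. This is a hedged toy sketch, not the paper's estimators; the rank-1 ratings matrix and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy rank-1 "ratings" matrix (8 users x 6 items), ~70% observed.
users = rng.uniform(1.0, 2.0, size=8)
items = rng.uniform(1.0, 2.0, size=6)
R = np.outer(users, items)
mask = rng.random(R.shape) < 0.7      # True where a rating is observed

def soft_impute(R, mask, lam=0.1, n_iter=200):
    """Refill missing entries from the current estimate, then
    soft-threshold the singular values (nuclear-norm shrinkage)."""
    Z = np.where(mask, R, 0.0)
    for _ in range(n_iter):
        filled = np.where(mask, R, Z)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Z = (U * np.maximum(s - lam, 0.0)) @ Vt
    return Z

Z = soft_impute(R, mask)              # completed ratings estimate
```

Side information about users or objects, as in the abstract, would enter through the choice of operator space rather than this plain matrix, which the sketch does not attempt to model.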