The representer theorem for Hilbert spaces: a necessary and sufficient condition
A family of regularization functionals is said to admit a linear representer
theorem if every member of the family admits minimizers that lie in a fixed
finite-dimensional subspace. A recent characterization states that a general
class of regularization functionals with differentiable regularizer admits a
linear representer theorem if and only if the regularization term is a
non-decreasing function of the norm. In this report, we improve on that
result by replacing the differentiability assumption with lower semi-continuity
and by deriving a proof that is independent of the dimensionality of the space.
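As an illustration of what a linear representer theorem buys in practice (a sketch, not part of the abstract above): in kernel ridge regression the minimizer of the regularized empirical risk lies in the span of the kernel sections at the data points, so the infinite-dimensional problem over the RKHS collapses to an n-by-n linear system. All function names and parameters below are illustrative.

```python
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    # Gram matrix of the Gaussian RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=0.1, gamma=1.0):
    # By the representer theorem, the minimizer of
    #   (1/n) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_H^2
    # has the form f = sum_i alpha_i k(x_i, .), so the search over the
    # (typically infinite-dimensional) RKHS reduces to this n x n system.
    n = len(y)
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return alpha

def kernel_ridge_predict(X_train, alpha, X_test, gamma=1.0):
    # Evaluate f(x) = sum_i alpha_i k(x_i, x) at the test points.
    return gaussian_kernel(X_test, X_train, gamma) @ alpha
```

The regularizer `lam * ||f||_H^2` is a non-decreasing function of the norm, which is exactly the condition the characterization above identifies as necessary and sufficient.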
A representer theorem for deep kernel learning
In this paper we provide a finite-sample and an infinite-sample representer
theorem for the concatenation of (linear combinations of) kernel functions of
reproducing kernel Hilbert spaces. These results serve as a mathematical
foundation for the analysis of machine learning algorithms based on
compositions of functions. As a direct consequence in the finite-sample case,
the corresponding infinite-dimensional minimization problems can be recast into
(nonlinear) finite-dimensional minimization problems, which can be tackled with
nonlinear optimization algorithms. Moreover, we show how concatenated machine
learning problems can be reformulated as neural networks and how our
representer theorem applies to a broad class of state-of-the-art deep learning
methods.
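A minimal sketch of the finite-sample reduction described above (illustrative only; names, kernels, and the alternating fit are assumptions, not the paper's construction): a two-layer concatenation of kernel expansions is parameterized entirely by finite coefficient arrays, and with the inner coefficients held fixed the outer layer is again an ordinary kernel ridge problem.

```python
import numpy as np

def rbf(X, Z, gamma=1.0):
    # Gram matrix of the Gaussian kernel between row sets X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Two-layer concatenated kernel model on training data X (n x d):
#   inner layer:  h(x) = K1(x, X) @ A       (maps to R^m, coefficients A: n x m)
#   outer layer:  f(x) = K2(h(x), h(X)) @ b (coefficients b: n)
# The finite-sample representer theorem reduces the search over
# compositions of RKHS functions to the finite arrays (A, b), i.e. a
# (nonlinear) finite-dimensional minimization problem.

def model(x_new, X, A, b, g1=1.0, g2=1.0):
    H = rbf(X, X, g1) @ A            # inner-layer images of training points
    h_new = rbf(x_new, X, g1) @ A    # inner-layer images of the query points
    return rbf(h_new, H, g2) @ b

def fit_outer(y, X, A, lam=1e-3, g1=1.0, g2=1.0):
    # With the inner coefficients A fixed, fitting b is kernel ridge
    # regression on the transformed points H = h(X).
    H = rbf(X, X, g1) @ A
    K2 = rbf(H, H, g2)
    n = len(y)
    return np.linalg.solve(K2 + lam * n * np.eye(n), y)
```

Read as a computation graph, `model` is a small network whose "layers" are kernel expansions, which is the sense in which such concatenated problems can be reformulated as neural networks; optimizing A as well makes the finite-dimensional problem nonlinear.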
- …