A multi-level algorithm for the solution of moment problems
We study numerical methods for the solution of general linear moment
problems, where the solution belongs to a family of nested subspaces of a
Hilbert space. Multi-level algorithms, based on the conjugate gradient method
and the Landweber--Richardson method, are proposed that determine the "optimal"
reconstruction level a posteriori from quantities that arise during the
numerical calculations. As an important example we discuss the reconstruction
of band-limited signals from irregularly spaced noisy samples, when the actual
bandwidth of the signal is not available. Numerical examples show the
usefulness of the proposed algorithms.
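As a rough illustration of the underlying idea (not the paper's multi-level scheme), the sketch below runs a plain Landweber iteration for a noisy linear problem A x = y and picks the stopping index a posteriori via the discrepancy principle; the operator, noise level, and step size are illustrative assumptions.

```python
# A minimal sketch: Landweber iteration with a posteriori stopping by the
# discrepancy principle.  A, delta, and tau are illustrative assumptions.
import numpy as np

def landweber(A, y, delta, tau=None, max_iter=10_000):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for k in range(max_iter):
        residual = y - A @ x
        if np.linalg.norm(residual) <= 2.0 * delta:   # discrepancy principle
            break
        x = x + tau * A.T @ residual
    return x, k

# Illustrative use: reconstruct a smooth vector from noisy linear moments.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.sin(np.linspace(0, np.pi, 20))
delta = 0.05
y = A @ x_true + delta * rng.standard_normal(50)
x_hat, stop_index = landweber(A, y, delta)
```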
Reproducing kernel Hilbert spaces and variable metric algorithms in PDE constrained shape optimisation
In this paper we investigate and compare different gradient algorithms
designed for the domain expression of the shape derivative. Our main focus is
to examine the usefulness of reproducing kernel Hilbert spaces for PDE
constrained shape optimisation problems. We show that radial kernels provide
convenient formulas for the shape gradient that can be efficiently used in
numerical simulations. The shape gradients associated with radial kernels
depend on a so-called smoothing parameter that allows the smoothness of the
shape to be adjusted during the optimisation process. Moreover, this smoothing
parameter can be used to modify the movement of the shape. The theoretical
findings are verified in a number of numerical experiments.
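The sketch below illustrates, under simplifying assumptions, how a radial (Gaussian) kernel can act on a discretised shape gradient, with the kernel width playing the role of the smoothing parameter; the node set, kernel choice, and 1-D setting are illustrative and not the paper's formulas.

```python
# A minimal sketch: if the shape derivative acts on a velocity field W as
# dJ[W] = sum_i g_i W(x_i), its Riesz representative in the RKHS is
# V = sum_i g_i k(., x_i); at the nodes this is the kernel matrix times g.
import numpy as np

def gaussian_kernel(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rkhs_shape_gradient(nodes, dJ_nodal, sigma):
    """Smoothed descent direction at the nodes; sigma controls smoothness."""
    K = gaussian_kernel(nodes, nodes, sigma)
    return K @ dJ_nodal

# Illustrative use: smooth a noisy nodal gradient on a 1-D boundary mesh.
rng = np.random.default_rng(1)
nodes = np.linspace(0.0, 1.0, 100)[:, None]
raw_gradient = np.sin(4 * np.pi * nodes[:, 0]) + 0.3 * rng.standard_normal(100)
smoothed = rkhs_shape_gradient(nodes, raw_gradient, sigma=0.1)
```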
Early stopping and non-parametric regression: An optimal data-dependent stopping rule
The strategy of early stopping is a regularization technique based on
choosing a stopping time for an iterative algorithm. Focusing on non-parametric
regression in a reproducing kernel Hilbert space, we analyze the early stopping
strategy for a form of gradient-descent applied to the least-squares loss
function. We propose a data-dependent stopping rule that does not involve
hold-out or cross-validation data, and we prove upper bounds on the squared
error of the resulting function estimate, measured in either the $L^2(\mathbb{P})$
or the empirical $L^2(\mathbb{P}_n)$ norm. These upper bounds lead to minimax-optimal rates for various
kernel classes, including Sobolev smoothness classes and other forms of
reproducing kernel Hilbert spaces. We show through simulation that our stopping
rule compares favorably to two other stopping rules, one based on hold-out data
and the other based on Stein's unbiased risk estimate. We also establish a
tight connection between our early stopping strategy and the solution path of a
kernel ridge regression estimator.
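The sketch below illustrates the general setup, gradient descent on the least-squares loss in an RKHS, with a deliberately simple residual-based criterion standing in for the paper's data-dependent stopping rule; the kernel, step size, and noise proxy sigma_hat are assumptions.

```python
# A minimal sketch (not the paper's stopping rule): functional gradient
# descent on the least-squares loss in an RKHS, stopped when the training
# residual falls below an assumed noise-level proxy.
import numpy as np

def kernel_gd_path(K, y, step, sigma_hat, max_iter=500):
    n = len(y)
    alpha = np.zeros(n)                   # f_t(.) = sum_i alpha_i k(., x_i)
    for t in range(max_iter):
        residual = y - K @ alpha
        if np.sqrt(np.mean(residual ** 2)) <= sigma_hat:   # illustrative rule
            break
        alpha = alpha + (step / n) * residual               # gradient step
    return alpha, t

# Illustrative use with a Gaussian kernel on synthetic 1-D data.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(80)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))
alpha, t_stop = kernel_gd_path(K, y, step=1.0, sigma_hat=0.2)
```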
Solving Support Vector Machines in Reproducing Kernel Banach Spaces with Positive Definite Functions
In this paper we solve support vector machines in reproducing kernel Banach
spaces with reproducing kernels defined on nonsymmetric domains, rather than by
the traditional methods in reproducing kernel Hilbert spaces. Using the
orthogonality of semi-inner-products, we can obtain the explicit
representations of the dual (normalized-duality-mapping) elements of support
vector machine solutions. In addition, we can introduce the reproduction
property in a generalized native space by Fourier transform techniques such
that it becomes a reproducing kernel Banach space, which can even be embedded
into Sobolev spaces, and its reproducing kernel is set up by the related
positive definite function. The representations of the optimal solutions of
support vector machines (regularized empirical risks) in these reproducing
kernel Banach spaces are formulated explicitly in terms of positive definite
functions, and their finitely many coefficients can be computed by fixed-point
iteration. We also give some typical examples of reproducing kernel Banach
spaces induced by Mat\'ern functions (Sobolev splines) so that their support
vector machine solutions can be computed as readily as with the classical
algorithms. Moreover, each of their reproducing bases includes information from
multiple training data points. The concept of reproducing kernel Banach spaces
offers us a new numerical tool for solving support vector machines.
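For concreteness, the snippet below evaluates one member of the Matérn (Sobolev spline) family of positive definite functions mentioned above, the nu = 3/2 case; the RKBS construction and the fixed-point solver themselves are not reproduced here.

```python
# A minimal sketch: the Matern kernel with smoothness nu = 3/2, a standard
# closed form; rho is an assumed length-scale parameter.
import numpy as np

def matern32(X, Y, rho=1.0):
    """Matern (nu = 3/2) positive definite function evaluated on point sets."""
    r = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    s = np.sqrt(3.0) * r / rho
    return (1.0 + s) * np.exp(-s)

# The resulting Gram matrix is symmetric positive definite for distinct points.
X = np.random.default_rng(0).standard_normal((5, 2))
G = matern32(X, X)
```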
A stochastic behavior analysis of stochastic restricted-gradient descent algorithm in reproducing kernel Hilbert spaces
This paper presents a stochastic behavior analysis of a kernel-based
stochastic restricted-gradient descent method. The restricted gradient gives a
steepest ascent direction within the so-called dictionary subspace. The
analysis provides the transient and steady-state performance in terms of the
mean squared error criterion. It also includes stability conditions in the mean and
mean-square sense. The present study is based on the analysis of the kernel
normalized least mean square (KNLMS) algorithm initially proposed by Chen et
al. Simulation results validate the analysis.
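The following is a hedged sketch of a KNLMS-style update with a coherence-based dictionary, in the spirit of the algorithm analysed above; the Gaussian kernel, step size, and thresholds are illustrative choices rather than the paper's settings.

```python
# A minimal sketch of an online kernel normalized LMS update with a
# coherence criterion for growing the dictionary (illustrative parameters).
import numpy as np

def gauss_k(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def knlms(X, d, eta=0.5, eps=1e-3, mu0=0.9):
    dictionary = [X[0]]
    h = np.zeros(1)
    for x_n, d_n in zip(X[1:], d[1:]):
        k_n = np.array([gauss_k(x_n, c) for c in dictionary])
        if k_n.max() <= mu0:                  # novel enough: grow the dictionary
            dictionary.append(x_n)
            k_n = np.append(k_n, gauss_k(x_n, x_n))   # k(x_n, x_n) = 1
            h = np.append(h, 0.0)
        err = d_n - k_n @ h                           # a priori estimation error
        h = h + eta * err * k_n / (eps + k_n @ k_n)   # normalized LMS step
    return dictionary, h

# Illustrative use: learn a noisy sinc function online.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 1))
d = np.sinc(3 * X[:, 0]) + 0.05 * rng.standard_normal(200)
D, h = knlms(X, d)
```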
Convergence rates of Kernel Conjugate Gradient for random design regression
We prove statistical rates of convergence for kernel-based least squares
regression from i.i.d. data using a conjugate gradient algorithm, where
regularization against overfitting is obtained by early stopping. This method
is related to Kernel Partial Least Squares, a regression method that combines
supervised dimensionality reduction with least squares projection. Following
the setting introduced in earlier related literature, we study so-called "fast
convergence rates" depending on the regularity of the target regression
function (measured by a source condition in terms of the kernel integral
operator) and on the effective dimensionality of the data mapped into the
kernel space. We obtain upper bounds, essentially matching known minimax lower
bounds, for the $L^2$ (prediction) norm as well as for the stronger
Hilbert norm, if the true regression function belongs to the reproducing kernel
Hilbert space. If the latter assumption is not fulfilled, we obtain similar
convergence rates for appropriate norms, provided additional unlabeled data are
available.
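As a rough illustration (not the paper's exact procedure), the sketch below applies conjugate gradient to the kernel system K alpha = y and uses the number of CG iterations as the regularization parameter, i.e. early stopping.

```python
# A minimal sketch: conjugate gradient on the (positive definite) kernel
# system K alpha = y, with the iteration count acting as regularization.
import numpy as np

def kernel_cg(K, y, n_iter):
    alpha = np.zeros_like(y)
    r = y.copy()                     # residual y - K alpha
    p = r.copy()
    for _ in range(n_iter):
        Kp = K @ p
        a = (r @ r) / (p @ Kp)
        alpha = alpha + a * p
        r_new = r - a * Kp
        b = (r_new @ r_new) / (r @ r)
        p = r_new + b * p
        r = r_new
    return alpha

# Illustrative use: a few CG steps already fit the smooth part of the signal.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
y = np.cos(2 * np.pi * x) + 0.1 * rng.standard_normal(100)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))
f_hat = K @ kernel_cg(K, y, n_iter=8)
```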
Learning with SGD and Random Features
Sketching and stochastic gradient methods are arguably the most common
techniques to derive efficient large scale learning algorithms. In this paper,
we investigate their application in the context of nonparametric statistical
learning. More precisely, we study the estimator defined by stochastic gradient
with mini-batches and random features. The latter can be seen as a form of
nonlinear sketching and can be used to define approximate kernel methods. The
considered estimator is not explicitly penalized/constrained and regularization
is implicit. Indeed, our study highlights how different parameters, such as the
number of features, the number of iterations, the step-size, and the mini-batch
size, control the learning properties of the solutions. We do this by deriving
optimal finite
sample bounds, under standard assumptions. The obtained results are
corroborated and illustrated by numerical experiments.
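A minimal sketch of such an estimator, under simplifying assumptions: random Fourier features approximating a Gaussian kernel, plain mini-batch SGD on the squared loss, and no explicit penalty, so that the number of features, the step-size, and the number of passes act as implicit regularizers.

```python
# A minimal sketch: mini-batch SGD on random Fourier features (illustrative
# parameters; not the paper's exact estimator or tuning).
import numpy as np

def random_fourier_features(X, W, b):
    """phi(x) = sqrt(2/M) cos(W x + b), approximating a Gaussian kernel."""
    M = W.shape[0]
    return np.sqrt(2.0 / M) * np.cos(X @ W.T + b)

def sgd_rf(X, y, n_features=300, step=0.5, batch=16, epochs=5, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_features, X.shape[1])) / sigma   # w ~ N(0, I/sigma^2)
    b = rng.uniform(0, 2 * np.pi, n_features)
    theta = np.zeros(n_features)
    n = len(y)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            Phi = random_fourier_features(X[idx], W, b)
            grad = Phi.T @ (Phi @ theta - y[idx]) / len(idx)    # mini-batch gradient
            theta -= step * grad
    return W, b, theta

# Illustrative use on synthetic data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (500, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(500)
W, b, theta = sgd_rf(X, y)
y_hat = random_fourier_features(X, W, b) @ theta
```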
Fast inference in nonlinear dynamical systems using gradient matching
Parameter inference in mechanistic models of coupled differential equations is
a topical problem. We propose a new method based on kernel ridge regression and
gradient matching, and an objective function that simultaneously encourages
goodness of fit and penalises inconsistencies with the differential equations.
Fast minimisation is achieved by exploiting the partial convexity inherent in
this function and setting up an iterative algorithm in the vein of the EM
algorithm. An evaluation of the proposed method on various benchmark data sets
suggests that it compares favourably with state-of-the-art alternatives.
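The sketch below illustrates the gradient-matching idea under strong simplifications: kernel ridge regression with a Gaussian kernel smooths the observed state and supplies its time derivative analytically, and the ODE parameter is then chosen to minimise the mismatch with an assumed model dx/dt = -theta * x; the grid search stands in for the paper's EM-style iterative scheme.

```python
# A minimal sketch of gradient matching with kernel ridge regression
# (illustrative 1-D ODE and parameters; not the paper's algorithm).
import numpy as np

def krr_fit(t, x, sigma=0.5, lam=1e-3):
    """Fit the state with Gaussian-kernel ridge regression and return the
    smoothed state together with the analytic time derivative of the fit."""
    K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))
    alpha = np.linalg.solve(K + lam * np.eye(len(t)), x)
    dK = -(t[:, None] - t[None, :]) / sigma ** 2 * K   # d/dt of the kernel rows
    return K @ alpha, dK @ alpha

# Noisy observations of dx/dt = -theta_true * x.
rng = np.random.default_rng(0)
theta_true = 1.5
t = np.linspace(0, 3, 60)
x_obs = np.exp(-theta_true * t) + 0.02 * rng.standard_normal(60)

x_smooth, dx_dt = krr_fit(t, x_obs)
# Gradient matching: choose theta minimizing || dx/dt + theta * x ||^2.
thetas = np.linspace(0.1, 3.0, 300)
mismatch = [np.sum((dx_dt + th * x_smooth) ** 2) for th in thetas]
theta_hat = thetas[int(np.argmin(mismatch))]
```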