3 research outputs found
Greedy regularized kernel interpolation
Kernel based regularized interpolation is a well known technique to
approximate a continuous multivariate function using a set of scattered data
points and the corresponding function evaluations, or data values. This method
has some advantages over exact interpolation: one can obtain the same
approximation order while solving a better conditioned linear system. The
method is also well suited to noisy data values, for which exact interpolation
is not meaningful. Moreover, it allows more flexibility in the choice of
kernel, since approximation problems can also be solved for kernels that are
not strictly positive definite. In this paper we discuss a greedy algorithm to compute a
sparse approximation of the kernel regularized interpolant. This sparsity is a
desirable property when the approximant is used as a surrogate of an expensive
function, since the resulting model is fast to evaluate. Moreover, we derive
convergence results for the approximation scheme, and we prove that a certain
greedy selection rule produces asymptotically quasi-optimal error rates.
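
As an illustration of the kind of scheme described above, the following is a minimal Python sketch of greedy regularized kernel interpolation, assuming a Gaussian kernel and a residual-based greedy selection rule; the helper names and the specific selection criterion are illustrative, not the paper's exact algorithm.

import numpy as np

def gaussian_kernel(X, Y, scale=0.5):
    # k(x, y) = exp(-||x - y||^2 / scale^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / scale ** 2)

def greedy_regularized_interpolant(X, y, lam=1e-3, n_centers=20, scale=0.5):
    # Greedily select a sparse set of centers and fit the regularized interpolant on them.
    selected = [int(np.argmax(np.abs(y)))]            # start from the largest data value
    for _ in range(n_centers - 1):
        Xc = X[selected]
        K = gaussian_kernel(Xc, Xc, scale)
        coef = np.linalg.solve(K + lam * np.eye(len(selected)), y[selected])
        residual = y - gaussian_kernel(X, Xc, scale) @ coef
        residual[selected] = 0.0                      # never reselect a chosen point
        selected.append(int(np.argmax(np.abs(residual))))
    Xc = X[selected]
    K = gaussian_kernel(Xc, Xc, scale)
    coef = np.linalg.solve(K + lam * np.eye(len(selected)), y[selected])
    return Xc, coef                                   # sparse surrogate: s(x) = sum_j coef_j * k(x, x_j)

Because only n_centers kernel translates enter the final expansion, evaluating the surrogate costs n_centers kernel evaluations per query point, which is what makes the sparse model fast to evaluate.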
Deterministic error bounds for kernel-based learning techniques under bounded noise
We consider the problem of reconstructing a function from a finite set of
noise-corrupted samples. Two kernel algorithms are analyzed, namely kernel
ridge regression and ε-support vector regression. By assuming the
ground-truth function belongs to the reproducing kernel Hilbert space of the
chosen kernel, and the measurement noise affecting the dataset is bounded, we
adopt an approximation theory viewpoint to establish deterministic,
finite-sample error bounds for the two models. Finally, we discuss their
connection with Gaussian processes and provide two numerical examples. In
establishing our inequalities, we hope to help bring the fields of
non-parametric kernel learning and system identification for robust control
closer to each other.
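
For concreteness, here is a minimal kernel ridge regression sketch in Python, one of the two estimators analyzed in the abstract above; the Gaussian kernel, the regularization convention, and the default parameters are assumptions for illustration, and the ε-support vector regression counterpart is omitted.

import numpy as np

def gaussian_kernel(X, Y, scale=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / scale ** 2)

def krr_fit(X, y, lam=1e-2, scale=1.0):
    # One common convention: solve (K + lam * n * I) alpha = y.
    n = len(X)
    K = gaussian_kernel(X, X, scale)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krr_predict(Xnew, X, alpha, scale=1.0):
    # Prediction f(x) = sum_i alpha_i * k(x, x_i).
    return gaussian_kernel(Xnew, X, scale) @ alpha

The deterministic bounds discussed in the abstract concern the error of this kind of estimator when the samples y are corrupted by bounded (rather than stochastic) noise.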
Interpolation and Learning with Scale Dependent Kernels
We study the learning properties of nonparametric ridge-less least squares.
In particular, we consider the common case of estimators defined by scale
dependent kernels, and focus on the role of the scale. These estimators
interpolate the data and the scale can be shown to control their stability
through the condition number. Our analysis shows that there are different
regimes depending on the interplay between the sample size, the data
dimension, and the smoothness of the problem. Indeed, when the sample size is
less than exponential in the data dimension, the scale can be chosen so that
the learning error decreases. As the sample size becomes larger, the overall
error stops decreasing, but, interestingly, the scale can still be chosen so
that the variance due to noise remains bounded. Our analysis combines probabilistic
results with a number of analytic techniques from interpolation theory.
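
A minimal sketch of the setting in Python, assuming a Gaussian kernel and synthetic data (the specific kernel, scales, and data are illustrative, not the paper's experiments): the ridgeless estimator interpolates the data by solving the kernel system exactly, and the scale drives the condition number of the kernel matrix, and hence the stability of the interpolant.

import numpy as np

def gaussian_kernel(X, Y, scale):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * scale ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))                                # scattered inputs in [0, 1]^3
y = np.sin(X.sum(axis=1)) + 0.1 * rng.standard_normal(50)   # noisy samples

for scale in (0.1, 0.5, 2.0):
    K = gaussian_kernel(X, X, scale)
    alpha = np.linalg.solve(K, y)        # ridgeless: exact interpolation of the data
    print(scale, np.linalg.cond(K))      # larger scales give worse conditioning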