Fast inference in nonlinear dynamical systems using gradient matching
Parameter inference in mechanistic models of
coupled differential equations is a topical problem.
We propose a new method based on kernel
ridge regression and gradient matching, and
an objective function that simultaneously encourages
goodness of fit and penalises inconsistencies
with the differential equations. Fast minimisation
is achieved by exploiting partial convexity
inherent in this function, and setting up an iterative
algorithm in the vein of the EM algorithm.
An evaluation of the proposed method on various
benchmark data suggests that it compares
favourably with state-of-the-art alternatives.
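The alternating scheme the abstract alludes to can be sketched on a toy problem. The code below is an illustrative reconstruction, not the authors' implementation: data are simulated from the hypothetical ODE dx/dt = -theta*x, a kernel ridge interpolant is fitted, and the minimisation alternates between a closed-form coefficient step (the objective is quadratic, hence convex, in the kernel coefficients for fixed theta) and a closed-form theta step, exploiting exactly the kind of partial convexity the abstract mentions.

```python
import numpy as np

# Illustrative toy problem (hypothetical ODE, not from the paper): dx/dt = -theta*x
rng = np.random.default_rng(0)
theta_true = 0.8
t = np.linspace(0.0, 5.0, 30)
y = np.exp(-theta_true * t) + 0.01 * rng.standard_normal(t.size)

ell = 0.7
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * ell**2))  # Gaussian kernel matrix
Kd = -(t[:, None] - t[None, :]) / ell**2 * K                # time derivative of the kernel

lam, ridge = 1.0, 1e-4
theta = 0.1  # initial guess
for _ in range(50):
    # alpha-step: for fixed theta the objective
    #   ||y - K a||^2 + lam * ||Kd a - (-theta) * K a||^2 + ridge * ||a||^2
    # is quadratic in the kernel coefficients a, so it has a closed-form minimiser
    A = Kd + theta * K
    alpha = np.linalg.solve(K.T @ K + lam * A.T @ A + ridge * np.eye(t.size), K.T @ y)
    # theta-step: least-squares match of interpolant slopes to the ODE, for fixed alpha
    x_hat, dx_hat = K @ alpha, Kd @ alpha
    theta = -(x_hat @ dx_hat) / (x_hat @ x_hat)
```

The two sub-problems each have closed forms here only because the toy ODE is linear in both the state and the parameter; for general mechanistic models the theta step would itself be a small nonlinear least-squares problem.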
A Reproducing Kernel Perspective of Smoothing Spline Estimators
Spline functions have a long history as smoothers of noisy time series data, and several equivalent kernel representations have been proposed in terms of the Green's function solving the related boundary value problem. In this study we make use of the reproducing kernel property of the Green's function to obtain a hierarchy of time-invariant spline kernels of different order. The reproducing kernels give a good representation of smoothing splines for medium and long length filters, with a better performance of the asymmetric weights in terms of signal passing, noise suppression and revisions. Empirical comparisons of time-invariant filters are made with the classical nonlinear ones. The former are shown to lose part of their optimal properties when we fix the length of the filter according to the noise-to-signal ratio, as done in nonparametric seasonal adjustment procedures.
Keywords: equivalent kernels, nonparametric regression, Hilbert spaces, time series filtering, spectral properties
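The basic mechanics of time-invariant kernel smoothing can be illustrated in a few lines. The snippet below is a hedged sketch: it uses a Gaussian weight sequence as a stand-in for the paper's spline-equivalent kernels (which are not reproduced here) and checks that symmetric filtering in the interior suppresses noise relative to the raw series.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 50)            # slowly varying target
y = signal + 0.3 * rng.standard_normal(n)      # noisy observations

# Time-invariant symmetric weight sequence (Gaussian stand-in for a spline-equivalent kernel)
h, half = 5.0, 15
w = np.exp(-0.5 * (np.arange(-half, half + 1) / h) ** 2)
w /= w.sum()                                   # weights sum to one

smoothed = np.convolve(y, w, mode="same")      # symmetric filtering in the interior
mse_raw = np.mean((y - signal) ** 2)
mse_smooth = np.mean((smoothed[half:-half] - signal[half:-half]) ** 2)
```

Near the boundaries the symmetric weights are unavailable, which is where the asymmetric weights discussed in the abstract come into play.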
Extension of Wirtinger's Calculus to Reproducing Kernel Hilbert Spaces and the Complex Kernel LMS
Over the last decade, kernel methods for nonlinear processing have
successfully been used in the machine learning community. The primary
mathematical tool employed in these methods is the notion of the Reproducing
Kernel Hilbert Space. However, so far, the emphasis has been on batch
techniques. It is only recently that online techniques have been considered in
the context of adaptive signal processing tasks. Moreover, these efforts have
only been focussed on real valued data sequences. To the best of our knowledge,
no adaptive kernel-based strategy has been developed, so far, for complex
valued signals. Furthermore, although the real reproducing kernels are used in
an increasing number of machine learning problems, complex kernels have not,
yet, been used, in spite of their potential interest in applications that deal
with complex signals, with communications being a typical example. In this
paper, we present a general framework to attack the problem of adaptive
filtering of complex signals, using either real reproducing kernels, taking
advantage of a technique called \textit{complexification} of real RKHSs, or
complex reproducing kernels, highlighting the use of the complex Gaussian
kernel. In order to derive gradients of operators that need to be defined on
the associated complex RKHSs, we employ the powerful tool of Wirtinger's
Calculus, which has recently attracted attention in the signal processing
community. To this end, in this paper, the notion of Wirtinger's calculus is
extended, for the first time, to include complex RKHSs, and we use it to derive
several realizations of the Complex Kernel Least-Mean-Square (CKLMS) algorithm.
Experiments verify that the CKLMS offers significant performance improvements
over several linear and nonlinear algorithms, when dealing with nonlinearities.
Comment: 15 pages (double column), preprint of article accepted in IEEE Trans. Sig. Proc.
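A bare-bones kernel LMS on complex-valued data can convey the flavour of the approach. The sketch below follows the "complexification" route mentioned in the abstract, using a real Gaussian kernel on the complex input viewed as R^2 with complex expansion coefficients; it is a generic kernel LMS illustration, not one of the paper's CKLMS realizations, and the nonlinear system being identified is a made-up example.

```python
import numpy as np

rng = np.random.default_rng(2)
N, sigma, mu = 400, 1.0, 0.5
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
# hypothetical nonlinear system to identify (not from the paper)
d = x + 0.15 * x**2 + 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

coeffs = np.zeros(N, dtype=complex)
err = np.zeros(N, dtype=complex)
for n in range(N):
    if n > 0:
        # real Gaussian kernel on C viewed as R^2 (the complexified real RKHS route)
        k = np.exp(-np.abs(x[:n] - x[n]) ** 2 / (2 * sigma**2))
        y_hat = coeffs[:n] @ k
    else:
        y_hat = 0.0
    err[n] = d[n] - y_hat
    coeffs[n] = mu * err[n]  # LMS-style functional update: f <- f + mu * e_n * k(x_n, .)

early = np.mean(np.abs(err[:50]) ** 2)   # error power before learning has taken hold
late = np.mean(np.abs(err[-50:]) ** 2)   # error power after adaptation
```

The growing dictionary of kernel centres is the usual price of KLMS-type methods; sparsification strategies (not shown) keep it bounded in practice.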
A Primer on Reproducing Kernel Hilbert Spaces
Reproducing kernel Hilbert spaces are elucidated without assuming prior
familiarity with Hilbert spaces. Compared with extant pedagogic material,
greater care is placed on motivating the definition of reproducing kernel
Hilbert spaces and explaining when and why these spaces are efficacious. The
novel viewpoint is that reproducing kernel Hilbert space theory studies
extrinsic geometry, associating with each geometric configuration a canonical
overdetermined coordinate system. This coordinate system varies continuously
with changing geometric configurations, making it well-suited for studying
problems whose solutions also vary continuously with changing geometry. This
primer can also serve as an introduction to infinite-dimensional linear algebra
because reproducing kernel Hilbert spaces have more properties in common with
Euclidean spaces than do more general Hilbert spaces.
Comment: Revised version submitted to Foundations and Trends in Signal Processing
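The defining reproducing property that the primer builds on can be verified numerically in a few lines. The sketch below constructs a finite kernel expansion f in a Gaussian RKHS and checks that the inner product of f with the kernel section k(x*, .) recovers the point evaluation f(x*); all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal(5)        # kernel centres
alpha = rng.standard_normal(5)    # expansion coefficients

def k(a, b, sigma=1.0):
    return np.exp(-(a - b) ** 2 / (2 * sigma**2))  # Gaussian kernel

def f(x):
    # an element of the RKHS: a finite kernel expansion sum_i alpha_i k(X_i, x)
    return sum(a * k(c, x) for a, c in zip(alpha, X))

x_star = 0.3
# <f, k(x_star, .)>_H = sum_i alpha_i <k(X_i, .), k(x_star, .)> = sum_i alpha_i k(X_i, x_star)
inner = sum(a * k(c, x_star) for a, c in zip(alpha, X))
```

The identity holds by construction; its force is that point evaluation is a continuous linear functional, which is what distinguishes an RKHS from a general Hilbert space.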
A multi-level algorithm for the solution of moment problems
We study numerical methods for the solution of general linear moment
problems, where the solution belongs to a family of nested subspaces of a
Hilbert space. Multi-level algorithms, based on the conjugate gradient method
and the Landweber--Richardson method, are proposed that determine the "optimal"
reconstruction level a posteriori from quantities that arise during the
numerical calculations. As an important example we discuss the reconstruction
of band-limited signals from irregularly spaced noisy samples, when the actual
bandwidth of the signal is not available. Numerical examples show the
usefulness of the proposed algorithms.
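The band-limited example lends itself to a compact sketch. The code below is a simplified single-level Landweber-type iteration with a known bandwidth, not the paper's multi-level algorithm with a posteriori level selection: it reconstructs a band-limited signal from irregularly spaced noisy samples by alternating a residual update on the sample set with projection onto the band-limited subspace.

```python
import numpy as np

rng = np.random.default_rng(4)
n, B, tau = 256, 8, 0.5
t = np.arange(n)
# band-limited test signal (frequencies 2 and 5, well inside the band |k| <= B)
f_true = np.sin(2 * np.pi * 2 * t / n) + 0.5 * np.cos(2 * np.pi * 5 * t / n)

idx = np.sort(rng.choice(n, size=60, replace=False))   # irregular sample locations
y = f_true[idx] + 0.01 * rng.standard_normal(idx.size)

def bandlimit(g):
    G = np.fft.fft(g)
    G[B + 1 : n - B] = 0.0     # zero out all frequencies with |k| > B
    return np.fft.ifft(G).real

f_rec = np.zeros(n)
for _ in range(2000):
    r = np.zeros(n)
    r[idx] = y - f_rec[idx]                # data residual, supported on the sample set
    f_rec = bandlimit(f_rec + tau * r)     # Landweber step followed by band-limiting
max_err = np.max(np.abs(f_rec - f_true))
```

In the setting of the paper the bandwidth B is not known, which is exactly what the multi-level strategy addresses by sweeping a nested family of subspaces.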
Indirect Image Registration with Large Diffeomorphic Deformations
The paper adapts the large deformation diffeomorphic metric mapping framework
for image registration to the indirect setting where a template is registered
against a target that is given through indirect noisy observations. The
registration uses diffeomorphisms that transform the template through a (group)
action. These diffeomorphisms are generated by solving a flow equation that is
defined by a velocity field with certain regularity. The theoretical analysis
includes a proof that indirect image registration has solutions (existence)
that are stable and that converge as the data error tends to zero, so it
becomes a well-defined regularization method. The paper concludes with examples
of indirect image registration in 2D tomography with very sparse and/or highly
noisy data.
Comment: 43 pages, 4 figures, 1 table; revised
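The flow equation that generates the diffeomorphisms can be illustrated in one dimension. The sketch below is a toy forward-Euler integration of a made-up smooth, boundary-vanishing velocity field on [0, 1]; it is not the LDDMM framework of the paper, but it shows the key structural point that the flow map remains invertible (strictly increasing, endpoints fixed).

```python
import numpy as np

def v(x):
    # hypothetical smooth stationary velocity field on [0, 1], vanishing at the boundary
    return 0.3 * x * (1.0 - x)

x = np.linspace(0.0, 1.0, 101)
phi = x.copy()               # phi_0 = identity
dt, steps = 0.01, 100
for _ in range(steps):
    phi = phi + dt * v(phi)  # forward Euler step of the flow equation d(phi)/dt = v(phi)

# order preserved on the grid => the discrete flow map is invertible there
monotone = bool(np.all(np.diff(phi) > 0))
```

In the paper the velocity field is time-dependent with prescribed regularity, and the resulting diffeomorphism deforms the template through a group action before the indirect observations are compared with data.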
Large-Scale Kernel Methods for Independence Testing
Representations of probability measures in reproducing kernel Hilbert spaces
provide a flexible framework for fully nonparametric hypothesis tests of
independence, which can capture any type of departure from independence,
including nonlinear associations and multivariate interactions. However, these
approaches come with an at least quadratic computational cost in the number of
observations, which can be prohibitive in many applications. Arguably, it is
exactly in such large-scale datasets that capturing any type of dependence is
of interest, so striking a favourable tradeoff between computational efficiency
and test performance for kernel independence tests would have a direct impact
on their applicability in practice. In this contribution, we provide an
extensive study of the use of large-scale kernel approximations in the context
of independence testing, contrasting block-based, Nyström and random Fourier
feature approaches. Through a variety of synthetic data experiments, it is
demonstrated that our novel large-scale methods give performance comparable
to existing methods whilst using significantly less computation time and
memory.
Comment: 29 pages, 6 figures
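The random Fourier feature route can be sketched concisely. The code below is an illustrative approximation of an HSIC-type dependence statistic (not the paper's test, which additionally calibrates a null distribution): each variable is mapped to random Fourier features approximating a Gaussian kernel, and the squared Frobenius norm of the empirical feature cross-covariance serves as the dependence measure, at cost linear in the number of observations.

```python
import numpy as np

rng = np.random.default_rng(5)
n, D, sigma = 2000, 50, 1.0
W1, b1 = rng.standard_normal(D), rng.uniform(0, 2 * np.pi, D)
W2, b2 = rng.standard_normal(D), rng.uniform(0, 2 * np.pi, D)

def features(x, W, b):
    # random Fourier features approximating a Gaussian kernel of width sigma
    return np.sqrt(2.0 / len(W)) * np.cos(np.outer(x, W) / sigma + b)

def hsic_rff(x, y):
    Zx = features(x, W1, b1)
    Zy = features(y, W2, b2)
    Zx = Zx - Zx.mean(0)            # centre the feature maps
    Zy = Zy - Zy.mean(0)
    C = Zx.T @ Zy / len(x)          # empirical feature cross-covariance (D x D)
    return np.sum(C**2)             # squared Frobenius norm, an HSIC approximation

x = rng.standard_normal(n)
y_dep = np.abs(x) + 0.1 * rng.standard_normal(n)  # nonlinear, near-zero linear correlation
y_ind = rng.standard_normal(n)
h_dep, h_ind = hsic_rff(x, y_dep), hsic_rff(x, y_ind)
```

The dependent pair is deliberately chosen so that Pearson correlation is close to zero, which is the regime where kernel-based statistics earn their keep.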
Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis
Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants and the time derivatives obtained from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
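The two-step procedure described above (smooth first, then match slopes) can be demonstrated on a small example. The sketch below uses a logistic-growth ODE as a hypothetical pathway model and a polynomial fit as a simple stand-in for the spline or Gaussian-process interpolants surveyed in the review; note that the ODE is never numerically integrated.

```python
import numpy as np

rng = np.random.default_rng(6)
theta_true = 1.5
t = np.linspace(0.0, 3.0, 40)
x_true = 1.0 / (1.0 + 9.0 * np.exp(-theta_true * t))  # logistic growth solution, x(0) = 0.1
y = x_true + 0.005 * rng.standard_normal(t.size)

# Step 1: smooth the data (polynomial fit as a simple stand-in for a spline interpolant)
coefs = np.polyfit(t, y, deg=7)
x_hat = np.polyval(coefs, t)                 # interpolant values
dx_hat = np.polyval(np.polyder(coefs), t)    # slopes of the tangents to the interpolant

# Step 2: match slopes to the ODE dx/dt = theta * x * (1 - x), with no numerical integration
g = x_hat * (1.0 - x_hat)
theta_hat = (g @ dx_hat) / (g @ g)           # closed-form least-squares estimate of theta
```

The closed form arises because this toy ODE is linear in its single parameter; the methods reviewed in the paper handle general parameterisations, and differ mainly in how the smoothing step and the mismatch metric are chosen.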