Lipschitz Optimisation for Lipschitz Interpolation
Techniques known as Nonlinear Set Membership prediction, Kinky Inference or
Lipschitz Interpolation are fast and numerically robust approaches to
nonparametric machine learning that have been proposed to be utilised in the
context of system identification and learning-based control. They utilise
presupposed Lipschitz properties in order to compute inferences over unobserved
function values. Unfortunately, most of these approaches rely on exact
knowledge about the input space metric as well as about the Lipschitz constant.
Furthermore, existing techniques to estimate the Lipschitz constants from the
data are not robust to noise or seem to be ad-hoc and typically are decoupled
from the ultimate learning and prediction task. To overcome these limitations,
we propose an approach for optimising parameters of the presupposed metrics by
minimising validation set prediction errors. To avoid poor performance due to
local minima, we propose to utilise Lipschitz properties of the optimisation
objective to ensure global optimisation success. The resulting approach is a
new flexible method for nonparametric black-box learning. We provide
experimental evidence of the competitiveness of our approach on artificial as
well as on real data.
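The interpolation rule that Nonlinear Set Membership prediction, Kinky Inference and Lipschitz Interpolation share can be sketched as follows: the presupposed Lipschitz constant yields an upper and a lower bound on the unobserved function value at a query point, and the prediction is their midpoint. This is a minimal sketch assuming a Euclidean input-space metric and a known constant L; the function name and toy data are illustrative, not from the paper.

```python
import math

def lipschitz_interpolate(X, y, L, x):
    """Midpoint predictor for a function assumed L-Lipschitz (a sketch).

    Each sample (x_i, y_i) induces a ceiling y_i + L*d(x, x_i) and a
    floor y_i - L*d(x, x_i) on the value at x; the prediction is the
    midpoint of the tightest ceiling and the tightest floor.
    """
    dists = [math.dist(x, xi) for xi in X]
    upper = min(yi + L * d for yi, d in zip(y, dists))
    lower = max(yi - L * d for yi, d in zip(y, dists))
    return 0.5 * (upper + lower)

# Toy example: samples of f(x) = |x|, which is 1-Lipschitz.
X = [(-1.0,), (0.0,), (1.0,)]
y = [1.0, 0.0, 1.0]
print(lipschitz_interpolate(X, y, L=1.0, x=(0.5,)))  # recovers 0.5
```

Because the bounds tighten as L shrinks and loosen as it grows, predictions are sensitive to the assumed constant, which is why the paper optimises the metric parameters against validation-set error rather than fixing them a priori.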
Nonparametric regression analysis
Nonparametric regression uses nonparametric and flexible methods to analyze complex data with unknown regression relationships by imposing minimal assumptions on the regression function. This report reviews the theory and applications of nonparametric regression methods, with an emphasis on kernel regression, smoothing splines and Gaussian process regression. Two datasets are analyzed to demonstrate and compare the three nonparametric regression models in R.
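The first of the three reviewed methods, kernel regression, estimates the regression function as a locally weighted average of the responses, with weights that decay with distance from the query point. A minimal Nadaraya-Watson sketch (in Python rather than the report's R; the Gaussian kernel, bandwidth, and toy data are assumptions for illustration):

```python
import math

def gaussian_kernel(u, h):
    """Gaussian kernel weight with bandwidth h."""
    return math.exp(-0.5 * (u / h) ** 2)

def nadaraya_watson(X, y, x, h=0.5):
    """Kernel regression estimate at x: a weighted average of the y_i,
    each weighted by its input's kernel distance to the query point."""
    w = [gaussian_kernel(x - xi, h) for xi in X]
    return sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# Noisy samples of a roughly linear trend.
X = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [0.1, 0.4, 1.1, 1.4, 2.0]
print(nadaraya_watson(X, y, x=1.0))  # smoothed value near 1.0
```

The bandwidth h plays the role of the smoothing parameter: small h tracks the data closely, large h averages over a wider neighbourhood, the same bias-variance trade-off the spline and GP methods control via their own hyperparameters.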
String and Membrane Gaussian Processes
In this paper we introduce a novel framework for making exact nonparametric
Bayesian inference on latent functions, that is particularly suitable for Big
Data tasks. Firstly, we introduce a class of stochastic processes we refer to
as string Gaussian processes (string GPs), which are not to be mistaken for
Gaussian processes operating on text. We construct string GPs so that their
finite-dimensional marginals exhibit suitable local conditional independence
structures, which allow for scalable, distributed, and flexible nonparametric
Bayesian inference, without resorting to approximations, and while ensuring
some mild global regularity constraints. Furthermore, string GP priors
naturally cope with heterogeneous input data, and the gradient of the learned
latent function is readily available for explanatory analysis. Secondly, we
provide some theoretical results relating our approach to the standard GP
paradigm. In particular, we prove that some string GPs are Gaussian processes,
which provides a complementary global perspective on our framework. Finally, we
derive a scalable and distributed MCMC scheme for supervised learning tasks
under string GP priors. The proposed MCMC scheme has computational time
complexity and memory requirement , where
is the data size and the dimension of the input space. We illustrate the
efficacy of the proposed approach on several synthetic and real-world datasets,
including a dataset with millions of input points and attributes.
Comment: To appear in the Journal of Machine Learning Research (JMLR), Volume 1
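For contrast with the scalable scheme above, exact inference in the standard GP paradigm the paper relates to requires solving a dense linear system over all N data points, an O(N^3) operation, which is the cost that motivates constructions like string GPs. A minimal sketch of the standard GP posterior mean, with an assumed RBF kernel, a naive O(n^3) solver, and illustrative toy data (none of this is the paper's method):

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential (RBF) kernel, an assumed covariance choice."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting: the O(n^3) dense
    solve that dominates exact GP inference on n points."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior_mean(X, y, xq, noise=1e-6):
    """Standard GP regression mean: k(xq, X) (K + noise*I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    alpha = solve(K, y)
    return sum(rbf(xq, xi) * ai for xi, ai in zip(X, alpha))

X = [0.0, 1.0, 2.0]
y = [0.0, 1.0, 0.0]
print(gp_posterior_mean(X, y, 1.0))  # ~1.0 at an observed input
```

Because every prediction couples all N observations through the dense kernel matrix, this baseline cannot exploit the local conditional independence structure that makes the string GP marginals amenable to distributed inference.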