Sequential Gaussian Processes for Online Learning of Nonstationary Functions
Many machine learning problems can be framed in the context of estimating
functions, and often these are time-dependent functions that are estimated in
real-time as observations arrive. Gaussian processes (GPs) are an attractive
choice for modeling real-valued nonlinear functions due to their flexibility
and uncertainty quantification. However, the typical GP regression model
suffers from several drawbacks: i) Conventional GP inference scales
cubically with respect to the number of observations; ii) updating a GP model
sequentially is not trivial; and iii) covariance kernels often enforce
stationarity constraints on the function, while GPs with non-stationary
covariance kernels are often intractable to use in practice. To overcome these
issues, we propose an online sequential Monte Carlo algorithm to fit mixtures
of GPs that capture non-stationary behavior while allowing for fast,
distributed inference. By formulating hyperparameter optimization as a
multi-armed bandit problem, we accelerate mixing for real-time inference. Our
approach empirically improves performance over state-of-the-art methods for
online GP estimation in the context of prediction for simulated non-stationary
data and hospital time series data.
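As a point of reference for drawback (i), the baseline that sequential methods improve on — naively refitting the full GP posterior each time an observation arrives — can be sketched as follows. This is a minimal numpy illustration assuming a squared-exponential kernel with fixed hyperparameters; it is not the paper's sequential Monte Carlo mixture algorithm.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential (stationary) covariance kernel."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

class OnlineGP:
    """Naive sequential GP regression: refits the full posterior on
    every new observation, so each update costs O(n^3) -- exactly the
    scaling drawback that motivates faster sequential schemes."""

    def __init__(self, noise=0.1):
        self.noise = noise
        self.x = np.empty(0)
        self.y = np.empty(0)

    def update(self, x_new, y_new):
        # Append the arriving observation; no incremental factor reuse.
        self.x = np.append(self.x, x_new)
        self.y = np.append(self.y, y_new)

    def predict(self, x_star):
        K = rbf_kernel(self.x, self.x) + self.noise**2 * np.eye(len(self.x))
        k_star = rbf_kernel(np.atleast_1d(x_star), self.x)
        alpha = np.linalg.solve(K, self.y)
        mean = k_star @ alpha
        var = (rbf_kernel(np.atleast_1d(x_star), np.atleast_1d(x_star)).diagonal()
               - np.einsum('ij,ji->i', k_star, np.linalg.solve(K, k_star.T)))
        return mean, var
```

Because the Cholesky/solve is redone from scratch at each step, the per-update cost grows as observations accumulate, which is what motivates the distributed, particle-based alternative in the abstract.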
Incremental Sparse GP Regression for Continuous-time Trajectory Estimation & Mapping
Recent work on simultaneous trajectory estimation and mapping (STEAM) for
mobile robots has found success by representing the trajectory as a Gaussian
process. Gaussian processes can represent a continuous-time trajectory,
elegantly handle asynchronous and sparse measurements, and allow the robot to
query the trajectory to recover its estimated position at any time of interest.
A major drawback of this approach is that STEAM is formulated as a batch
estimation problem. In this paper we provide the critical extensions necessary
to transform the existing batch algorithm into an extremely efficient
incremental algorithm. In particular, we are able to vastly speed up the
solution time through efficient variable reordering and incremental sparse
updates, which we believe will greatly increase the practicality of Gaussian
process methods for robot mapping and localization. Finally, we demonstrate the
approach and its advantages on both synthetic and real datasets.
Comment: 10 pages, 10 figures
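The trajectory-querying idea — recovering an estimated pose at any time of interest from sparse, asynchronous measurements — can be sketched with plain GP regression over time. This is a simplified numpy illustration assuming a squared-exponential kernel with independent output dimensions; the STEAM formulation in the paper uses a different, sparsity-inducing GP prior.

```python
import numpy as np

def time_kernel(t1, t2, ell=1.0):
    """Squared-exponential kernel over timestamps (assumed form)."""
    d = t1[:, None] - t2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def query_trajectory(t_obs, poses, t_query, noise=0.01):
    """Posterior mean of a GP trajectory at arbitrary query times.
    `poses` is (n, d); each pose dimension is regressed independently,
    so the result is the continuous-time estimate evaluated at t_query."""
    K = time_kernel(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    Ks = time_kernel(np.atleast_1d(t_query), t_obs)
    return Ks @ np.linalg.solve(K, poses)
```

The key property the abstract highlights is visible here: measurements need not be uniformly spaced in time, and the trajectory can be evaluated between them with a single kernel solve.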
NARX-based nonlinear system identification using orthogonal least squares basis hunting
An orthogonal least squares technique for basis hunting (OLS-BH) is proposed to construct sparse radial basis function (RBF) models for NARX-type nonlinear systems. Unlike most of the existing RBF or kernel modelling methods, which place the RBF or kernel centres at the training input data points and use a fixed common variance for all the regressors, the proposed OLS-BH technique tunes the RBF centre and diagonal covariance matrix of each individual regressor by minimizing the training mean square error. An efficient optimization method is adopted for this basis hunting to select regressors in an orthogonal forward selection procedure. Experimental results obtained using this OLS-BH technique demonstrate that it offers a state-of-the-art method for constructing parsimonious RBF models with excellent generalization performance.
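The orthogonal forward selection step at the core of OLS-style methods can be sketched as follows. This is a generic greedy OLS illustration in numpy over a fixed candidate dictionary — not the paper's OLS-BH, which additionally tunes each regressor's centre and diagonal covariance during the hunt.

```python
import numpy as np

def rbf_design(X, centers, widths):
    """Candidate RBF regressors: one Gaussian column per (centre, width)."""
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def ols_forward_select(P, y, n_terms):
    """Greedy orthogonal forward selection (assumes n_terms <= rank of P):
    at each step pick the candidate column with the largest error reduction,
    then orthogonalise the remaining candidates against it."""
    P = P.astype(float).copy()
    y = np.asarray(y, dtype=float)
    selected = []
    for _ in range(n_terms):
        norms = np.einsum('ij,ij->j', P, P)
        # Error-reduction ratio per candidate; deflated columns score 0.
        err = np.where(norms > 1e-12,
                       (P.T @ y) ** 2 / np.maximum(norms, 1e-12), 0.0)
        j = int(np.argmax(err))
        selected.append(j)
        w = P[:, j] / np.sqrt(norms[j])   # unit vector of chosen regressor
        P = P - np.outer(w, w @ P)        # deflate remaining candidates
    return selected
```

The deflation step is what makes the selection "orthogonal": each new regressor is scored only on the part of the data the previously chosen terms cannot explain, which is what keeps the resulting model parsimonious.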