Sequential Gaussian Processes for Online Learning of Nonstationary Functions
Many machine learning problems can be framed in the context of estimating
functions, and often these are time-dependent functions that are estimated in
real-time as observations arrive. Gaussian processes (GPs) are an attractive
choice for modeling real-valued nonlinear functions due to their flexibility
and uncertainty quantification. However, the typical GP regression model
suffers from several drawbacks: i) conventional GP inference scales cubically,
O(N^3), with the number of observations; ii) updating a GP model sequentially
is not trivial; and iii) covariance kernels typically enforce stationarity
constraints on the function, while GPs with non-stationary covariance kernels
are often intractable to use in practice. To overcome these
issues, we propose an online sequential Monte Carlo algorithm to fit mixtures
of GPs that capture non-stationary behavior while allowing for fast,
distributed inference. By formulating hyperparameter optimization as a
multi-armed bandit problem, we accelerate mixing for real-time inference. Our
approach empirically improves performance over state-of-the-art methods for
online GP estimation, in prediction tasks on both simulated non-stationary
data and hospital time-series data.
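
A minimal sketch of the bandit-over-hyperparameters idea described above (not
the authors' sequential Monte Carlo algorithm): a small set of candidate
kernel length-scales acts as bandit arms, a UCB1-style rule scores each arm by
the predictive log-likelihood of incoming points, and every arm's GP is refit
online. All names and the toy signal are illustrative; exact refitting is kept
for brevity even though it incurs the O(N^3) cost the paper aims to avoid.

import numpy as np

def rbf(X1, X2, ls):
    # Squared-exponential kernel, unit amplitude, length-scale ls.
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

class OnlineGP:
    """Exact 1-D GP regression, refit on all data seen so far.
    Deliberately naive (O(N^3) per prediction) to keep the sketch short."""
    def __init__(self, ls, noise=0.1):
        self.ls, self.noise = ls, noise
        self.X, self.y = np.empty(0), np.empty(0)

    def update(self, x, y):
        self.X = np.append(self.X, x)
        self.y = np.append(self.y, y)

    def predict(self, x):
        if self.X.size == 0:
            return 0.0, 1.0 + self.noise          # prior predictive
        K = rbf(self.X, self.X, self.ls) + self.noise * np.eye(self.X.size)
        k = rbf(np.atleast_1d(float(x)), self.X, self.ls)[0]
        mean = k @ np.linalg.solve(K, self.y)
        var = 1.0 + self.noise - k @ np.linalg.solve(K, k)
        return mean, max(var, 1e-9)

# Bandit arms = candidate length-scales; reward = predictive log-likelihood
# of each incoming point, so the arm matching the current regime wins.
arms = [OnlineGP(ls) for ls in (0.1, 0.5, 2.0)]
counts, rewards = np.ones(len(arms)), np.zeros(len(arms))

xs = np.linspace(0.0, 4.0, 80)
ys = np.where(xs < 2.0, np.sin(3.0 * xs), np.sin(0.5 * xs))  # nonstationary toy data

for t, (x, y) in enumerate(zip(xs, ys), start=1):
    ucb = rewards / counts + np.sqrt(2.0 * np.log(t + 1) / counts)
    a = int(np.argmax(ucb))
    mu, var = arms[a].predict(x)
    rewards[a] += -0.5 * (np.log(2.0 * np.pi * var) + (y - mu) ** 2 / var)
    counts[a] += 1
    for gp in arms:                               # every arm observes the data
        gp.update(x, y)

print("pulls per length-scale:", dict(zip((0.1, 0.5, 2.0), counts - 1)))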
A Globally Convergent Gradient-based Bilevel Hyperparameter Optimization Method
Hyperparameter optimization in machine learning is often carried out with
naive techniques that yield only an approximate set of hyperparameters.
Although techniques such as Bayesian optimization perform an intelligent
search over a given domain of hyperparameters, they do not guarantee an
optimal solution. A major drawback of most of these approaches is that their
search domain grows exponentially with the number of hyperparameters,
increasing the computational cost and making them slow. The hyperparameter
optimization problem is
inherently a bilevel optimization task, and some studies have attempted bilevel
solution methodologies for solving this problem. However, these studies assume
a unique set of model weights that minimize the training loss, which is
generally violated by deep learning architectures. This paper discusses a
gradient-based bilevel method addressing these drawbacks for solving the
hyperparameter optimization problem. The proposed method handles continuous
hyperparameters; in our experiments we focus on the regularization
hyperparameter. The method guarantees convergence to the set of optimal
hyperparameters, a result this study proves theoretically. The idea is based on
approximating the lower-level optimal value function using Gaussian process
regression. As a result, the bilevel problem is reduced to a single level
constrained optimization task that is solved using the augmented Lagrangian
method. We have performed an extensive computational study on the MNIST and
CIFAR-10 datasets on multi-layer perceptron and LeNet architectures that
confirms the efficiency of the proposed method. A comparative study against
grid search, random search, Bayesian optimization, and the HyperBand method on
various hyperparameter problems shows that the proposed algorithm converges
with less computation and yields models that generalize better on the test
set.
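
A minimal sketch of the value-function reformulation under simplifying
assumptions (ridge regression as the lower-level problem, a scalar
regularization hyperparameter, scipy/sklearn in place of the paper's
machinery; every name below is illustrative, not the authors' code). The
lower-level optimal value v(lam) is sampled where ridge has a closed form and
approximated by GP regression; the resulting single-level constrained problem
min_{w,lam} L_val(w) s.t. L_train(w, lam) <= v_hat(lam) is then handled by a
basic augmented-Lagrangian loop.

import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ rng.normal(size=5) + 0.3 * rng.normal(size=60)
Xtr, ytr, Xva, yva = X[:40], y[:40], X[40:], y[40:]

def L_train(w, lam):   # lower-level objective: ridge training loss
    return np.mean((Xtr @ w - ytr) ** 2) + lam * (w @ w)

def L_val(w):          # upper-level objective: validation loss
    return np.mean((Xva @ w - yva) ** 2)

# Sample the lower-level optimal value function v(lam) via the ridge
# closed form, then approximate it with GP regression.
lams = np.linspace(1e-3, 1.0, 8)
v = [L_train(np.linalg.solve(Xtr.T @ Xtr / 40 + l * np.eye(5),
                             Xtr.T @ ytr / 40), l) for l in lams]
gp = GaussianProcessRegressor().fit(lams[:, None], v)

# Augmented Lagrangian (Powell-Hestenes-Rockafellar form) for the
# single-level constraint c(w, lam) = L_train(w, lam) - v_hat(lam) <= 0.
mu, rho = 0.0, 10.0
z = np.concatenate([np.zeros(5), [0.5]])       # z = (w, lam)
for _ in range(15):
    def al(z):
        w, lam = z[:5], abs(z[5])              # abs() keeps lam positive
        c = L_train(w, lam) - gp.predict([[lam]])[0]
        return L_val(w) + 0.5 * rho * max(0.0, c + mu / rho) ** 2
    z = minimize(al, z, method="Nelder-Mead", options={"maxiter": 2000}).x
    w, lam = z[:5], abs(z[5])
    c = L_train(w, lam) - gp.predict([[lam]])[0]
    mu = max(0.0, mu + rho * c)                # multiplier update

print("lambda* =", abs(z[5]), " validation loss =", L_val(z[:5]))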