On the construction of probabilistic Newton-type algorithms
It has recently been shown that many of the existing quasi-Newton algorithms
can be formulated as learning algorithms, capable of learning local models of
the cost functions. Importantly, this understanding allows us to safely start
assembling probabilistic Newton-type algorithms, applicable in situations where
we only have access to noisy observations of the cost function and its
derivatives. This is where our interest lies.
We make contributions to the use of the non-parametric and probabilistic
Gaussian process models in solving these stochastic optimisation problems.
Specifically, we present a new algorithm that combines these approximations
with recent probabilistic line search routines to deliver a probabilistic
quasi-Newton approach.
We also show that the probabilistic optimisation algorithms deliver promising
results on challenging nonlinear system identification problems where the very
nature of the problem is such that we can only access the cost function and its
derivative via noisy observations, since there are no closed-form expressions
available.
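The core ingredient described above, a Gaussian process fitted to noisy evaluations of the cost, can be sketched as follows. This is a minimal illustrative example, not the paper's algorithm: it fits a GP posterior mean to noisy 1-D cost observations and then takes a Newton step on that smoothed surrogate via finite differences. The kernel, its hyperparameters, and the toy cost function are all assumptions made for the sketch.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=3.0):
    """Squared-exponential kernel; ell and sf are illustrative choices."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(0)
f = lambda x: (x - 2.0) ** 2                   # true cost, unknown to the algorithm
X = rng.uniform(-1, 5, 30)                     # query locations
y = f(X) + 0.1 * rng.standard_normal(30)       # noisy cost observations

# GP regression: posterior mean through the noisy data
K = rbf(X, X) + 0.1**2 * np.eye(30)            # observation noise on the diagonal
alpha = np.linalg.solve(K, y)
mean = lambda xs: rbf(np.atleast_1d(xs), X) @ alpha

# Newton step on the GP surrogate instead of the raw noisy cost
x, h = 0.0, 1e-2
g = (mean(x + h) - mean(x - h)) / (2 * h)              # surrogate gradient
H = (mean(x + h) - 2 * mean(x) + mean(x - h)) / h**2   # surrogate curvature
x_new = x - float(g / H)                               # lands near the minimum at 2
```

Differencing the GP mean rather than the raw observations is what makes the step usable: the raw noisy cost would give a useless curvature estimate.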
Online semi-parametric learning for inverse dynamics modeling
This paper presents a semi-parametric algorithm for online learning of a
robot inverse dynamics model. It combines the strengths of parametric and
non-parametric modeling. The former exploits the rigid-body dynamics equation,
while the latter exploits a suitable kernel function. We provide an
extensive comparison with other methods from the literature using real data
from the iCub humanoid robot. In doing so we also compare two different
techniques, namely cross validation and marginal likelihood optimization, for
estimating the hyperparameters of the kernel function.
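The semi-parametric idea above can be sketched in a few lines: fit a parametric model first, then fit a kernel machine to its residuals. Here the rigid-body-dynamics regressor is replaced by a toy linear feature map, and kernel ridge regression stands in for the non-parametric part; the data, kernel, and regularization are all assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
q = rng.uniform(-2, 2, 80)                            # joint positions (toy 1-D)
tau = 1.5 * q + 0.3 * np.sin(3 * q) + 0.05 * rng.standard_normal(80)  # torques

# Parametric part: least squares on the known regressor (linear here)
Phi = q[:, None]
theta, *_ = np.linalg.lstsq(Phi, tau, rcond=None)
resid = tau - Phi @ theta                             # what the parametric model misses

# Non-parametric part: kernel ridge regression on the residuals
def k(a, b, ell=0.5):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

lam = 1e-2                                            # regularization (illustrative)
alpha = np.linalg.solve(k(q, q) + lam * np.eye(80), resid)

def predict(qs):
    """Semi-parametric prediction: parametric term plus kernel correction."""
    qs = np.atleast_1d(qs)
    return (qs[:, None] @ theta) + k(qs, q) @ alpha
```

The split matters for online use: the parametric term generalizes outside the training data, while the kernel term only corrects it where data has been seen.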
Bayesian optimisation for likelihood-free cosmological inference
Many cosmological models have only a finite number of parameters of interest,
but a very expensive data-generating process and an intractable likelihood
function. We address the problem of performing likelihood-free Bayesian
inference from such black-box simulation-based models, under the constraint of
a very limited simulation budget (typically a few thousand). To do so, we adopt
an approach based on the likelihood of an alternative parametric model.
Conventional approaches to approximate Bayesian computation such as
likelihood-free rejection sampling are impractical for the considered problem,
due to the lack of knowledge about how the parameters affect the discrepancy
between observed and simulated data. As a response, we make use of a strategy
previously developed in the machine learning literature (Bayesian optimisation
for likelihood-free inference, BOLFI), which combines Gaussian process
regression of the discrepancy to build a surrogate surface with Bayesian
optimisation to actively acquire training data. We extend the method by
deriving an acquisition function tailored for the purpose of minimising the
expected uncertainty in the approximate posterior density, in the parametric
approach. The resulting algorithm is applied to the problems of summarising
Gaussian signals and inferring cosmological parameters from the Joint
Lightcurve Analysis supernovae data. We show that the number of required
simulations is reduced by several orders of magnitude, and that the proposed
acquisition function produces more accurate posterior approximations, as
compared to common strategies.
Comment: 16+9 pages, 12 figures. Matches PRD published version after minor
modification.
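The BOLFI-style loop described above can be sketched as follows: regress the observed-vs-simulated discrepancy with a Gaussian process, then acquire the next simulation where an acquisition function is smallest. This sketch uses a generic lower-confidence-bound acquisition, not the paper's tailored one, and a toy one-parameter discrepancy; the kernel, noise level, and budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Black-box discrepancy between data and a simulation at parameter t (toy)
discrepancy = lambda t: (t - 0.7) ** 2 + 0.05 * rng.standard_normal()

def rbf(a, b, ell=0.3):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

T = list(rng.uniform(0, 2, 5))           # initial simulation parameters
D = [discrepancy(t) for t in T]          # their observed discrepancies
grid = np.linspace(0, 2, 200)            # candidate acquisition points

for _ in range(15):                      # very limited simulation budget
    t_arr, d_arr = np.array(T), np.array(D)
    K = rbf(t_arr, t_arr) + 0.05**2 * np.eye(len(T))
    Ks = rbf(grid, t_arr)
    mu = Ks @ np.linalg.solve(K, d_arr)                          # GP mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)  # GP variance
    lcb = mu - 2.0 * np.sqrt(np.clip(var, 0, None))  # favor low, uncertain regions
    t_next = grid[np.argmin(lcb)]
    T.append(float(t_next))
    D.append(discrepancy(t_next))        # one more expensive simulation

best = T[int(np.argmin(D))]              # parameter with lowest observed discrepancy
```

Because each acquisition is chosen where the surrogate is both low and uncertain, the budget concentrates simulations near the discrepancy minimum instead of spreading them blindly.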
AReS and MaRS - Adversarial and MMD-Minimizing Regression for SDEs
Stochastic differential equations are an important modeling class in many
disciplines. Consequently, there exist many methods relying on various
discretization and numerical integration schemes. In this paper, we propose a
novel, probabilistic model for estimating the drift and diffusion given noisy
observations of the underlying stochastic system. Using state-of-the-art
adversarial and moment matching inference techniques, we avoid the
discretization schemes of classical approaches. This leads to significant
improvements in parameter accuracy and robustness given random initial guesses.
On four established benchmark systems, we compare the performance of our
algorithms to state-of-the-art solutions based on extended Kalman filtering and
Gaussian processes.
Comment: Published at the Thirty-sixth International Conference on Machine
Learning (ICML 2019).
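The moment-matching side of the abstract above rests on the maximum mean discrepancy (MMD) between observed and simulated samples. The sketch below fits the drift of a simple SDE by minimizing a biased MMD estimate over a parameter grid; note that it uses Euler-Maruyama simulation and grid search purely for illustration, whereas the paper's point is precisely to avoid such discretization schemes. The SDE, kernel bandwidth, and grid are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(theta, n=500, dt=0.01, steps=100, sigma=0.5):
    """Euler-Maruyama samples of dx = theta dt + sigma dW at time t = 1."""
    x = np.zeros(n)
    for _ in range(steps):
        x += theta * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return x

def mmd2(a, b, ell=0.5):
    """Biased squared MMD estimate with an RBF kernel."""
    def k(u, v):
        return np.exp(-0.5 * ((u[:, None] - v[None, :]) / ell) ** 2)
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

obs = simulate(0.5)                         # "observed" data; true drift is 0.5
grid = np.linspace(-1, 2, 13)               # candidate drift parameters
theta_hat = min(grid, key=lambda th: mmd2(obs, simulate(th)))
```

Matching full sample distributions via MMD, rather than a few hand-picked moments, is what lets this style of estimator stay robust to poor initial guesses.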