Stochastic process models such as stochastic differential equations (SDEs), state-space models
(SSMs), Gaussian processes (GPs) and latent force models (LFMs), provide a powerful
collection of modelling techniques to better our understanding of many physical systems. In
treating these models within the Bayesian paradigm, we further obtain a rich expression of
our uncertainty, and gain the ability to incorporate our prior beliefs. However, performing
Bayesian posterior inference is not without significant challenge. Exact likelihood calculations
can often be intractable, take an infeasibly long time to compute, or be challenging to
approximate in the presence of missing data. Therefore, designing new approaches to perform
Bayesian inference for this family of stochastic process models is of great scientific interest.
Variational inference (VI) has had great success in scaling Bayesian inference across a range of
problem domains. Historically, however, its successful application to stochastic process models
has been limited. The reason is two-fold. Firstly, mini-batch likelihood estimation techniques
often employed by VI have only previously been applicable to models of independent data.
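To make the first point concrete: for N conditionally independent observations, the log-likelihood factorises, admitting an unbiased mini-batch estimate over a subset B of the indices (a standard identity, stated here for clarity):
\[
\log p(y_{1:N} \mid z) \;=\; \sum_{n=1}^{N} \log p(y_n \mid z) \;\approx\; \frac{N}{|B|} \sum_{n \in B} \log p(y_n \mid z).
\]
No such factorisation exists when the observations are coupled through an underlying latent process, as in stochastic process models.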
Secondly, approximating distributions have often imposed unrealistic assumptions on the
posterior. Fortunately, recent advances in generative modelling have provided
the framework with which to solve these problems. Here, artificial neural networks can be
used to flexibly construct powerful density approximations, which are then amenable to fast
computation using modern GPUs. This approach is known as black-box variational inference.
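For reference, black-box VI fits the parameters \(\phi\) of such a neural approximation \(q_\phi(z)\) by maximising the evidence lower bound (ELBO), typically with Monte Carlo gradients obtained via the reparameterisation trick; this is the standard objective, stated here for concreteness:
\[
\log p(y) \;=\; \log \int p(y \mid z)\, p(z)\, \mathrm{d}z \;\ge\; \mathbb{E}_{q_\phi(z)}\!\big[\log p(y \mid z) + \log p(z) - \log q_\phi(z)\big] \;=:\; \mathcal{L}(\phi).
\]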
This thesis presents a collection of black-box variational methods for approximate inference in SDEs, SSMs, GPs and LFMs. We leverage artificial neural networks
to parametrise our approximate posterior distributions, permitting accurate inference in a
short time. We begin by presenting two methods for SDE inference. The first, inspired by
the Euler-Maruyama discretisation, approximates the discrete-time solution to a conditioned
diffusion process using recurrent neural networks. The second, which extends the first, eschews a discretisation scheme and approximates the continuous-time process directly.
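As a point of reference, the Euler-Maruyama discretisation that inspires the first method simulates an SDE dX_t = f(X_t, t) dt + g(X_t, t) dW_t on a uniform time grid. The sketch below is illustrative only (it simulates an unconditioned process, not the thesis' conditioned inference scheme):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t0, t1, n_steps, rng):
    """Simulate one path of dX_t = drift(X_t, t) dt + diffusion(X_t, t) dW_t."""
    dt = (t1 - t0) / n_steps
    xs = np.empty(n_steps + 1)
    xs[0] = x0
    t = t0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        xs[k + 1] = xs[k] + drift(xs[k], t) * dt + diffusion(xs[k], t) * dw
        t += dt
    return xs

# Example: an Ornstein-Uhlenbeck process dX_t = -0.5 X_t dt + 0.3 dW_t.
rng = np.random.default_rng(0)
path = euler_maruyama(lambda x, t: -0.5 * x, lambda x, t: 0.3,
                      x0=1.0, t0=0.0, t1=5.0, n_steps=500, rng=rng)
```

The first method described above targets the conditioned analogue of such a discrete-time process, with the transition structure parametrised by recurrent neural networks.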
Finally, we consider the use of normalising flows for inference in SSMs (including discrete-time
SDEs), GPs and LFMs. Here we design a generative architecture that permits mini-batch
optimisation, allowing approximate inference for big data.
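As a reference for the flow construction, a normalising flow evaluates densities through the change-of-variables formula: log q(x) = log p_base(f_inv(x)) + log |det J_f_inv(x)|. The sketch below uses a deliberately simple invertible affine map, and is a minimal illustration rather than the thesis' architecture:

```python
import numpy as np

class AffineFlow:
    """Minimal normalising flow: x = exp(log_scale) * z + shift, with z ~ N(0, I)."""

    def __init__(self, log_scale, shift):
        self.log_scale = np.asarray(log_scale, dtype=float)
        self.shift = np.asarray(shift, dtype=float)

    def sample(self, n, rng):
        z = rng.standard_normal((n, self.log_scale.size))
        return np.exp(self.log_scale) * z + self.shift

    def log_prob(self, x):
        z = (x - self.shift) * np.exp(-self.log_scale)        # invert the flow
        log_base = -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)
        return log_base - np.sum(self.log_scale)              # log-det Jacobian of the inverse

rng = np.random.default_rng(0)
flow = AffineFlow(log_scale=[0.1, -0.2], shift=[1.0, 0.0])
x = flow.sample(5, rng)
print(flow.log_prob(x))
```

Richer flows stack many such invertible layers with neural-network-parametrised scales and shifts, keeping both sampling and log-density evaluation tractable.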