In applied fields where inference speed and model flexibility are
crucial, Bayesian inference for models with a stochastic process prior,
e.g. a Gaussian process (GP), is ubiquitous. Recent literature has
demonstrated that the GP priors, or their finite realisations, that cause
the computational bottleneck can be encoded using deep generative models
such as variational autoencoders (VAEs), and that the learned generators
can then serve as drop-in replacements for the original priors during
Markov chain Monte Carlo (MCMC) inference. While this approach enables
fast and highly efficient inference, it loses information about the
stochastic process hyperparameters and, as a consequence, renders
inference over hyperparameters impossible and the learned priors
indistinct. We propose to resolve this issue and
disentangle the learned priors by conditioning the VAE on stochastic process
hyperparameters. In this way, the hyperparameters are encoded alongside GP
realisations and can be explicitly estimated at the inference stage. We believe
that the new method, termed PriorCVAE, will be a useful tool among
approximate inference approaches and has the potential for significant
impact on spatial and spatiotemporal inference in crucial real-life
applications. Code showcasing
the PriorCVAE technique can be accessed via the following link:
https://github.com/elizavetasemenova/PriorCVAE
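
To make the conditioning idea concrete, the following is a minimal,
self-contained sketch of a PriorCVAE-style model: a conditional VAE trained
on GP draws, where both the encoder and the decoder receive the kernel
lengthscale as an additional input. This is an illustrative sketch, not the
implementation from the repository above; the one-dimensional grid, RBF
kernel, lengthscale range, Gaussian likelihood, and all names
(PriorCVAESketch, sample_gp_draws) are assumptions made for the example.

    import torch
    import torch.nn as nn

    def sample_gp_draws(n, grid_size=50):
        # Draw GP realisations on a 1D grid with random RBF lengthscales
        # (kernel choice, grid, and lengthscale range are assumptions).
        x = torch.linspace(0.0, 1.0, grid_size)
        ell = torch.rand(n, 1) * 0.4 + 0.05      # hyperparameter: lengthscale
        d2 = (x[:, None] - x[None, :]) ** 2
        draws = []
        for l in ell[:, 0]:
            K = torch.exp(-0.5 * d2 / l**2) + 1e-5 * torch.eye(grid_size)
            L = torch.linalg.cholesky(K)
            draws.append(L @ torch.randn(grid_size))
        return torch.stack(draws), ell

    class PriorCVAESketch(nn.Module):
        # Conditional VAE: both networks take the hyperparameter as input,
        # so GP realisations are encoded alongside their lengthscale.
        def __init__(self, grid_size=50, z_dim=10, hidden=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(grid_size + 1, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * z_dim))
            self.dec = nn.Sequential(nn.Linear(z_dim + 1, hidden), nn.ReLU(),
                                     nn.Linear(hidden, grid_size))

        def forward(self, y, ell):
            mu, log_var = self.enc(torch.cat([y, ell], dim=-1)).chunk(2, dim=-1)
            z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterisation
            return self.dec(torch.cat([z, ell], dim=-1)), mu, log_var

    model = PriorCVAESketch()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(2000):
        y, ell = sample_gp_draws(128)
        y_hat, mu, log_var = model(y, ell)
        recon = ((y_hat - y) ** 2).sum(-1).mean()                    # reconstruction term
        kl = -0.5 * (1 + log_var - mu**2 - log_var.exp()).sum(-1).mean()  # KL term
        opt.zero_grad()
        (recon + kl).backward()
        opt.step()

After training, the decoder can act as a drop-in prior inside MCMC: the
latent z and the lengthscale ell are sampled as unknowns, and the decoder
maps them to a GP-like realisation, so the hyperparameter is estimated
explicitly alongside the field rather than being lost in the encoding.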