Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks
Markov jump processes and continuous time Bayesian networks are important
classes of continuous time dynamical systems. In this paper, we tackle the
problem of inferring unobserved paths in these models by introducing a fast
auxiliary variable Gibbs sampler. Our approach is based on the idea of
uniformization, and sets up a Markov chain over paths by sampling a finite set
of virtual jump times and then running a standard hidden Markov model forward
filtering-backward sampling algorithm over states at the set of extant and
virtual jump times. We demonstrate significant computational benefits over a
state-of-the-art Gibbs sampler on a number of continuous time Bayesian
networks.
Fast MCMC sampling for Markov jump processes and extensions
Markov jump processes (or continuous-time Markov chains) are a simple and
important class of continuous-time dynamical systems. In this paper, we tackle
the problem of simulating from the posterior distribution over paths in these
models, given partial and noisy observations. Our approach is an auxiliary
variable Gibbs sampler, and is based on the idea of uniformization. This sets
up a Markov chain over paths by alternately sampling a finite set of virtual
jump times given the current path and then sampling a new path given the set of
extant and virtual jump times using a standard hidden Markov model forward
filtering-backward sampling algorithm. Our method is exact and does not involve
approximations like time-discretization. We demonstrate how our sampler extends
naturally to MJP-based models like Markov-modulated Poisson processes and
continuous-time Bayesian networks, and show significant computational benefits
over state-of-the-art MCMC samplers for these models.

Comment: Accepted at the Journal of Machine Learning Research (JMLR).