9 research outputs found

    Mean field analysis for Continuous Time Bayesian Networks

    In this paper we investigate the use of the mean field technique to analyze Continuous Time Bayesian Networks (CTBN). CTBNs model variables evolving in continuous time, with exponentially distributed transition rates that depend on the parent variables in the graph. CTBN inference consists of computing the probability distribution of a subset of variables, conditioned on the observed values of other variables (the evidence). Computing exact results is often infeasible due to the complexity of the model. For this reason, the possibility of performing CTBN inference through the equivalent Generalized Stochastic Petri Net (GSPN) was investigated in the past. In this paper, instead, we explore the use of the mean field approximation and apply it to a well-known epidemic case study. The CTBN model is converted into both a GSPN and a mean-field-based model. The example is then analyzed with both solutions in order to evaluate the accuracy of the mean field approximation for the computation of the posterior probability of the CTBN given the evidence. A summary of the lessons learned during this preliminary attempt concludes the paper.
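    The mean field idea above can be illustrated with a minimal sketch: for an SIS-style epidemic (a hypothetical stand-in, not necessarily the paper's exact case study), the approximation replaces each neighbor's random state with its marginal probability, collapsing the joint process into a single ODE for the infected fraction.

    ```python
    # Mean-field sketch (illustrative SIS epidemic, an assumption for this
    # example): each node is infected (1) or susceptible (0); the mean field
    # approximation replaces a neighbor's state by its marginal probability,
    # giving the scalar ODE dI/dt = beta*I*(1-I) - gamma*I.
    def mean_field_sis(beta=2.0, gamma=1.0, i0=0.01, dt=0.001, t_end=20.0):
        """Euler-integrate the mean-field ODE and return I(t_end)."""
        i = i0
        for _ in range(int(t_end / dt)):
            i += dt * (beta * i * (1.0 - i) - gamma * i)
        return i
    ```

    For beta > gamma the infected fraction converges to the endemic fixed point 1 - gamma/beta; for beta < gamma the epidemic dies out, which is the qualitative behavior a mean field model is expected to reproduce.
    
    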

    Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks

    Markov jump processes and continuous time Bayesian networks are important classes of continuous time dynamical systems. In this paper, we tackle the problem of inferring unobserved paths in these models by introducing a fast auxiliary variable Gibbs sampler. Our approach is based on the idea of uniformization, and sets up a Markov chain over paths by sampling a finite set of virtual jump times and then running a standard hidden Markov model forward filtering-backward sampling algorithm over states at the set of extant and virtual jump times. We demonstrate significant computational benefits over a state-of-the-art Gibbs sampler on a number of continuous time Bayesian networks.
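    The uniformization construction underlying this sampler can be sketched in a few lines: with a rate W dominating all exit rates of the generator Q, transient probabilities become a Poisson-weighted mixture of powers of the discrete kernel B = I + Q/W. The 2-state chain below is a made-up example, not one of the paper's benchmarks.

    ```python
    import math

    # Uniformization sketch (assumed small CTMC, not the paper's models):
    # p(t) = p(0) * sum_k Pois(k; W*t) * B^k, where B = I + Q/W and
    # W >= max_i |Q[i][i]| dominates every exit rate.
    def uniformized_transient(Q, p0, t, W=None, tol=1e-12):
        n = len(Q)
        if W is None:
            W = 1.1 * max(-Q[i][i] for i in range(n))  # any dominating W works
        B = [[(1.0 if i == j else 0.0) + Q[i][j] / W for j in range(n)]
             for i in range(n)]
        v = list(p0)                  # running row vector p(0) * B^k
        out = [0.0] * n
        w = math.exp(-W * t)          # Poisson weight for k = 0
        k = 0
        while w > tol or k < W * t:   # accumulate until the tail is negligible
            for i in range(n):
                out[i] += w * v[i]
            v = [sum(v[i] * B[i][j] for i in range(n)) for j in range(n)]
            k += 1
            w *= W * t / k            # Poisson recursion Pois(k) from Pois(k-1)
        return out
    ```

    Because B is a proper stochastic matrix whenever W dominates the exit rates, the same construction lets one interpret the continuous-time path as a discrete-time chain observed at Poisson(W) event times, which is exactly the set of "extant and virtual" jumps the Gibbs sampler operates on.
    
    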

    A GSPN semantics for Continuous Time Bayesian Networks with Immediate Nodes

    In this report we present an extension to Continuous Time Bayesian Networks (CTBN) called Generalized Continuous Time Bayesian Networks (GCTBN). The formalism allows one to model not only continuous-time delayed variables (with exponentially distributed transition rates), but also non-delayed or "immediate" variables, which act as standard chance nodes in a Bayesian Network. This allows the modeling of processes having both a continuous-time temporal component and an immediate (i.e. non-delayed) component capturing the logical/probabilistic interactions among the model's variables. The usefulness of this kind of model is discussed through an example concerning the reliability of a simple component-based system. A semantic model of GCTBNs, based on the formalism of Generalized Stochastic Petri Nets (GSPN), is outlined. Its purpose is twofold: to provide a well-defined semantics for GCTBNs in terms of the underlying stochastic process, and to provide an actual means to perform inference (both prediction and smoothing) on GCTBNs. The example case study is then used to highlight the exploitation of GSPN analysis for posterior probability computation on the GCTBN model.
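    The interplay of delayed and immediate variables can be sketched as a GSPN-style simulation: timed variables jump after exponential delays, and an immediate variable is re-evaluated in zero time after every jump. The two-component reliability system below is a hypothetical stand-in for the report's case study; rates and structure are assumptions for illustration.

    ```python
    import random

    # GSPN-flavored sketch (hypothetical 2-component system): each component
    # fails and is repaired with exponential delays (timed transitions), while
    # the "immediate" variable -- system failure, the AND of both components
    # being down -- is updated instantly after every timed jump, i.e. before
    # time advances again.
    def simulate(fail_rate=1.0, repair_rate=5.0, t_end=100.0, seed=0):
        rng = random.Random(seed)
        up = [True, True]
        t, down_time = 0.0, 0.0
        system_down = False            # immediate node: both components down
        while t < t_end:
            rates = [fail_rate if u else repair_rate for u in up]
            total = sum(rates)
            dt = min(rng.expovariate(total), t_end - t)
            if system_down:
                down_time += dt        # state is constant between jumps
            t += dt
            if t >= t_end:
                break
            # choose which component jumps, proportional to its rate
            i = 0 if rng.random() < rates[0] / total else 1
            up[i] = not up[i]
            system_down = not (up[0] or up[1])   # immediate, zero-delay update
        return down_time / t_end       # estimated system unavailability
    ```

    With these (assumed) rates, each component is down a fraction fail/(fail+repair) = 1/6 of the time, so the simulated system unavailability should sit near (1/6)^2 for long runs.
    
    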

    Sampling for Approximate Inference in Continuous Time Bayesian Networks

    We first present a sampling algorithm for continuous time Bayesian networks based on importance sampling. We then extend it to continuous-time particle filtering and smoothing algorithms. The three algorithms can estimate the expectation of any function of a trajectory, conditioned on any evidence set constraining the values of subsets of the variables over subsets of the timeline. We present experimental results on their accuracies and time efficiencies, and compare them to expectation propagation.
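    The importance-sampling idea can be illustrated with likelihood weighting on a toy chain (an assumption for this sketch; the paper's algorithm handles general CTBN evidence over subsets of the timeline): sample trajectories forward from the prior, weight each by the likelihood of the evidence, and form the weighted estimate of any trajectory functional.

    ```python
    import random

    # Likelihood-weighting sketch (assumed 2-state chain with one noisy
    # end-time observation): trajectories are sampled forward, each weighted
    # by the evidence likelihood, and the posterior mean of X(T) is the
    # weighted average.
    def lw_estimate(q01=1.0, q10=1.0, T=1.0, obs=1, p_correct=0.9,
                    n=20000, seed=0):
        rng = random.Random(seed)
        num = den = 0.0
        for _ in range(n):
            x, t = 0, 0.0
            while True:                       # forward-sample one trajectory
                rate = q01 if x == 0 else q10
                t += rng.expovariate(rate)
                if t > T:
                    break
                x = 1 - x
            w = p_correct if x == obs else 1.0 - p_correct  # evidence weight
            num += w * x                      # functional of interest: X(T)
            den += w
        return num / den                      # estimate of E[X(T) | obs]
    ```

    For symmetric unit rates the prior P(X(T)=1) is (1 - e^(-2T))/2 ≈ 0.432 at T = 1, and Bayes' rule with a 0.9-accurate observation of state 1 gives a posterior of about 0.873, which the weighted estimate should recover.
    
    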

    Markov chain Monte Carlo for continuous-time discrete-state systems

    A variety of phenomena are best described using dynamical models which operate on a discrete state space and in continuous time. Examples include Markov (and semi-Markov) jump processes, continuous-time Bayesian networks, renewal processes and other point processes. These continuous-time, discrete-state models are ideal building blocks for Bayesian models in fields such as systems biology, genetics, chemistry, computing networks, human-computer interactions etc. However, a challenge to their more widespread use is the computational burden of posterior inference; this typically involves approximations like time discretization and can be computationally intensive. In this thesis, we describe a new class of Markov chain Monte Carlo methods that allow efficient computation while still being exact. The core idea is an auxiliary variable Gibbs sampler that alternately resamples a random discretization of time given the state-trajectory of the system, and then samples a new trajectory given this discretization. We introduce this idea by relating it to a classical idea called uniformization, and use it to develop algorithms that outperform the state-of-the-art for models based on the Markov jump process. We then extend the scope of these samplers to a wider class of models such as nonstationary renewal processes, and semi-Markov jump processes. By developing a more general framework beyond uniformization, we remedy various limitations of the original algorithms, allowing us to develop MCMC samplers for systems with infinite state spaces, unbounded rates, as well as systems indexed by more general continuous spaces than time.
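    The first half of the alternating Gibbs step, resampling the random time discretization given the current trajectory, can be sketched via thinning: with a uniformization rate W dominating all exit rates, virtual jumps on each constant segment of the trajectory follow a Poisson process of rate W minus that state's exit rate. The segment/rate values below are illustrative assumptions.

    ```python
    import random

    # Sketch of the auxiliary-variable step (assumed piecewise-constant
    # trajectory and a dominating uniformization rate W): given the current
    # trajectory, virtual jump times on each segment are Poisson with rate
    # W - exit_rate[state], so real plus virtual jumps form a valid random
    # discretization of time.
    def sample_virtual_jumps(segments, exit_rate, W, seed=0):
        """segments: list of (start, end, state); returns sorted virtual times."""
        rng = random.Random(seed)
        times = []
        for start, end, state in segments:
            lam = W - exit_rate[state]          # thinned rate on this segment
            t = start
            while True:
                t += rng.expovariate(lam) if lam > 0 else float("inf")
                if t >= end:
                    break
                times.append(t)
        return sorted(times)
    ```

    The second half of the step, sampling a new trajectory given this discretization, is then an ordinary discrete-time forward filtering-backward sampling pass over the combined jump times, as described for the Fast MCMC sampler above.
    
    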