
    Cutset Sampling for Bayesian Networks

    The paper presents a new sampling methodology for Bayesian networks that samples only a subset of variables and applies exact inference to the rest. Cutset sampling is a network-structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.
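    The loop-cutset case can be sketched on a hypothetical toy network (made up for illustration, not from the paper): a diamond A → B, A → C, {B, C} → D, where conditioning on the single cutset variable A breaks the only loop, so B and C can be summed out exactly while only A is sampled.

    ```python
    import random

    # Toy diamond network (illustrative): A -> B, A -> C, {B, C} -> D.
    # Conditioning on A cuts the only loop; {A} is a loop-cutset.
    P_A = {0: 0.6, 1: 0.4}
    P_B = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}              # P(B | A)
    P_C = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.1, 1: 0.9}}              # P(C | A)
    P_D = {(0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.4, 1: 0.6},
           (1, 0): {0: 0.3, 1: 0.7}, (1, 1): {0: 0.05, 1: 0.95}}  # P(D | B, C)

    def weight(a, d):
        """Exact sum over the non-cutset variables B, C given A=a, evidence D=d."""
        return P_A[a] * sum(P_B[a][b] * P_C[a][c] * P_D[(b, c)][d]
                            for b in (0, 1) for c in (0, 1))

    def cutset_sample(d, n=20000, seed=0):
        """Estimate P(A=1 | D=d) by sampling only the cutset variable A."""
        rng = random.Random(seed)
        w = {a: weight(a, d) for a in (0, 1)}  # with a larger cutset these weights
        z = w[0] + w[1]                        # would be recomputed per Gibbs step
        hits = 0
        for _ in range(n):
            hits += rng.random() < w[1] / z    # draw A from its exact posterior
        return hits / n
    ```

    With a single cutset variable each draw is already from the exact posterior; with a multi-variable cutset the same weights would be recomputed inside a Gibbs sweep over the cutset, conditioning on the other sampled variables.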

    Mean Field Theory for Sigmoid Belief Networks

    We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition: the classification of handwritten digits.
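    The lower bound the abstract refers to can be sketched on a hypothetical one-hidden-unit sigmoid belief net (the parameters b, w, c below are made up for illustration): a factorised q(h=1) = mu gives a mean-field lower bound on log p(v) that is maximised over mu and can be checked against the exact log-likelihood obtained by enumeration.

    ```python
    import math

    def sigm(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Hypothetical tiny net: one hidden unit h, one visible unit v, with
    # p(h=1) = sigm(b) and p(v=1 | h) = sigm(w*h + c).  Parameters are made up.
    b, w, c = 0.5, 2.0, -1.0
    v = 1  # observed evidence

    def log_p_v_given_h(h):
        p = sigm(w * h + c)
        return math.log(p if v == 1 else 1.0 - p)

    def bound(mu):
        """Mean-field lower bound on log p(v) with factorised q(h=1) = mu."""
        eps = 1e-12
        entropy = -(mu * math.log(mu + eps) + (1 - mu) * math.log(1 - mu + eps))
        expected = (mu * (math.log(sigm(b)) + log_p_v_given_h(1))
                    + (1 - mu) * (math.log(sigm(-b)) + log_p_v_given_h(0)))
        return expected + entropy

    # Maximise the bound over the single variational parameter by grid search.
    best = max(bound(m / 1000) for m in range(1, 1000))

    # Exact log-likelihood by enumerating the lone hidden unit.
    exact = math.log(sigm(b) * math.exp(log_p_v_given_h(1))
                     + sigm(-b) * math.exp(log_p_v_given_h(0)))
    ```

    With a single hidden unit the factorised q can represent the true posterior exactly, so the optimised bound is essentially tight; in a real sigmoid belief net the gap is the KL divergence between q and the intractable posterior.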

    Parallel Adaptive Collapsed Gibbs Sampling

    Rao-Blackwellisation is a technique that provably improves the performance of Gibbs sampling by summing out variables from the probabilistic graphical model (PGM). However, collapsing variables is computationally expensive, since it changes the PGM structure, introducing factors whose size depends on the Markov blanket of the collapsed variable. Collapsing several variables jointly is therefore typically intractable in arbitrary PGM structures. This thesis proposes an adaptive approach to Rao-Blackwellisation, in which additional parallel Markov chains are defined over different collapsed PGM structures, with the collapsed variables chosen based on convergence diagnostics. Adding a chain requires burning it in again, which wastes samples. To address this, new chains are initialized from a mean field approximation of the distribution that improves over time, reducing the burn-in period. Experiments on several UAI benchmarks show that this approach is more accurate than state-of-the-art inference systems such as Merlin, which has previously won the UAI inference challenge.
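    The "provably improves" claim is the standard Rao-Blackwellisation argument: averaging the exact conditional of a variable has variance no higher than averaging sampled indicators of it. A minimal sketch on a made-up two-variable joint (not one of the UAI benchmarks), estimating P(A=1) both ways from one Gibbs run:

    ```python
    import random

    # Hypothetical joint distribution over two binary variables A, B.
    joint = {(0, 0): 0.30, (0, 1): 0.10, (1, 0): 0.15, (1, 1): 0.45}

    def p_a1_given(b):
        return joint[(1, b)] / (joint[(0, b)] + joint[(1, b)])

    def p_b1_given(a):
        return joint[(a, 1)] / (joint[(a, 0)] + joint[(a, 1)])

    def gibbs(n=50000, seed=1):
        """Estimate P(A=1) two ways from one Gibbs run: a plain indicator
        average, and a Rao-Blackwellised average of the exact conditional."""
        rng = random.Random(seed)
        a = b = 0
        plain = rb = 0.0
        for _ in range(n):
            a = 1 if rng.random() < p_a1_given(b) else 0
            b = 1 if rng.random() < p_b1_given(a) else 0
            plain += a              # plain Monte Carlo estimator
            rb += p_a1_given(b)     # sums A out exactly instead of sampling it
        return plain / n, rb / n
    ```

    Both estimators converge to the true marginal P(A=1) = 0.6; the Rao-Blackwellised one averages a smoother quantity and so has lower variance, which is the property the adaptive scheme in the thesis exploits when deciding which variables to collapse.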