
    Dynamic Inference in Probabilistic Graphical Models

    Probabilistic graphical models, such as Markov random fields (MRFs), are useful for describing high-dimensional distributions in terms of local dependence structures. Probabilistic inference is a fundamental problem for graphical models, and sampling is a main approach to it. In this paper, we study probabilistic inference problems when the graphical model itself changes dynamically with time. Such dynamic inference problems arise naturally in today's applications, e.g.~multivariate time-series data analysis and practical learning procedures. We give a dynamic algorithm for sampling-based probabilistic inference in MRFs, where each dynamic update can change the underlying graph and all parameters of the MRF simultaneously, as long as the total amount of change is bounded. More precisely, suppose that the MRF has $n$ variables and polylogarithmically bounded maximum degree, and that $N(n)$ independent samples suffice for the inference, for a polynomial function $N(\cdot)$. Our algorithm dynamically maintains an answer to the inference problem using $\widetilde{O}(n N(n))$ space and $\widetilde{O}(N(n) + n)$ incremental time per update to the MRF, as long as the well-known Dobrushin-Shlosman condition is satisfied by the MRFs. Compared to the static case, which requires $\Omega(n N(n))$ time to redraw all $N(n)$ samples whenever the MRF changes, our dynamic algorithm gives an $\widetilde{\Omega}(\min\{n, N(n)\})$-factor speedup. Our approach relies on a novel dynamic sampling technique, which transforms local Markov chains (a.k.a. single-site dynamics) into dynamic sampling algorithms, and on an "algorithmic Lipschitz" condition that we establish for sampling from graphical models: when the MRF changes by a small difference, samples can be modified to reflect the new distribution at a cost proportional to the difference in the MRF.
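
    For reference, the "local Markov chains (a.k.a. single-site dynamics)" the abstract builds on can be sketched in Python for a ferromagnetic Ising MRF: one Glauber step picks a vertex uniformly and resamples its spin from the conditional distribution given its neighbors. The refresh_after_update routine below is a hypothetical illustration (names and step budget are ours) of the general idea of repairing a sample locally after a small update, not the paper's algorithm or its guarantees.

        import math
        import random

        def glauber_step(spins, graph, beta):
            # Single-site (Glauber) update for an Ising MRF with
            # inverse temperature beta and no external field: pick a
            # vertex uniformly and resample its spin conditionally.
            v = random.randrange(len(spins))
            s = sum(spins[u] for u in graph[v])  # net neighbor spin
            p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
            spins[v] = 1 if random.random() < p_plus else -1

        def refresh_after_update(spins, graph, beta, changed, steps_per_site=50):
            # Hypothetical local repair: after an update touching only
            # the vertices in `changed`, re-run single-site updates in
            # their vicinity instead of redrawing all samples from
            # scratch. This is a heuristic illustration of incremental
            # repair; the paper's dynamic sampler gives provable
            # guarantees under the Dobrushin-Shlosman condition.
            region = list(set(changed) | {u for v in changed for u in graph[v]})
            for _ in range(steps_per_site * len(region)):
                v = random.choice(region)
                s = sum(spins[u] for u in graph[v])
                p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
                spins[v] = 1 if random.random() < p_plus else -1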

    Fundamentals of Partial Rejection Sampling

    Partial Rejection Sampling is an algorithmic approach to obtaining a perfect sample from a specified distribution. The objects to be sampled are assumed to be represented by a number of random variables. In contrast to classical rejection sampling, in which all variables are resampled until a feasible solution is found, partial rejection sampling aims at greater efficiency by resampling only a subset of variables that "go wrong". Partial rejection sampling is closely related to Moser and Tardos' algorithmic version of the Lovász Local Lemma, but with the additional requirement that a specified output distribution should be met. This article provides a largely self-contained account of the basic form of the algorithm and its analysis.
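
    A minimal Python sketch of the technique, using the standard sink-free orientation example (an extremal instance, so the output is exactly uniform): each edge's orientation is a random variable, the "bad event" at a vertex is being a sink, and only the edges incident to sinks are resampled. The function name is ours; termination assumes the graph admits a sink-free orientation, i.e. every connected component contains a cycle.

        import random

        def sample_sink_free_orientation(n, edges):
            # edges: list of (u, v) pairs over vertices 0..n-1.
            # orient[i] == True means edge i is directed u -> v.
            deg = [0] * n
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            orient = [random.random() < 0.5 for _ in edges]
            while True:
                out_deg = [0] * n
                for (u, v), d in zip(edges, orient):
                    out_deg[u if d else v] += 1
                # Bad events: vertices of positive degree with no
                # outgoing edge (sinks).
                sinks = {w for w in range(n) if deg[w] > 0 and out_deg[w] == 0}
                if not sinks:
                    return orient
                # Partial rejection: resample only the variables that
                # "go wrong", i.e. every edge incident to some sink.
                for i, (u, v) in enumerate(edges):
                    if u in sinks or v in sinks:
                        orient[i] = random.random() < 0.5

    For instance, sample_sink_free_orientation(3, [(0, 1), (1, 2), (2, 0)]) returns one of the two cyclic orientations of a triangle, each with probability 1/2.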

    Perfect sampling from spatial mixing

    We introduce a new perfect sampling technique that can be applied to general Gibbs distributions and runs in linear time if the correlation decays faster than the neighborhood growth. In particular, in graphs with subexponential neighborhood growth like [Formula: see text] , our algorithm achieves linear running time as long as Gibbs sampling is rapidly mixing. As concrete applications, we obtain the currently best perfect samplers for colorings and for monomer‐dimer models in such graphs
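
    For context, the rapidly mixing Gibbs sampler that the abstract treats as a black box can be sketched in Python for the monomer-dimer model, where pi(M) is proportional to lam^|M| over matchings M; the update rule below is the standard single-edge heat-bath dynamics, not the paper's perfect sampler, which wraps such dynamics with additional machinery we do not attempt to reproduce here.

        import random

        def monomer_dimer_step(edges, in_matching, matched, lam):
            # One single-edge Gibbs update for the monomer-dimer
            # model: pi(M) proportional to lam^{|M|} over matchings M.
            i = random.randrange(len(edges))
            u, v = edges[i]
            if in_matching[i]:
                # Conditionally resample edge i: keep the dimer with
                # probability lam / (1 + lam), drop it otherwise.
                if random.random() < 1.0 / (1.0 + lam):
                    in_matching[i] = False
                    matched[u] = matched[v] = False
            elif not matched[u] and not matched[v]:
                # Edge i is addable: place a dimer with probability
                # lam / (1 + lam).
                if random.random() < lam / (1.0 + lam):
                    in_matching[i] = True
                    matched[u] = matched[v] = True
            # Otherwise an endpoint is covered by another dimer and
            # the conditional distribution forces edge i to stay out.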