
    Complexity of Inference in Graphical Models

    Graphical models provide a convenient representation for a broad class of probability distributions. Due to their powerful and sophisticated modeling capabilities, such models have found numerous applications in machine learning and other areas. In this paper we consider the complexity of commonly encountered tasks involving graphical models, such as computing the mode of a posterior probability distribution (i.e., MAP estimation) and computing marginal probabilities or the partition function. It is well known that such inference problems are hard in the worst case but tractable for models with bounded treewidth. We ask whether treewidth is the only structural criterion of the underlying graph that enables tractable inference; in other words, is there some class of structures with unbounded treewidth in which inference is tractable? Subject to a combinatorial hypothesis due to Robertson, Seymour, and Thomas (1994), we show that low treewidth is indeed the only structural restriction that can ensure tractability. More precisely, we show that for every growing family of graphs indexed by treewidth, there exists a choice of potential functions such that the corresponding inference problem is intractable. Thus even for the "best case" graph structures of high treewidth, there is no polynomial-time inference algorithm. Our analysis employs various concepts from complexity theory and graph theory, with graph minors playing a prominent role.
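
    The treewidth claim can be made concrete with a toy contrast. Below is a minimal Python sketch, not taken from the paper; the chain structure, binary variables, and random potentials are illustrative assumptions. On a chain (treewidth 1), variable elimination computes the partition function in time linear in the number of variables, while brute-force summation over all joint configurations is exponential.

```python
# Toy contrast (illustrative assumptions, not the paper's construction):
# brute-force summation is O(k**n), while variable elimination on a
# chain MRF (treewidth 1) computes the partition function in O(n * k**2).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 10, 2                                       # 10 binary variables
psi = [rng.random((k, k)) for _ in range(n - 1)]   # pairwise potentials

# Brute force: sum the product of potentials over all k**n configurations.
Z_brute = sum(
    np.prod([psi[i][x[i], x[i + 1]] for i in range(n - 1)])
    for x in itertools.product(range(k), repeat=n)
)

# Variable elimination along the chain: maintain a message over one variable.
m = np.ones(k)
for i in range(n - 1):
    m = m @ psi[i]        # sum out variable i into a message over i + 1
Z_elim = m.sum()

assert np.isclose(Z_brute, Z_elim)
print(Z_elim)
```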

    Block Belief Propagation for Parameter Learning in Markov Random Fields

    Traditional learning methods for training Markov random fields require running inference over all variables to compute the likelihood gradient, so the iteration complexity of those methods scales with the size of the graphical model. In this paper, we propose block belief propagation learning (BBPL), which uses block-coordinate updates of approximate marginals to compute approximate gradients, removing the need to run inference on the entire graphical model. Thus, the iteration complexity of BBPL does not scale with the size of the graph. We prove that, despite these approximations, the method converges to the same solution as that obtained by using full inference per iteration, and we empirically demonstrate its scalability improvements over standard training methods.
    Comment: Accepted to AAAI 201
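
    To make the control flow concrete, here is a runnable toy sketch, based on my reading of the abstract rather than the authors' code: a chain MRF with per-node bias parameters, where persistent BP messages are refreshed on only one block per learning iteration before an approximate gradient step. The chain structure, fixed coupling `w`, and random target moments are all assumptions.

```python
# Block-coordinate learning sketch (my paraphrase of the abstract's idea):
# refresh BP messages on one block per iteration, then take a gradient
# step using the resulting approximate marginals. At the MLE, model
# marginals match the empirical means (exponential-family moment matching).
import numpy as np

n, w, lr = 12, 0.5, 0.2
edge = np.array([[1.0, 1.0], [1.0, np.exp(w)]])    # psi(x_i, x_{i+1})
rng = np.random.default_rng(1)
data_mean = rng.uniform(0.2, 0.8, n)               # empirical E[x_i] (assumed)

def phi(theta):                                    # node potentials, shape (n, 2)
    return np.stack([np.ones(n), np.exp(theta)], axis=1)

def refresh(f, b, p, lo, hi):
    """Recompute forward/backward messages only for indices in [lo, hi)."""
    for i in range(max(lo, 1), hi):
        f[i] = (f[i - 1] * p[i - 1]) @ edge
    for i in range(min(hi, n - 1) - 1, lo - 1, -1):
        b[i] = edge @ (b[i + 1] * p[i + 1])
    return f, b

def marginals(f, b, p):
    bel = f * b * p
    return bel[:, 1] / bel.sum(axis=1)

theta = np.zeros(n)
f, b = np.ones((n, 2)), np.ones((n, 2))
blocks = [(0, n // 2), (n // 2, n)]                # two message blocks
for t in range(800):
    p = phi(theta)
    f, b = refresh(f, b, p, *blocks[t % 2])        # partial inference only
    theta += lr * (data_mean - marginals(f, b, p)) # approximate gradient

p = phi(theta)
f, b = refresh(f, b, p, 0, n)                      # one full pass to check
print(np.abs(marginals(f, b, p) - data_mean).max())  # ~0 at convergence
```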

    Recent advances in imprecise-probabilistic graphical models

    We summarise recent advances, and provide pointers to the literature, on inference and identification for specific types of probabilistic graphical models that use imprecise probabilities. Robust inferences can be made in so-called credal networks when the local models attached to their nodes are imprecisely specified as conditional lower previsions, by using exact algorithms whose complexity is comparable to that of their precise-probabilistic counterparts.
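
    As a flavour of what robust inference with imprecise probabilities means, here is a minimal Python sketch, illustrative rather than taken from the survey: a two-node credal network with interval-valued local models, where lower and upper marginals are obtained by enumerating the extreme points of each local credal set. All the interval numbers are assumptions.

```python
# Tiny credal network A -> B with binary variables (illustrative numbers).
# The marginal P(B=1) is multilinear in the free parameters, so its
# extrema over the credal sets lie at vertices: enumerate the interval
# endpoints and take the min and max.
from itertools import product

pA = (0.3, 0.5)                 # P(A=1) in [0.3, 0.5]
pB_given = {0: (0.2, 0.4),      # P(B=1 | A=0) in [0.2, 0.4]
            1: (0.6, 0.9)}      # P(B=1 | A=1) in [0.6, 0.9]

values = [
    (1 - a1) * b0 + a1 * b1     # P(B=1) for one vertex of the credal sets
    for a1, b0, b1 in product(pA, pB_given[0], pB_given[1])
]

print(f"P(B=1) in [{min(values):.3f}, {max(values):.3f}]")
# -> P(B=1) in [0.320, 0.650]
```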