
    Sequential sampling of junction trees for decomposable graphs

    The junction-tree representation provides an attractive structural property for organizing a decomposable graph. In this study, we present a novel stochastic algorithm, which we call the junction-tree expander, for sequential sampling of junction trees for decomposable graphs. We show that recursive application of the junction-tree expander, expanding the underlying graph incrementally with one vertex at a time, has full support on the space of junction trees with any given number of underlying vertices. A direct application of our suggested algorithm is demonstrated in a sequential Monte Carlo setting designed for sampling from distributions on spaces of decomposable graphs, where the junction-tree expander can be effectively employed as a proposal kernel; see the companion paper Olsson et al. 2019 [16]. A numerical study illustrates the utility of our approach with two examples: in the first, we show how the junction-tree expander can be successfully incorporated into a particle Gibbs sampler for Bayesian structure learning in decomposable graphical models; in the second, we provide an unbiased estimator of the number of decomposable graphs with a given number of vertices. All the methods proposed in the paper are implemented in the Python library trilearn.
    Comment: 31 pages, 7 figures
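    To make the expansion step concrete: the key invariant behind one-vertex-at-a-time growth is that connecting a new vertex to a subset of an existing clique preserves decomposability, so a junction tree can be extended node by node. Below is a minimal, self-contained Python sketch of this invariant only; it is not the paper's junction-tree expander (which uses a carefully designed proposal distribution) and not the trilearn API, and the function name and subset-sampling rule are hypothetical.

```python
import random

def expand_junction_tree(cliques, tree_edges, new_vertex, rng=random):
    """Attach one new vertex to a growing junction tree.

    Pick an existing clique uniformly, choose a proper subset of it as
    the separator, and attach the clique ``separator | {new_vertex}``
    as a new tree node.  Because the new vertex is adjacent only to a
    subset of one clique, the expanded graph stays decomposable.
    """
    anchor = rng.choice(cliques)
    subset = [u for u in anchor if rng.random() < 0.5]
    if len(subset) == len(anchor):              # force a proper subset so
        subset.pop(rng.randrange(len(subset)))  # the anchor stays maximal
    separator = frozenset(subset)   # empty separator => new component
    new_clique = separator | {new_vertex}
    cliques.append(new_clique)
    tree_edges.append((anchor, new_clique, separator))

# Grow a random decomposable graph on 6 vertices, one vertex at a time.
cliques, tree_edges = [frozenset({0})], []
for v in range(1, 6):
    expand_junction_tree(cliques, tree_edges, v)
for c1, c2, sep in tree_edges:
    print(sorted(c1), "-", sorted(sep), "-", sorted(c2))
```

    The printed triples list each tree edge as clique, separator, clique; the cliques and separators together encode the sampled decomposable graph.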

    A closed-form approach to Bayesian inference in tree-structured graphical models

    We consider the inference of the structure of an undirected graphical model in an exact Bayesian framework. To avoid convergence issues and highly demanding Monte Carlo sampling, we aim at achieving the inference with closed-form posteriors, avoiding any sampling step. This task would be intractable without any restriction on the considered graphs, so we limit our exploration to mixtures of spanning trees. We investigate under which conditions on the priors, on both tree structures and parameters, exact Bayesian inference can be achieved. Under these conditions, we derive a fast and exact algorithm to compute the posterior probability for an edge to belong to the tree model, using an algebraic result called the Matrix-Tree theorem. We show that the assumptions we have made do not prevent our approach from performing well on synthetic and flow cytometry data.
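    The role of the Matrix-Tree theorem here can be illustrated directly. For a distribution over spanning trees with P(T) proportional to the product of edge weights, the marginal probability that an edge (u, v) belongs to the tree equals its weight times the effective resistance between u and v, which can be read off the pseudo-inverse of the weighted graph Laplacian. The numpy sketch below shows this computation with generic weights standing in for the posterior edge scores the paper derives; the function name is hypothetical and this is not the authors' code.

```python
import numpy as np

def spanning_tree_edge_marginals(n, weighted_edges):
    """P(e in T) for a spanning-tree distribution with P(T) proportional
    to the product of edge weights, via the Matrix-Tree theorem:
    P((u, v) in T) = w_uv * R_eff(u, v), where R_eff is the effective
    resistance from the pseudo-inverse of the weighted Laplacian.
    """
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:          # weighted graph Laplacian
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    L_pinv = np.linalg.pinv(L)              # Moore-Penrose pseudo-inverse
    return {(u, v): w * (L_pinv[u, u] + L_pinv[v, v] - 2 * L_pinv[u, v])
            for u, v, w in weighted_edges}

# Triangle with unequal weights: the three spanning trees have weights
# 2, 3 and 6, so e.g. P((0, 1) in T) = (2 + 3) / 11.  The marginals sum
# to 2, the number of edges in any spanning tree of a triangle.
probs = spanning_tree_edge_marginals(3, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)])
print(probs, sum(probs.values()))
```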

    A conjugate prior for discrete hierarchical log-linear models

    In Bayesian analysis of multi-way contingency tables, the selection of a prior distribution for either the log-linear parameters or the cell probabilities is a major challenge. In this paper, we define a flexible family of conjugate priors for the wide class of discrete hierarchical log-linear models, which includes the class of graphical models. These priors are defined as the Diaconis--Ylvisaker conjugate priors on the log-linear parameters subject to "baseline constraints" under multinomial sampling. We also derive the induced prior on the cell probabilities and show that it is a generalization of the hyper Dirichlet prior. We show that this prior has several desirable properties and illustrate its usefulness by identifying the most probable decomposable, graphical and hierarchical log-linear models for a six-way contingency table.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS669 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
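    The conjugate structure described here has a simple computational signature: for an exponential family p(x | theta) = exp(theta . t(x) - k(theta)), a Diaconis--Ylvisaker prior proportional to exp(s . theta - alpha * k(theta)) is updated by adding the observed sufficient statistics to s and the sample size to alpha. A minimal sketch of that update follows, with a hypothetical function name and a toy 2x2-table feature map chosen purely for illustration, not the paper's baseline-constrained parametrization.

```python
import numpy as np

def dy_posterior_update(prior_s, prior_alpha, suff_stats):
    """Diaconis-Ylvisaker conjugate update for an exponential family.

    Prior on the natural (log-linear) parameters theta:
        pi(theta | s, alpha)  proportional to  exp(s . theta - alpha * k(theta))
    The posterior keeps the same form with updated hyperparameters.
    """
    suff_stats = np.asarray(suff_stats, dtype=float)
    return prior_s + suff_stats.sum(axis=0), prior_alpha + len(suff_stats)

# Toy 2x2 table with corner ("baseline") coding: an observation in cell
# (i, j) has sufficient statistics t(x) = (i, j, i * j).  The prior
# hyperparameters below are illustrative, not recommended defaults.
data = [(0, 0), (0, 1), (1, 1), (1, 1)]
t = [(i, j, i * j) for i, j in data]
post_s, post_alpha = dy_posterior_update(np.zeros(3), 1.0, t)
print(post_s, post_alpha)   # [2. 3. 2.] and 5.0
```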