Partition MCMC for inference on acyclic digraphs
Acyclic digraphs are the underlying representation of Bayesian networks, a
widely used class of probabilistic graphical models. Learning the underlying
graph from data is a way of gaining insights about the structural properties of
a domain. Structure learning forms one of the inference challenges of
statistical graphical models.
MCMC methods, notably structure MCMC, which sample graphs from the posterior
distribution given the data, are probably the only viable option for Bayesian
model averaging. Score modularity and restrictions on the number of parents of
each node allow the graphs to be grouped into larger collections, which can be
scored as a whole to improve the chain's convergence. Current examples of
algorithms taking advantage of grouping are the biased order MCMC, which acts
on the alternative space of permuted triangular matrices, and non-ergodic
edge-reversal moves.
Here we propose a novel algorithm, which employs the underlying combinatorial
structure of DAGs to define a new grouping. As a result, convergence is improved
compared to structure MCMC, while the property of producing an unbiased sample
is retained. Finally, the method can be combined with edge-reversal moves to
improve the sampler further.
Comment: Revised version. 34 pages, 16 figures. R code available at
https://github.com/annlia/partitionMCM
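The structure-MCMC baseline this abstract improves upon can be sketched in a few lines: propose a single-edge toggle, reject proposals that leave the space of DAGs, and accept the rest via a Metropolis-Hastings step. The sketch below is an illustration of that baseline, not of the partition sampler itself; `toy_log_score` is a hypothetical stand-in for a real modular score such as BDeu or BGe.

```python
import math
import random

def is_acyclic(adj, n):
    # Kahn's algorithm: a directed graph is a DAG iff every node
    # can be placed in a topological order.
    indeg = [0] * n
    for u in range(n):
        for v in adj[u]:
            indeg[v] += 1
    stack = [u for u in range(n) if indeg[u] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == n

def toy_log_score(adj):
    # Hypothetical stand-in for a modular score (e.g. BDeu/BGe):
    # penalise each edge, so the chain favours sparse DAGs.
    return -0.5 * sum(len(children) for children in adj)

def structure_mcmc(n, steps, seed=0):
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]   # adj[u] = set of children of u
    score = toy_log_score(adj)
    for _ in range(steps):
        u, v = rng.sample(range(n), 2)
        # Propose toggling the single edge u -> v.
        if v in adj[u]:
            adj[u].discard(v)
        else:
            adj[u].add(v)
            if not is_acyclic(adj, n):
                adj[u].discard(v)     # proposal left the DAG space: reject
                continue
        delta = toy_log_score(adj) - score
        # Metropolis-Hastings accept/reject (proposal is symmetric).
        if rng.random() < math.exp(min(0.0, delta)):
            score += delta            # accept
        elif v in adj[u]:
            adj[u].discard(v)         # reject: undo an addition
        else:
            adj[u].add(v)             # reject: undo a deletion
    return adj
```

Every state visited by the chain is a DAG by construction, which is the invariant the partition sampler also maintains while grouping states into larger collections.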
Advances in Learning Bayesian Networks of Bounded Treewidth
This work presents novel algorithms for learning Bayesian network structures
with bounded treewidth. Both exact and approximate methods are developed. The
exact method combines mixed-integer linear programming formulations for
structure learning and treewidth computation. The approximate method consists
in uniformly sampling k-trees (maximal graphs of treewidth k), and
subsequently selecting, exactly or approximately, the best structure whose
moral graph is a subgraph of that k-tree. Some properties of these methods
are discussed and proven. The approaches are empirically compared to each other
and to a state-of-the-art method for learning bounded treewidth structures on a
collection of public data sets with up to 100 variables. The experiments show
that our exact algorithm outperforms the state of the art, and that the
approximate approach is fairly accurate.
Comment: 23 pages, 2 figures, 3 tables
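The approximate method hinges on k-trees, the maximal graphs of treewidth k. These admit a simple inductive construction: start from a (k+1)-clique and join each further vertex to all members of some existing k-clique. The sketch below is only an illustration of that structure; the function name `random_ktree` is an assumption, and picking the clique at random per step does not reproduce the uniform sampling over k-trees that the paper requires.

```python
import random
from itertools import combinations

def random_ktree(n, k, seed=0):
    """Grow a k-tree on n >= k + 1 vertices.

    Start from a (k+1)-clique; every further vertex is joined to all
    members of a randomly chosen existing k-clique. Note: this greedy
    construction does NOT sample k-trees uniformly; it only
    illustrates what a k-tree looks like.
    """
    rng = random.Random(seed)
    # All edges of the initial (k+1)-clique, stored as (small, large) pairs.
    edges = {(i, j) for i in range(k + 1) for j in range(i + 1, k + 1)}
    cliques = [frozenset(c) for c in combinations(range(k + 1), k)]
    for v in range(k + 1, n):
        c = rng.choice(cliques)
        for u in c:
            edges.add((u, v))   # u < v because v is the newest vertex
        # v together with each (k-1)-subset of c forms a fresh k-clique.
        for sub in combinations(sorted(c), k - 1):
            cliques.append(frozenset(sub) | {v})
    return edges
```

A useful sanity check on any such construction: a k-tree on n vertices always has exactly k*n - k*(k+1)/2 edges, since the seed clique contributes k(k+1)/2 edges and each later vertex adds k more.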
Learning Markov networks with context-specific independences
Learning the Markov network structure from data is a problem that has
received considerable attention in machine learning, and in many other
application fields. This work focuses on a particular approach to this problem
called independence-based learning. This approach guarantees that the correct
structure is learned efficiently, whenever the data are sufficient for
representing the underlying distribution. However, an important limitation of
this approach is that the learned structures are encoded in an undirected
graph. The problem with graphs is that they cannot encode some types of
independence relations, such as context-specific independences (CSIs). These
are a particular case of conditional independences that hold only for certain
assignments of the conditioning set, in contrast to ordinary conditional
independences, which must hold for all assignments. In this work we present
CSPC, an independence-based
algorithm for learning structures that encode context-specific independences,
and encoding them in a log-linear model, instead of a graph. The central idea
of CSPC is combining the theoretical guarantees provided by the
independence-based approach with the benefits of representing complex
structures by using features in a log-linear model. We present experiments in a
synthetic case, showing that CSPC is more accurate than state-of-the-art
independence-based algorithms when the underlying distribution contains CSIs.
Comment: 8 pages, 6 figures
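The notion of context-specific independence can be made concrete with a toy joint distribution: below, X and Y are independent given Z = 0 but dependent given Z = 1, so an undirected graph must keep the X–Y edge even though it is superfluous in half the contexts. The distribution `P` and the checker `indep_given` are hand-built for illustration and do not come from the paper.

```python
from itertools import product

# Hand-built joint distribution over binary X, Y, Z (illustration only):
# given Z = 0, X and Y are independent; given Z = 1, they are correlated.
P = {}
for x, y in product((0, 1), repeat=2):
    P[(x, y, 0)] = 0.5 * 0.25                      # p(z=0) * p(x,y | z=0)
    P[(x, y, 1)] = 0.5 * (0.4 if x == y else 0.1)  # p(z=1) * p(x,y | z=1)

def indep_given(z, tol=1e-9):
    # Check p(x,y|z) == p(x|z) * p(y|z) for every assignment of x, y.
    pz = sum(p for (_, _, zz), p in P.items() if zz == z)
    for x, y in product((0, 1), repeat=2):
        pxy = P[(x, y, z)] / pz
        px = sum(P[(x, yy, z)] for yy in (0, 1)) / pz
        py = sum(P[(xx, y, z)] for xx in (0, 1)) / pz
        if abs(pxy - px * py) > tol:
            return False
    return True
```

Here `indep_given(0)` holds while `indep_given(1)` fails, which is exactly the kind of relation a log-linear model with context-indexed features can represent but a plain undirected graph cannot.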