Partition MCMC for inference on acyclic digraphs
Acyclic digraphs are the underlying representation of Bayesian networks, a
widely used class of probabilistic graphical models. Learning the underlying
graph from data is a way of gaining insights about the structural properties of
a domain. Structure learning forms one of the inference challenges of
statistical graphical models.
MCMC methods, notably structure MCMC, which sample graphs from the posterior
distribution given the data, are probably the only viable option for Bayesian
model averaging. Score modularity and restrictions on the number of parents of
each node allow the graphs to be grouped into larger collections, which can be
scored as a whole to improve the chain's convergence. Current examples of
algorithms taking advantage of such grouping are the biased order MCMC, which
acts on the alternative space of permuted triangular matrices, and non-ergodic
edge reversal moves.
Here we propose a novel algorithm, which employs the underlying combinatorial
structure of DAGs to define a new grouping. As a result, convergence is
improved compared to structure MCMC, while the property of producing an
unbiased sample is retained. Finally, the method can be combined with edge
reversal moves to improve the sampler further.
Comment: Revised version. 34 pages, 16 figures. R code available at
https://github.com/annlia/partitionMCM
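To make the baseline the abstract refers to concrete, below is a minimal, illustrative sketch of plain structure MCMC over DAG adjacency matrices; it is not the partition MCMC proposed in the paper, and the toy least-squares score with a crude complexity penalty merely stands in for a real modular score such as BDeu or BGe. All names and parameters here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def is_dag(adj):
    """Kahn's algorithm: a digraph is acyclic iff every node can be peeled off."""
    adj = adj.copy()
    indeg = adj.sum(axis=0)
    stack = [v for v in range(adj.shape[0]) if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for v in np.flatnonzero(adj[u]):
            adj[u, v] = 0
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    return seen == adj.shape[0]

def toy_score(adj, data):
    """Stand-in for a modular network score: negative residual sum of squares
    of each node regressed on its parents, plus a per-parent penalty."""
    score = 0.0
    for v in range(adj.shape[0]):
        parents = np.flatnonzero(adj[:, v])
        y = data[:, v]
        if parents.size:
            X = data[:, parents]
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
        else:
            resid = y - y.mean()
        score -= 0.5 * float(resid @ resid) + 2.0 * parents.size
    return score

def structure_mcmc(data, n_iter=2000):
    """Metropolis-Hastings over DAGs with single-edge flip (add/delete) moves."""
    n = data.shape[1]
    adj = np.zeros((n, n), dtype=int)
    current = toy_score(adj, data)
    samples = []
    for _ in range(n_iter):
        i, j = rng.integers(n, size=2)
        proposal = adj.copy()
        if i != j:
            proposal[i, j] = 1 - proposal[i, j]   # flip one directed edge
        if i != j and is_dag(proposal):
            new = toy_score(proposal, data)
            # the flip proposal is symmetric, so accept with the score ratio
            if np.log(rng.random()) < new - current:
                adj, current = proposal, new
        samples.append(adj.copy())
    return samples

# toy usage on synthetic data with four variables
data = rng.normal(size=(200, 4))
graphs = structure_mcmc(data)
print("posterior edge frequencies:\n", np.mean(graphs, axis=0).round(2))
```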
Learning All Credible Bayesian Network Structures for Model Averaging
A Bayesian network is a widely used probabilistic graphical model with
applications in knowledge discovery and prediction. Learning a Bayesian network
(BN) from data can be cast as an optimization problem using the well-known
score-and-search approach. However, selecting a single model (i.e., the best
scoring BN) can be misleading or may not achieve the best possible accuracy. An
alternative to committing to a single model is to perform some form of Bayesian
or frequentist model averaging, where the space of possible BNs is sampled or
enumerated in some fashion. Unfortunately, existing approaches for model
averaging either severely restrict the structure of the Bayesian network or
have only been shown to scale to networks with fewer than 30 random variables.
In this paper, we propose a novel approach to model averaging inspired by
performance guarantees in approximation algorithms. Our approach has two
primary advantages. First, our approach only considers credible models in that
they are optimal or near-optimal in score. Second, our approach is more
efficient and scales to significantly larger Bayesian networks than existing
approaches.
Comment: Under review by JMLR. arXiv admin note: substantial text overlap with
arXiv:1811.0503
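As a rough illustration of averaging only over credible (near-optimal) structures, the sketch below keeps candidate DAGs whose log scores lie within a chosen gap of the best and score-weights their edge indicators. The function name, the epsilon threshold, and the toy matrices are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def credible_edge_probabilities(adjacencies, log_scores, epsilon=10.0):
    """adjacencies: list of 0/1 adjacency matrices; log_scores: their log
    scores; epsilon: allowed gap to the best score for a model to count
    as credible."""
    adjacencies = np.asarray(adjacencies, dtype=float)
    log_scores = np.asarray(log_scores, dtype=float)
    keep = log_scores >= log_scores.max() - epsilon      # near-optimal models
    kept_scores = log_scores[keep]
    weights = np.exp(kept_scores - kept_scores.max())    # stable softmax weights
    weights /= weights.sum()
    # weighted average of each edge indicator over the credible set
    return np.tensordot(weights, adjacencies[keep], axes=1)

# toy usage with three hypothetical 3-node structures and made-up scores
A1 = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
A2 = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]])
A3 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
print(credible_edge_probabilities([A1, A2, A3], [-100.2, -101.0, -130.5]))
```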
Consensus and meta-analysis regulatory networks for combining multiple microarray gene expression datasets
Microarray data is a key source of experimental data for modelling gene regulatory interactions from expression levels. With the rapid increase of publicly available microarray data comes the opportunity to produce regulatory network models based on multiple datasets. Such models are potentially more robust, carry greater confidence, and place less reliance on a single dataset. However, combining datasets directly can be difficult, as experiments are often conducted on different microarray platforms and in different laboratories, leading to inherent biases in the data that are not always removed through pre-processing such as normalisation. In this paper we compare two frameworks for combining microarray datasets to model regulatory networks: pre- and post-learning aggregation. In pre-learning approaches, such as using simple scale-normalisation prior to the concatenation of datasets, a model is learnt from a combined dataset, whilst in post-learning aggregation individual models are learnt from each dataset and the models are combined. We present two novel approaches for post-learning aggregation, each based on aggregating high-level features of Bayesian network models that have been generated from different microarray expression datasets. Meta-analysis Bayesian networks are based on combining statistical confidences attached to network edges, whilst Consensus Bayesian networks identify consistent network features across all datasets. We apply both approaches to multiple datasets from synthetic and real (Escherichia coli and yeast) networks and demonstrate that both methods can improve on networks learnt from a single dataset or an aggregated dataset formed using a standard scale-normalisation.
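A minimal sketch of the two post-learning aggregation ideas described above, assuming each dataset has already produced a matrix of edge confidences (for example, bootstrap frequencies from a Bayesian network learner). The function names and the 0.5 threshold are illustrative assumptions, not the exact procedures of the paper.

```python
import numpy as np

def meta_analysis_network(confidences, threshold=0.5):
    """Combine per-dataset edge confidences by averaging, then threshold."""
    avg = np.mean(confidences, axis=0)
    return (avg >= threshold).astype(int), avg

def consensus_network(confidences, threshold=0.5):
    """Keep only edges whose confidence exceeds the threshold in every dataset."""
    present = [(c >= threshold) for c in confidences]
    return np.logical_and.reduce(present).astype(int)

# toy usage: edge-confidence matrices from three hypothetical datasets
rng = np.random.default_rng(1)
conf = [rng.random((5, 5)) for _ in range(3)]
consensus = consensus_network(conf)
meta, avg_conf = meta_analysis_network(conf)
print("consensus edges:", int(consensus.sum()), "meta-analysis edges:", int(meta.sum()))
```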