On the Prior and Posterior Distributions Used in Graphical Modelling
Graphical model learning and inference are often performed using Bayesian
techniques. In particular, learning is usually performed in two separate steps.
First, the graph structure is learned from the data; then the parameters of the
model are estimated conditional on that graph structure. While the probability
distributions involved in this second step have been studied in depth, the ones
used in the first step have not been explored in as much detail.
In this paper, we will study the prior and posterior distributions defined
over the space of the graph structures for the purpose of learning the
structure of a graphical model. In particular, we will provide a
characterisation of the behaviour of those distributions as a function of the
possible edges of the graph. We will then use the properties resulting from
this characterisation to define measures of structural variability for both
Bayesian and Markov networks, and we will point out some of their possible
applications.
Comment: 28 pages, 6 figures
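One concrete way to explore the posterior distribution over graph structures, loosely in the spirit of this paper, is to estimate edge inclusion probabilities by bootstrap. A minimal sketch in R with the bnlearn package (the dataset and the number of replicates are illustrative choices, not taken from the paper):

    library(bnlearn)
    data(learning.test)  # small discrete dataset shipped with bnlearn

    # Re-learn the structure on 200 bootstrap resamples; the relative
    # frequency of each arc estimates its posterior inclusion probability.
    strength <- boot.strength(learning.test, R = 200, algorithm = "hc")

    # Arcs whose estimated inclusion probability exceeds 0.5.
    strength[strength$strength > 0.5 & strength$direction >= 0.5, ]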
Learning Bayesian Networks with the bnlearn R Package
bnlearn is an R package which includes several algorithms for learning the
structure of Bayesian networks with either discrete or continuous variables.
Both constraint-based and score-based algorithms are implemented, and can use
the functionality provided by the snow package to improve their performance via
parallel computing. Several network scores and conditional independence
tests are available for both the learning algorithms and independent use.
Advanced plotting options are provided by the Rgraphviz package.
Comment: 22 pages, 4 pictures
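A minimal usage sketch (only the learning.test dataset that ships with bnlearn is assumed; the two-worker cluster is an arbitrary choice):

    library(bnlearn)
    data(learning.test)

    # Score-based structure learning: hill climbing.
    dag.hc <- hc(learning.test)

    # Constraint-based structure learning: Grow-Shrink.
    dag.gs <- gs(learning.test)

    # Constraint-based algorithms can run their conditional independence
    # tests in parallel on a snow-style cluster.
    library(parallel)  # provides the snow API in current R releases
    cl <- makeCluster(2)
    dag.par <- gs(learning.test, cluster = cl)
    stopCluster(cl)

    # Parameter estimation conditional on the learned structure.
    fitted <- bn.fit(dag.hc, learning.test)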
Measures of Variability for Bayesian Network Graphical Structures
The structure of a Bayesian network includes a great deal of information
about the probability distribution of the data, which is uniquely identified
given some general distributional assumptions. Therefore it is important to
study its variability, which can be used to compare the performance of
different learning algorithms and to measure the strength of any arbitrary
subset of arcs.
In this paper we will introduce some descriptive statistics and the
corresponding parametric and Monte Carlo tests on the undirected graph
underlying the structure of a Bayesian network, modelled as a multivariate
Bernoulli random variable. A simple numeric example and a comparison of the
performance of some structure learning algorithms on small samples will then
illustrate their use.
Comment: 19 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:0909.168
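To make the multivariate Bernoulli view concrete: each possible edge is an indicator variable across bootstrap replicates, so its sample mean and the implied Bernoulli variance are natural descriptive statistics, and distances to a reference graph compare learning algorithms. A sketch in R (dataset, algorithm, and replicate count are illustrative; the generating network for learning.test is the one documented in the bnlearn manual):

    library(bnlearn)
    data(learning.test)

    # Edge indicators across bootstrapped structure-learning runs:
    # strength is the sample mean of each indicator, and p * (1 - p)
    # the corresponding Bernoulli variance.
    st <- boot.strength(learning.test, R = 200, algorithm = "hc")
    p <- st$strength
    head(data.frame(st[, c("from", "to")], mean = p, variance = p * (1 - p)))

    # Comparing algorithms via the structural Hamming distance to the
    # generating network.
    true.dag <- model2network("[A][C][F][B|A][D|A:C][E|B:F]")
    shd(hc(learning.test), true.dag)
    shd(gs(learning.test), true.dag)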
An Empirical-Bayes Score for Discrete Bayesian Networks
Bayesian network structure learning is often performed in a Bayesian setting,
by evaluating candidate structures using their posterior probabilities for a
given data set. Score-based algorithms then use those posterior probabilities
as an objective function and return the maximum a posteriori network as the
learned model. For discrete Bayesian networks, the canonical choice for a
posterior score is the Bayesian Dirichlet equivalent uniform (BDeu) marginal
likelihood with a uniform (U) graph prior (Heckerman et al., 1995). Its
favourable theoretical properties derive from assuming a uniform prior both on
the space of the network structures and on the space of the parameters of the
network. In this paper, we revisit the limitations of these assumptions and we
introduce an alternative set of assumptions and the resulting score: the
Bayesian Dirichlet sparse (BDs) empirical Bayes marginal likelihood with a
marginal uniform (MU) graph prior. We evaluate its performance in an extensive
simulation study, showing that MU+BDs is more accurate than U+BDeu both in
learning the structure of the network and in predicting new observations, while
not being computationally more complex to estimate.
Comment: 12 pages, PGM 2016
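A hedged sketch of comparing the two scores in practice (it assumes the bds score type and the marginal graph prior exposed by recent bnlearn versions; the dataset and imaginary sample size are arbitrary choices):

    library(bnlearn)
    data(learning.test)

    dag <- hc(learning.test)

    # Canonical choice: BDeu marginal likelihood with a uniform (U) prior.
    score(dag, learning.test, type = "bde", iss = 1, prior = "uniform")

    # Alternative: BDs marginal likelihood with the marginal uniform (MU)
    # graph prior.
    score(dag, learning.test, type = "bds", iss = 1, prior = "marginal")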
Dirichlet Bayesian Network Scores and the Maximum Relative Entropy Principle
A classic approach for learning Bayesian networks from data is to identify a
maximum a posteriori (MAP) network structure. In the case of discrete Bayesian
networks, MAP networks are selected by maximising one of several possible
Bayesian Dirichlet (BD) scores; the most famous is the Bayesian Dirichlet
equivalent uniform (BDeu) score from Heckerman et al. (1995). The key properties
of BDeu arise from its uniform prior over the parameters of each local
distribution in the network, which makes structure learning computationally
efficient; it does not require the elicitation of prior knowledge from experts;
and it satisfies score equivalence.
In this paper we will review the derivation and the properties of BD scores,
and of BDeu in particular, and we will link them to the corresponding entropy
estimates to study them from an information theoretic perspective. To this end,
we will work in the context of the foundational work of Giffin and Caticha
(2007), who showed that Bayesian inference can be framed as a particular case
of the maximum relative entropy principle. We will use this connection to show
that BDeu should not be used for structure learning from sparse data, since it
violates the maximum relative entropy principle; and that it is also
problematic from a more classic Bayesian model selection perspective, because
it produces Bayes factors that are sensitive to the value of its only
hyperparameter. Using a large simulation study, we found in our previous work
(Scutari, 2016) that the Bayesian Dirichlet sparse (BDs) score seems to provide
better accuracy in structure learning; in this paper we further show that BDs
does not suffer from the issues above, and we recommend using it instead of
BDeu for sparse data. Finally, we will show that these issues are in fact
different aspects of the same problem and a consequence of the distributional
assumptions of the prior.
Comment: 20 pages, 4 figures; extended version submitted to Behaviormetrika
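The dependence of the Bayes factor on the hyperparameter is easy to observe empirically. In the sketch below (the two networks, the dataset, and the grid of iss values are illustrative choices) the same pair of structures is rescored under BDeu and BDs for several imaginary sample sizes, illustrating the sensitivity the abstract describes:

    library(bnlearn)
    data(learning.test)

    # The generating network for learning.test and a sparser rival.
    dag1 <- model2network("[A][C][F][B|A][D|A:C][E|B:F]")
    dag2 <- drop.arc(dag1, from = "A", to = "B")

    # log BF = log P(D | G1) - log P(D | G2), as a function of iss.
    for (iss in c(1, 5, 10, 50)) {
      bf.bdeu <- score(dag1, learning.test, type = "bde", iss = iss) -
                 score(dag2, learning.test, type = "bde", iss = iss)
      bf.bds  <- score(dag1, learning.test, type = "bds", iss = iss) -
                 score(dag2, learning.test, type = "bds", iss = iss)
      cat("iss =", iss, "logBF(BDeu) =", bf.bdeu, "logBF(BDs) =", bf.bds, "\n")
    }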
Decentralized Maximum Likelihood Estimation for Sensor Networks Composed of Nonlinearly Coupled Dynamical Systems
In this paper we propose a decentralized sensor network scheme capable of
reaching a globally optimal maximum likelihood (ML) estimate through
self-synchronization of nonlinearly coupled dynamical systems. Each node of the
network is composed of a sensor and a first-order dynamical system initialized
with the local measurements. Nearby nodes interact with each other by exchanging
their state values, and the final estimate is associated with the state derivative
of each dynamical system. We derive the conditions on the coupling mechanism
guaranteeing that, if the network observes one common phenomenon, each node
converges to the globally optimal ML estimate. We prove that the synchronized
state is globally asymptotically stable if the coupling strength exceeds a
given threshold. Acting on a single parameter, the coupling strength, we show
how, in the case of nonlinear coupling, the network behavior can switch from a
global consensus system to a spatial clustering system. Finally, we show the
effect of the network topology on the scalability properties of the network and
we validate our theoretical findings with simulation results.
Comment: Journal paper accepted by IEEE Transactions on Signal Processing
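At its heart the scheme is consensus-style averaging. A toy, hedged version in R (linear coupling only, explicit Euler integration; the ring topology, Gaussian noise model, step size, and coupling strength are arbitrary choices, not the paper's nonlinear design) shows coupled first-order systems initialised with noisy local measurements driving every node to the global ML estimate, here the sample mean:

    set.seed(1)
    n <- 10; theta <- 3               # nodes and the common phenomenon
    meas <- theta + rnorm(n)          # noisy local measurements
    x <- meas                         # states initialised with measurements

    # Ring topology: each node exchanges its state with two neighbours.
    A <- matrix(0, n, n)
    for (i in 1:n) A[i, c(i %% n + 1, (i - 2) %% n + 1)] <- 1

    K <- 0.5; dt <- 0.05              # coupling strength and Euler step
    for (step in 1:2000) {
      # Linear consensus dynamics: dx_i/dt = K * sum_j A[i, j] * (x_j - x_i).
      x <- x + dt * K * as.vector(A %*% x - rowSums(A) * x)
    }

    # All states agree on the sample mean, which is the ML estimate for
    # i.i.d. Gaussian noise with equal variances.
    c(consensus = x[1], ml.estimate = mean(meas))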