An Asymptotic Analysis of Probabilistic Logic Programming, with Implications for Expressing Projective Families of Distributions
In recent years, there has been increasing research on the scaling
behaviour of statistical relational representations with the size of the
domain, and on the connections between domain size dependence and lifted
inference. In particular, the asymptotic behaviour of statistical relational
representations has come under scrutiny, and projectivity was isolated as the
strongest form of domain size independence. In this contribution we show that
every probabilistic logic program under the distribution semantics is
asymptotically equivalent to a probabilistic logic program consisting only of
range-restricted clauses over probabilistic facts. To facilitate the
application of classical results from finite model theory, we introduce the
abstract distribution semantics, defined as an arbitrary logical theory over
probabilistic facts to bridge the gap to the distribution semantics underlying
probabilistic logic programming. In this representation, range-restricted logic
programs correspond to quantifier-free theories, making asymptotic quantifier
results available for use. We conclude that every probabilistic logic
program inducing a projective family of distributions is in fact captured by
this class, and we can infer interesting consequences for the expressivity of
probabilistic logic programs as well as for the asymptotic behaviour of
probabilistic rules.

Comment: 14 pages
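The distribution semantics described above can be sketched by brute force: each probabilistic fact is independently true with its stated probability, and the program's (range-restricted) rules act deterministically on each resulting possible world. The following is a minimal illustration with a hypothetical reachability program, not an example from the paper:

```python
from itertools import product

# Hypothetical probabilistic logic program (illustrative only):
#   0.6 :: edge(a, b).   0.7 :: edge(b, c).   0.3 :: edge(a, c).
#   path(X, Y) :- edge(X, Y).
#   path(X, Y) :- edge(X, Z), path(Z, Y).
prob_facts = {("edge", "a", "b"): 0.6,
              ("edge", "b", "c"): 0.7,
              ("edge", "a", "c"): 0.3}

def derive_path(edges, x, y, seen=()):
    """Deterministic consequence of the path/2 rules in one possible world."""
    if (x, y) in edges:
        return True
    return any(z not in seen and derive_path(edges, z, y, seen + (x,))
               for (u, z) in edges if u == x)

# Distribution semantics: sum the probabilities of all worlds (truth
# assignments to the probabilistic facts) in which the query succeeds.
facts = list(prob_facts)
query_prob = 0.0
for world in product([True, False], repeat=len(facts)):
    edges = {(f[1], f[2]) for f, on in zip(facts, world) if on}
    p = 1.0
    for f, on in zip(facts, world):
        p *= prob_facts[f] if on else 1 - prob_facts[f]
    if derive_path(edges, "a", "c"):
        query_prob += p

print(round(query_prob, 4))  # prints 0.594
```

Here P(path(a, c)) = 1 - (1 - 0.3)(1 - 0.6 * 0.7) = 0.594, matching the enumeration; real systems replace this exponential sweep with knowledge compilation or lifted inference.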
Labeled Directed Acyclic Graphs: a generalization of context-specific independence in directed graphical models
We introduce a novel class of labeled directed acyclic graph (LDAG) models
for finite sets of discrete variables. LDAGs generalize earlier proposals for
allowing local structures in the conditional probability distribution of a
node, such that unrestricted label sets determine which edges can be deleted
from the underlying directed acyclic graph (DAG) for a given context. Several
properties of these models are derived, including a generalization of the
concept of Markov equivalence classes. Efficient Bayesian learning of LDAGs is
enabled by introducing an LDAG-based factorization of the Dirichlet prior for
the model parameters, such that the marginal likelihood can be calculated
analytically. In addition, we develop a novel prior distribution for the model
structures that can appropriately penalize a model for its labeling complexity.
A non-reversible Markov chain Monte Carlo algorithm combined with a greedy
hill-climbing approach is used to illustrate the useful properties of LDAG
models on both real and synthetic data sets.

Comment: 26 pages, 17 figures
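The analytic marginal likelihood mentioned above rests on the standard Dirichlet-multinomial closed form; the sketch below (an illustration of that general scoring idea, not the paper's exact prior) shows how merging parent contexts under one label pools their counts before scoring:

```python
from math import lgamma

def log_marginal_likelihood(counts, alpha=1.0):
    """Log of the multinomial likelihood integrated over a symmetric
    Dirichlet(alpha, ..., alpha) prior (Dirichlet-multinomial form)."""
    k, n = len(counts), sum(counts)
    res = lgamma(k * alpha) - lgamma(k * alpha + n)
    for c in counts:
        res += lgamma(alpha + c) - lgamma(alpha)
    return res

# In an LDAG, a label declaring two parent contexts equivalent pools their
# counts into one scored table entry (counts below are made up):
separate = log_marginal_likelihood([3, 1]) + log_marginal_likelihood([4, 2])
pooled = log_marginal_likelihood([7, 3])  # contexts merged under one label
print(pooled > separate)  # pooling wins when the contexts look alike
```

Because both scores are exact closed forms, model search only needs fast `lgamma` evaluations, which is what makes Bayesian learning of labelings tractable.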