Lifted Relax, Compensate and then Recover: From Approximate to Exact Lifted Probabilistic Inference
We propose an approach to lifted approximate inference for first-order
probabilistic models, such as Markov logic networks. It is based on performing
exact lifted inference in a simplified first-order model, which is found by
relaxing first-order constraints, and then compensating for the relaxation.
These simplified models can be incrementally improved by carefully recovering
constraints that have been relaxed, also at the first-order level. This leads
to a spectrum of approximations, with lifted belief propagation on one end, and
exact lifted inference on the other. We discuss how relaxation, compensation,
and recovery can be performed, all at the first-order level, and show
empirically that our approach substantially improves on the approximations of
both propositional solvers and lifted belief propagation.
Comment: Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence (UAI 2012).
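The relax-compensate-recover loop is easiest to see propositionally before lifting. Below is a minimal sketch under assumed toy potentials, not the paper's first-order algorithm: a three-variable cycle is relaxed by copying X1, compensated with unary potentials until the copy's marginal matches the original's (a fixed point corresponding to belief propagation), and then recovered to exact inference.

```python
# Minimal propositional sketch of relax-compensate-recover (RCR). The
# three-variable cycle, its potentials, and the update schedule are
# illustrative assumptions; the paper performs this scheme at the
# first-order level.
import itertools

f = {("x1", "x2"): {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
     ("x2", "x3"): {(0, 0): 1.5, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0},
     ("x3", "x1c"): {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 2.0, (1, 1): 1.0}}

def marginals(recovered, t1, t1c):
    """Exact enumeration of the (relaxed or recovered) toy model."""
    p1, p1c = [0.0, 0.0], [0.0, 0.0]
    for x1, x2, x3, x1c in itertools.product((0, 1), repeat=4):
        if recovered and x1 != x1c:
            continue  # recovery: re-impose the equivalence X1 = X1c
        wgt = (f[("x1", "x2")][(x1, x2)] * f[("x2", "x3")][(x2, x3)] *
               f[("x3", "x1c")][(x3, x1c)] * t1[x1] * t1c[x1c])
        p1[x1] += wgt
        p1c[x1c] += wgt
    return [p / sum(p1) for p in p1], [q / sum(p1c) for q in p1c]

# Relax: X1c is a copy of X1 with the equivalence constraint dropped.
# Compensate: tune unary potentials t1, t1c until p(X1) = p(X1c).
t1, t1c = [1.0, 1.0], [1.0, 1.0]
for _ in range(50):
    p1, p1c = marginals(False, t1, t1c)
    t1, t1c = ([p1c[x] / t1c[x] for x in (0, 1)],
               [p1[x] / t1[x] for x in (0, 1)])

print("relaxed + compensated:", marginals(False, t1, t1c)[0])
print("recovered (exact):    ", marginals(True, [1.0, 1.0], [1.0, 1.0])[0])
```

Relaxing every constraint and compensating yields the belief-propagation end of the spectrum; recovering all of them yields the exact end.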
On the Complexity and Approximation of Binary Evidence in Lifted Inference
Lifted inference algorithms exploit symmetries in probabilistic models to
speed up inference. They show impressive performance when calculating
unconditional probabilities in relational models, but often resort to
non-lifted inference when computing conditional probabilities. The reason is
that conditioning on evidence breaks many of the model's symmetries, which can
preempt standard lifting techniques. Recent theoretical results show, for
example, that conditioning on evidence which corresponds to binary relations is
#P-hard, suggesting that no lifting is to be expected in the worst case. In
this paper, we balance this negative result by identifying the Boolean rank of
the evidence as a key parameter for characterizing the complexity of
conditioning in lifted inference. In particular, we show that conditioning on
binary evidence with bounded Boolean rank is efficient. This opens up the
possibility of approximating evidence by a low-rank Boolean matrix
factorization, which we investigate both theoretically and empirically.
Comment: To appear in Advances in Neural Information Processing Systems 26 (NIPS), Lake Tahoe, USA, December 2013.
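To make the complexity parameter concrete: the Boolean rank of a 0/1 matrix M is the smallest r such that M factors as a Boolean product of an n-by-r and an r-by-m matrix. The brute-force sketch below uses an assumed toy matrix; practical use would rely on Boolean matrix factorization heuristics, as the abstract suggests.

```python
# Brute-force Boolean rank: smallest r with M = U o V, where
# (U o V)[i][j] = OR_k (U[i][k] AND V[k][j]). Exponential search,
# for illustration only; the example matrix is an assumption.
import itertools

def boolean_product(U, V):
    return tuple(tuple(int(any(u and v for u, v in zip(row, col)))
                       for col in zip(*V)) for row in U)

def boolean_rank(M):
    n, m = len(M), len(M[0])
    for r in range(1, min(n, m) + 1):
        for U in itertools.product(itertools.product((0, 1), repeat=r), repeat=n):
            for V in itertools.product(itertools.product((0, 1), repeat=m), repeat=r):
                if boolean_product(U, V) == M:
                    return r
    return 0  # only the all-zero matrix

M = ((1, 1, 0),
     (1, 1, 1),
     (0, 1, 1))
print(boolean_rank(M))  # 2, although the real-valued rank of M is 3
```

That Boolean rank can be strictly smaller than real-valued rank is part of what makes low-rank Boolean approximations of binary evidence attractive.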
Tractability through Exchangeability: A New Perspective on Efficient Probabilistic Inference
Exchangeability is a central notion in statistics and probability theory. The
assumption that an infinite sequence of data points is exchangeable is at the
core of Bayesian statistics. However, finite exchangeability as a statistical
property that renders probabilistic inference tractable is less
well-understood. We develop a theory of finite exchangeability and its relation
to tractable probabilistic inference. The theory is complementary to that of
independence and conditional independence. We show that tractable inference in
probabilistic models with high treewidth and millions of variables can be
understood using the notion of finite (partial) exchangeability. We also show
that existing lifted inference algorithms implicitly utilize a combination of
conditional independence and partial exchangeability.
Comment: In Proceedings of the 28th AAAI Conference on Artificial Intelligence.
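The tractability mechanism can be stated concretely: under full finite exchangeability of n binary variables, the joint weight depends only on the number of ones, so sums over 2^n states collapse to n + 1 count terms. A minimal sketch, where the weight function g is an illustrative assumption:

```python
# Inference under full finite exchangeability: the sufficient statistic
# is the count k of ones, so the partition function and marginals need
# n + 1 terms instead of 2**n joint states. The weight function g is an
# illustrative assumption.
from math import comb, exp

n = 1000                                    # 2**1000 states, 1001 terms
g = lambda k: exp(-((k - n / 3) ** 2) / n)  # weight of any state with k ones

# Partition function: group the 2**n states by their count of ones.
Z = sum(comb(n, k) * g(k) for k in range(n + 1))

# Marginal Pr(X1 = 1): fix X1 = 1 and choose the remaining k - 1 ones
# among the other n - 1 variables.
p1 = sum(comb(n - 1, k - 1) * g(k) for k in range(1, n + 1)) / Z
print(p1)  # close to 1/3, where g concentrates its mass
```

With log-space arithmetic the same collapse scales to millions of variables, the regime the abstract refers to; partial exchangeability generalizes this by conditioning on richer sufficient statistics.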
Lower Complexity Bounds for Lifted Inference
One of the big challenges in the development of probabilistic relational (or
probabilistic logical) modeling and learning frameworks is the design of
inference techniques that operate on the level of the abstract model
representation language, rather than on the level of ground, propositional
instances of the model. Numerous approaches for such "lifted inference"
techniques have been proposed. While it has been demonstrated that these
techniques will lead to significantly more efficient inference on some specific
models, there are only very recent and still quite restricted results that show
the feasibility of lifted inference on certain syntactically defined classes of
models. Lower complexity bounds that imply some limitations for the feasibility
of lifted inference on more expressive model classes were established early on
in (Jaeger 2000). However, it is not immediate that these results also apply to
the type of modeling languages that currently receive the most attention, i.e.,
weighted, quantifier-free formulas. In this paper we extend these earlier
results, and show that under the assumption that NETIME ≠ ETIME, there is no
polynomial lifted inference algorithm for knowledge bases of weighted,
quantifier- and function-free formulas. Further strengthening earlier results,
this is also shown to hold for approximate inference, and for knowledge bases
not containing the equality predicate.
Comment: To appear in Theory and Practice of Logic Programming (TPLP).
Understanding the Complexity of Lifted Inference and Asymmetric Weighted Model Counting
In this paper we study lifted inference for the Weighted First-Order Model
Counting problem (WFOMC), which counts the assignments that satisfy a given
sentence in first-order logic (FOL); it has applications in Statistical
Relational Learning (SRL) and Probabilistic Databases (PDB). We present several
results. First, we describe a lifted inference algorithm that generalizes prior
approaches in SRL and PDB. Second, we provide a novel dichotomy result for a
non-trivial fragment of FO CNF sentences, showing that for each sentence the
WFOMC problem is either in PTIME or #P-hard in the size of the input domain; we
prove that, in the first case, our algorithm solves the WFOMC problem in PTIME,
and in the second case it fails. Third, we present several properties of the
algorithm. Finally, we discuss limitations of lifted inference for symmetric
probabilistic databases (where the weights of ground literals depend only on
the relation name, and not on the constants of the domain), and prove the
impossibility of a dichotomy result for the complexity of probabilistic
inference for the entire language FOL.
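To illustrate what a lifted WFOMC computation exploits, here is a minimal sketch on an assumed toy sentence, forall x. R(x) v S(x), with symmetric per-predicate weights: because domain elements are interchangeable, the ground sum over 2^(2n) truth assignments collapses into a per-element factor raised to the n-th power.

```python
# Ground vs. lifted WFOMC for the illustrative sentence
# forall x. R(x) v S(x) in the symmetric setting: one (true, false)
# weight pair per predicate. Sentence and weights are assumptions.
import itertools

w = {"R": (1.5, 1.0), "S": (2.0, 0.5)}  # (weight if true, weight if false)
n = 4  # domain size

# Ground computation: sum the weights of all 2**(2n) truth assignments
# to the atoms R(c), S(c) that satisfy the sentence.
ground = 0.0
for bits in itertools.product((True, False), repeat=2 * n):
    R, S = bits[:n], bits[n:]
    if all(R[c] or S[c] for c in range(n)):
        weight = 1.0
        for c in range(n):
            weight *= w["R"][0] if R[c] else w["R"][1]
            weight *= w["S"][0] if S[c] else w["S"][1]
        ground += weight

# Lifted computation: each domain element independently contributes the
# same local factor (its three satisfying (R, S) combinations), so the
# weighted count is that factor to the n-th power -- polynomial in n.
wR, wR_bar = w["R"]
wS, wS_bar = w["S"]
lifted = (wR * wS + wR * wS_bar + wR_bar * wS) ** n

print(ground, lifted)  # the two totals agree (5.75 ** 4)
```

The dichotomy result says that for a non-trivial fragment of FO CNF, every sentence either admits such a polynomial-time lifted computation or is #P-hard in the domain size.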