5,549 research outputs found
Lower Complexity Bounds for Lifted Inference
One of the big challenges in the development of probabilistic relational (or
probabilistic logical) modeling and learning frameworks is the design of
inference techniques that operate on the level of the abstract model
representation language, rather than on the level of ground, propositional
instances of the model. Numerous approaches for such "lifted inference"
techniques have been proposed. While it has been demonstrated that these
techniques can lead to significantly more efficient inference on some specific
models, there are only very recent and still quite restricted results that show
the feasibility of lifted inference on certain syntactically defined classes of
models. Lower complexity bounds that imply some limitations for the feasibility
of lifted inference on more expressive model classes were established early on
in (Jaeger 2000). However, it is not immediate that these results also apply to
the type of modeling languages that currently receive the most attention, i.e.,
weighted, quantifier-free formulas. In this paper we extend these earlier
results, and show that under the assumption that NETIME ≠ ETIME, there is no
polynomial lifted inference algorithm for knowledge bases of weighted,
quantifier- and function-free formulas. Further strengthening earlier results,
this is also shown to hold for approximate inference, and for knowledge bases
not containing the equality predicate.
Comment: To appear in Theory and Practice of Logic Programming (TPLP).
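To make the grounding blow-up that motivates lifted inference concrete, here is a minimal sketch (not the paper's construction): a single weighted, quantifier- and function-free clause in the Markov-logic style, grounded over a finite domain and evaluated by naive propositional enumeration. The predicate names and the weight are illustrative; the point is that the number of worlds grows as 2^(n + n^2) for n domain elements.

```python
from itertools import product
import math

# Illustrative only: one weighted, quantifier- and function-free clause,
#   Smokes(x) & Friends(x, y) -> Smokes(y)   with weight w,
# grounded over a finite domain and evaluated by brute-force enumeration of
# all truth assignments to the ground atoms (Markov logic semantics: each
# satisfied grounding contributes a factor exp(w)).

def partition_function(domain, w=1.5):
    atoms = [("S", a) for a in domain] + \
            [("F", a, b) for a in domain for b in domain]
    pairs = [(a, b) for a in domain for b in domain]
    z = 0.0
    for bits in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, bits))
        satisfied = sum(
            1 for a, b in pairs
            if not (world[("S", a)] and world[("F", a, b)]) or world[("S", b)]
        )
        z += math.exp(w * satisfied)
    return z

# n = 2 people: 2 + 4 = 6 ground atoms, 64 worlds; n = 4 already needs
# 2**20 worlds, and the count keeps growing as 2**(n + n**2).
print(partition_function(["alice", "bob"]))
```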
Understanding the Complexity of Lifted Inference and Asymmetric Weighted Model Counting
In this paper we study lifted inference for the Weighted First-Order Model
Counting problem (WFOMC), which counts the assignments that satisfy a given
sentence in first-order logic (FOL); it has applications in Statistical
Relational Learning (SRL) and Probabilistic Databases (PDB). We present several
results. First, we describe a lifted inference algorithm that generalizes prior
approaches in SRL and PDB. Second, we provide a novel dichotomy result for a
non-trivial fragment of FO CNF sentences, showing that for each sentence the
WFOMC problem is either in PTIME or #P-hard in the size of the input domain; we
prove that, in the first case, our algorithm solves the WFOMC problem in
PTIME, and that, in the second case, it fails. Third, we present several
properties of the
algorithm. Finally, we discuss limitations of lifted inference for symmetric
probabilistic databases (where the weights of ground literals depend only on
the relation name, and not on the constants of the domain), and prove the
impossibility of a dichotomy result for the complexity of probabilistic
inference for the entire language FOL.
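As a hedged illustration of the PTIME side of such a dichotomy (a toy example, not the paper's algorithm): for the liftable sentence forall x. Smokes(x) -> Cancer(x) with symmetric weights that depend only on the predicate, the interchangeability of domain elements lets WFOMC be computed in closed form, while naive enumeration is exponential in the domain size. The weights below are arbitrary placeholders.

```python
from itertools import product

# Toy symmetric WFOMC for  forall x. Smokes(x) -> Cancer(x).
# Weights depend only on the predicate, not on the constants.
W = {"S": (1.0, 2.0), "C": (1.0, 3.0)}  # (weight if false, weight if true)

def wfomc_bruteforce(n):
    """Enumerate all 2**(2n) truth assignments to the ground atoms."""
    total = 0.0
    for s_bits in product([0, 1], repeat=n):
        for c_bits in product([0, 1], repeat=n):
            if any(s and not c for s, c in zip(s_bits, c_bits)):
                continue  # some x violates Smokes(x) -> Cancer(x)
            w = 1.0
            for s in s_bits:
                w *= W["S"][s]
            for c in c_bits:
                w *= W["C"][c]
            total += w
    return total

def wfomc_lifted(n):
    """Domain elements are interchangeable: sum once, raise to the n."""
    sF, sT = W["S"]
    cF, cT = W["C"]
    per_element = sT * cT + sF * cT + sF * cF  # the three allowed (S, C) pairs
    return per_element ** n

assert abs(wfomc_bruteforce(5) - wfomc_lifted(5)) < 1e-6
print(wfomc_lifted(100))  # polynomial in n; brute force would need 2**200 worlds
```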
Synthesizing and executing plans in Knowledge and Action Bases
We study plan synthesis for a variant of Knowledge and Action Bases (KABs). KABs have been recently introduced as a rich, dynamic framework where states are full-fledged description logic (DL) knowledge bases (KBs) whose extensional part is manipulated by actions that can introduce new objects from an infinite domain. We show that, in general, plan existence over KABs is undecidable even under severe restrictions. We then focus on the class of state-bounded KABs, for which plan existence is decidable, and we provide sound and complete plan synthesis algorithms, through a novel combination of techniques based on standard planning, DL query answering, and finite-state abstractions. All results hold for any DL with decidable query answering. We finally show that for lightweight DLs, plan synthesis can be compiled into standard ADL planning.
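A rough sketch of the final reduction step, under strong assumptions: once a state-bounded KAB has been compiled into a finite-state abstraction (the technical heart of the paper, elided here), plan existence and synthesis reduce to ordinary graph search. The states, actions, and goal test below are hypothetical placeholders, not the paper's formalism.

```python
from collections import deque

# Illustrative sketch only: plan synthesis over a *finite* transition system,
# as obtained from a state-bounded KAB via a finite-state abstraction.

def synthesize_plan(initial_state, actions, is_goal):
    """BFS over the finite abstraction; returns an action sequence or None."""
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        for name, apply_action in actions:
            successor = apply_action(state)
            if successor is not None and successor not in visited:
                visited.add(successor)
                frontier.append((successor, plan + [name]))
    return None  # no plan exists in the abstraction

# Toy usage: states are frozensets of ground facts (hypothetical names).
actions = [
    ("hire",  lambda s: s | {"employee"}),
    ("train", lambda s: s | {"certified"} if "employee" in s else None),
]
print(synthesize_plan(frozenset(), actions, lambda s: "certified" in s))
# -> ['hire', 'train']
```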
Symbolic Logic meets Machine Learning: A Brief Survey in Infinite Domains
The tension between deduction and induction is perhaps the most fundamental
issue in areas such as philosophy, cognition and artificial intelligence (AI).
The deduction camp concerns itself with questions about the expressiveness of
formal languages for capturing knowledge about the world, together with proof
systems for reasoning from such knowledge bases. The learning camp attempts to
generalize from examples that provide partial descriptions of the world. In AI,
historically, these camps have loosely divided the development of the field,
but advances in cross-over areas such as statistical relational learning,
neuro-symbolic systems, and high-level control have illustrated that the
dichotomy is not very constructive, and perhaps even ill-formed. In this
article, we survey work that provides further evidence for the connections
between logic and learning. Our narrative is structured in terms of three
strands: logic versus learning, machine learning for logic, and logic for
machine learning, but naturally, there is considerable overlap. We place an
emphasis on the following "sore" point: there is a common misconception that
logic is for discrete properties, whereas probability theory and, more
generally, machine learning are for continuous properties. We report on
results that challenge this view of the limitations of logic, and expose the
role that logic can play for learning in infinite domains.