Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
We propose a logical/mathematical framework for statistical parameter
learning of parameterized logic programs, i.e. definite clause programs
containing probabilistic facts with a parameterized distribution. It extends
the traditional least Herbrand model semantics in logic programming to distribution semantics, a possible-world semantics with a probability distribution, which is unconditionally applicable to arbitrary logic programs, including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM
algorithm, the graphical EM algorithm, that runs for a class of parameterized
logic programs representing sequential decision processes where each decision
is exclusive and independent. It runs on a new data structure called support
graphs describing the logical relationship between observations and their
explanations, and learns parameters by computing inside and outside probabilities
generalized for logic programs. The complexity analysis shows that when
combined with OLDT search for all explanations for observations, the graphical
EM algorithm, despite its generality, has the same time complexity as existing
EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside
algorithm for PCFGs, and the one for singly connected Bayesian networks that
have been developed independently in each research field. Learning experiments
with PCFGs using two corpora of moderate size indicate that the graphical EM
algorithm can significantly outperform the Inside-Outside algorithm.
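For intuition, here is a minimal Python sketch of EM under the distribution semantics, assuming the explanations of each goal have already been enumerated (e.g., by OLDT search). All names (the switch-style probabilistic facts, the toy goal) are illustrative; this naive version enumerates explanations explicitly, whereas the graphical EM algorithm shares subexplanations in a support graph to avoid exactly this enumeration.

```python
# Naive EM over enumerated explanations (illustrative sketch only).
# An explanation is a list of (switch, value) probabilistic-fact choices.
observations = {
    "g": [[("coin", "h"), ("coin", "h")],   # one derivation of goal g
          [("coin", "t"), ("coin", "t")]],  # an alternative derivation
}
theta = {("coin", "h"): 0.3, ("coin", "t"): 0.7}  # initial parameters

def prob(expl):
    p = 1.0
    for choice in expl:
        p *= theta[choice]
    return p

for _ in range(50):  # EM iterations
    counts = {k: 0.0 for k in theta}
    for expls in observations.values():
        inside = sum(prob(e) for e in expls)  # P(goal)
        for e in expls:
            w = prob(e) / inside              # posterior of this explanation
            for choice in e:
                counts[choice] += w
    # M-step: renormalise so the values of each switch sum to one
    totals = {}
    for (sw, _), c in counts.items():
        totals[sw] = totals.get(sw, 0.0) + c
    theta = {(sw, v): c / totals[sw] for (sw, v), c in counts.items()}

print(theta)
```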
Categorical Modelling of Logic Programming: Coalgebra, Functorial Semantics, String Diagrams
Logic programming (LP) is driven by the idea that logic subsumes computation. Over the
past 50 years, along with the emergence of numerous logic systems, LP has also grown into a
large family, the members of which are designed to deal with various computation scenarios.
Among them, we focus on two of the most influential quantitative variants: probabilistic
logic programming (PLP) and weighted logic programming (WLP).
In this thesis, we investigate a uniform understanding of logic programming and its
quantitative variants from the perspective of category theory. In particular, we explore both a
coalgebraic and an algebraic understanding of LP, PLP and WLP.
On the coalgebraic side, we propose a goal-directed strategy for calculating the probabilities
and weights of atoms in PLP and WLP programs, respectively. We then develop a coalgebraic
semantics for PLP and WLP, built on existing coalgebraic semantics for LP. By choosing
the appropriate functors representing probabilistic and weighted computation, such coalgebraic
semantics characterise exactly the goal-directed behaviour of PLP and WLP programs.
On the algebraic side, we define a functorial semantics of LP, PLP, and WLP, such that all
three share the same syntactic categories of string diagrams and differ in their semantic
categories according to their data/computation type. This allows for a uniform diagrammatic
expression for certain semantic constructs. Moreover, based on similar approaches to Bayesian
networks, this provides a framework to formalise the connection between PLP and Bayesian
networks. Furthermore, we prove a sound and complete axiomatization of the semantic category
for LP, in terms of string diagrams. Together with the diagrammatic presentation of the
fixed point semantics, one obtains a decidable calculus for proving the equivalence of
propositional definite logic programs.
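As a rough illustration of the goal-directed reading (our own toy encoding, not the thesis' coalgebraic formalisation): a WLP program can be seen as mapping each atom to its clauses, roughly a coalgebra from atoms to weighted sets of bodies, and the weight of an atom is computed by summing over its clauses and multiplying over body atoms. The sketch assumes an acyclic program so the recursion terminates.

```python
# Toy goal-directed evaluation for a weighted logic program (our encoding):
# each atom maps to a list of (clause weight, body atoms) pairs.
program = {
    "a": [(0.5, ["b"]), (0.5, ["c"])],  # a <-0.5- b.   a <-0.5- c.
    "b": [(1.0, [])],                   # b is a fact
    "c": [(0.2, ["b", "b"])],           # c <-0.2- b, b.
}

def weight(atom):
    # Sum over clauses for the atom; multiply over the atoms in each body.
    total = 0.0
    for w, body in program.get(atom, []):
        for sub in body:
            w *= weight(sub)
        total += w
    return total

print(weight("a"))  # 0.5*1.0 + 0.5*(0.2*1.0*1.0) = 0.6
```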
Probabilistic Programming Concepts
A multitude of different probabilistic programming languages exists today,
all extending a traditional programming language with primitives to support
modeling of complex, structured probability distributions. Each of these
languages employs its own probabilistic primitives, and comes with a particular
syntax, semantics and inference procedure. This makes it hard to understand the
underlying programming concepts and appreciate the differences between the
different languages. To obtain a better understanding of probabilistic
programming, we identify a number of core programming concepts underlying the
primitives used by various probabilistic languages, discuss the execution
mechanisms that they require and use these to position state-of-the-art
probabilistic languages and their implementation. While doing so, we focus on
probabilistic extensions of logic programming languages such as Prolog, which
have been developed for more than 20 years.
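As a concrete example of one core concept these languages share, the probabilistic fact: a query's probability is the total weight of the possible worlds (truth assignments to independent probabilistic facts) in which the query holds. The program below is a hypothetical example; the sketch enumerates worlds directly, which real systems avoid via knowledge compilation.

```python
# Possible-world reading of probabilistic facts (illustrative example):
# 0.1::burglary.  0.2::earthquake.  alarm :- burglary.  alarm :- earthquake.
from itertools import product

facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    return world["burglary"] or world["earthquake"]  # the two rules

p_alarm = 0.0
for values in product([True, False], repeat=len(facts)):
    world = dict(zip(facts, values))
    w = 1.0
    for f, holds in world.items():
        w *= facts[f] if holds else 1.0 - facts[f]
    if alarm(world):
        p_alarm += w

print(p_alarm)  # 1 - 0.9 * 0.8 = 0.28
```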
Quantitative Separation Logic - A Logic for Reasoning about Probabilistic Programs
We present quantitative separation logic (QSL). In contrast to
classical separation logic, QSL employs quantities which evaluate to
real numbers instead of predicates which evaluate to Boolean values. The
connectives of classical separation logic, separating conjunction and
separating implication, are lifted from predicates to quantities. This
extension is conservative: both connectives are backward compatible with their
classical analogs and obey the same laws, e.g. modus ponens, adjointness, etc.
Furthermore, we develop a weakest precondition calculus for quantitative
reasoning about probabilistic pointer programs in QSL. This calculus
is a conservative extension of both Reynolds' separation logic for
heap-manipulating programs and Kozen's / McIver and Morgan's weakest
preexpectations for probabilistic programs. Soundness is proven with respect to
an operational semantics based on Markov decision processes. Our calculus
preserves O'Hearn's frame rule, which enables local reasoning. We demonstrate
that our calculus enables reasoning about quantities such as the probability of
terminating with an empty heap, the probability of reaching a certain array
permutation, or the expected length of a list.
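To give a flavour of the expectation-style reasoning that QSL conservatively extends, here is a minimal sketch of a McIver/Morgan-style weakest-preexpectation transformer, with the heap omitted. The toy command encoding and all names are ours, not the paper's.

```python
# Toy weakest-preexpectation transformer (heapless sketch, our encoding).
# Commands: ("assign", var, f), ("choice", p, c1, c2), ("seq", c1, c2).
from fractions import Fraction

def wp(cmd, post):
    kind = cmd[0]
    if kind == "assign":
        _, var, f = cmd
        # evaluate the postexpectation in the state updated with var := f(state)
        return lambda s: post({**s, var: f(s)})
    if kind == "choice":
        _, p, c1, c2 = cmd
        # probabilistic choice: convex combination of the two branches
        return lambda s: p * wp(c1, post)(s) + (1 - p) * wp(c2, post)(s)
    if kind == "seq":
        _, c1, c2 = cmd
        return wp(c1, wp(c2, post))
    raise ValueError(kind)

# Probability that x = 1 after "x := 1 [1/3] x := 2":
prog = ("choice", Fraction(1, 3),
        ("assign", "x", lambda s: 1),
        ("assign", "x", lambda s: 2))
post = lambda s: 1 if s["x"] == 1 else 0   # the expectation [x = 1]
print(wp(prog, post)({"x": 0}))            # 1/3
```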
Symbolic Exact Inference for Discrete Probabilistic Programs
The computational burden of probabilistic inference remains a hurdle for
applying probabilistic programming languages to practical problems of interest.
In this work, we provide a semantic and algorithmic foundation for efficient
exact inference on discrete-valued finite-domain imperative probabilistic
programs. We leverage and generalize efficient inference procedures for
Bayesian networks, which exploit the structure of the network to decompose the
inference task, thereby avoiding full path enumeration. To do this, we first
compile probabilistic programs to a symbolic representation. Then we adapt
techniques from the probabilistic logic programming and artificial intelligence
communities in order to perform inference on the symbolic representation. We
formalize our approach, prove it sound, and experimentally validate it against
existing exact and approximate inference techniques. We show that our inference
approach is competitive with inference procedures specialized for Bayesian
networks, thereby expanding the class of probabilistic programs that can be
practically analyzed.
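As a rough sketch of the idea of exploiting structure to avoid full path enumeration (our illustration, not the paper's compilation pipeline): once a program is compiled to a formula over independent random variables, a query probability can be computed compositionally over the formula, provided the subformulas mention disjoint variables.

```python
# Compositional query probability over a compiled formula (sketch; assumes
# the subformulas of each connective mention disjoint, independent variables).
weights = {"f1": 0.6, "f2": 0.3, "f3": 0.5}  # P(var = true) for each flip

def prob(phi):
    op = phi[0]
    if op == "var":
        return weights[phi[1]]
    if op == "and":        # independent conjuncts multiply
        return prob(phi[1]) * prob(phi[2])
    if op == "or":         # independent disjuncts: inclusion-exclusion
        p, q = prob(phi[1]), prob(phi[2])
        return p + q - p * q
    if op == "not":
        return 1 - prob(phi[1])
    raise ValueError(op)

# query from a hypothetical compiled program: (f1 and f2) or f3
print(prob(("or", ("and", ("var", "f1"), ("var", "f2")), ("var", "f3"))))
# 0.18 + 0.5 - 0.09 = 0.59
```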