Learning Probabilistic Logic Programs in Continuous Domains
The field of statistical relational learning aims at unifying logic and
probability to reason and learn from data. Perhaps the most successful paradigm
in the field is probabilistic logic programming: the extension of logic
programming with stochastic primitives, which is now increasingly seen to
provide a declarative background for complex machine learning applications. While many
systems offer inference capabilities, the more significant challenge is that of
learning meaningful and interpretable symbolic representations from data. In
that regard, inductive logic programming and related techniques have paved much
of the way over the last few decades.
Unfortunately, a major limitation of this exciting landscape is that much of
the work is limited to finite-domain discrete probability distributions.
Recently, a handful of systems have been extended to represent and perform
inference with continuous distributions. The problem, of course, is that
classical solutions for inference are either restricted to well-known
parametric families (e.g., Gaussians) or resort to sampling strategies that
provide correct answers only in the limit. When it comes to learning, moreover,
inducing representations remains entirely open, other than "data-fitting"
solutions that force-fit points to the aforementioned parametric families.
In this paper, we take the first steps towards inducing probabilistic logic
programs for continuous and mixed discrete-continuous data, without being
pigeonholed into a fixed set of distribution families. Our key insight is to
leverage techniques from piecewise polynomial function approximation theory,
yielding a principled way to learn and compositionally construct density
functions. We test the framework and discuss the learned representations.
Comment: Accepted at the 2018 KR Workshop on Hybrid Reasoning and Learning.
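As an illustration of the representational idea, here is a minimal sketch of approximating a one-dimensional density with normalized piecewise polynomials. It assumes a simple histogram-based least-squares fitting strategy; the function names and fitting choices are illustrative, not the authors' actual learner.

```python
# Minimal sketch (not the paper's system): approximate a 1-D density with
# one low-degree polynomial per interval, then renormalize the pieces.
import numpy as np

def fit_piecewise_density(samples, n_pieces=4, degree=3, n_bins=60):
    """Fit a polynomial per interval to a histogram density estimate,
    then rescale so the pieces integrate to 1 overall."""
    hist, edges = np.histogram(samples, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    breaks = np.linspace(edges[0], edges[-1], n_pieces + 1)
    pieces = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        mask = (centers >= lo) & (centers < hi)
        # Least-squares polynomial fit on the bins inside this piece.
        coeffs = np.polyfit(centers[mask], hist[mask], degree)
        pieces.append((lo, hi, np.poly1d(coeffs)))
    # Each polynomial piece integrates in closed form, so normalization
    # is exact rather than estimated by sampling.
    total = sum(p.integ()(hi) - p.integ()(lo) for lo, hi, p in pieces)
    return [(lo, hi, p / total) for lo, hi, p in pieces]

def density(pieces, x):
    for lo, hi, p in pieces:
        if lo <= x < hi:
            return max(p(x), 0.0)  # clip small negative overshoots
    return 0.0

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(1, 1.0, 5000)])
pw = fit_piecewise_density(data)
print(density(pw, 1.0))  # approximate density at x = 1
```

Because each piece integrates in closed form, normalization stays exact, which is what makes a polynomial representation attractive for compositionally constructing densities compared to sampling-based estimates.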
Probabilistic Programming Concepts
A multitude of different probabilistic programming languages exists today,
all extending a traditional programming language with primitives to support
modeling of complex, structured probability distributions. Each of these
languages employs its own probabilistic primitives, and comes with a particular
syntax, semantics and inference procedure. This makes it hard to understand the
underlying programming concepts and appreciate the differences between the
different languages. To obtain a better understanding of probabilistic
programming, we identify a number of core programming concepts underlying the
primitives used by various probabilistic languages, discuss the execution
mechanisms they require, and use these concepts to position state-of-the-art
probabilistic languages and their implementations. While doing so, we focus on
probabilistic extensions of logic programming languages such as Prolog, which
have been developed for more than 20 years.
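The concepts the abstract names can be made concrete with a deliberately tiny Python sketch of the two most basic ones: a stochastic primitive (a biased coin flip) and conditioning on evidence, executed by rejection sampling. The model and all names are illustrative, not taken from any of the surveyed languages.

```python
# Illustrative sketch of two core probabilistic programming concepts:
# a stochastic primitive ("flip") and conditioning via rejection sampling.
import random

def flip(p):
    """Stochastic primitive: the probabilistic analogue of a boolean."""
    return random.random() < p

def model():
    # Generative program: a toy burglary/alarm fragment in plain Python.
    burglary = flip(0.1)
    earthquake = flip(0.2)
    alarm = (burglary and flip(0.9)) or (earthquake and flip(0.3))
    return burglary, alarm

def query(n=100_000):
    """Estimate P(burglary | alarm) by discarding runs that violate
    the evidence -- the simplest possible execution mechanism."""
    hits = total = 0
    for _ in range(n):
        burglary, alarm = model()
        if alarm:              # condition on the evidence alarm = true
            total += 1
            hits += burglary
    return hits / total

random.seed(0)
print(f"P(burglary | alarm) ~ {query():.3f}")
```

Rejection sampling is only the crudest execution mechanism; the paper's point is precisely that different languages pair such primitives with very different inference strategies.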
Bayesian Logic Programs
Bayesian networks provide an elegant formalism for representing and reasoning
about uncertainty using probability theory. They are a probabilistic extension
of propositional logic and, hence, inherit some of the limitations of
propositional logic, such as the difficulty of representing objects and
relations. We introduce a generalization of Bayesian networks, called Bayesian
logic programs, to overcome these limitations. In order to represent objects
and relations, it combines Bayesian networks with definite clause logic by
establishing a one-to-one mapping between ground atoms and random variables. We
show that Bayesian logic programs combine the advantages of both definite
clause logic and Bayesian networks. This includes the separation of
quantitative and qualitative aspects of the model. Furthermore, Bayesian logic
programs generalize both Bayesian networks and logic programs. So, many
ideas developed…
Comment: 52 pages.
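As a hypothetical illustration of the central construction, the following short Python sketch grounds a toy Bayesian-logic-program-style specification so that each ground atom becomes exactly one random variable in a network. The clause encoding, predicate names, and CPDs are assumptions for exposition, not the paper's implementation.

```python
# Hypothetical sketch: ground a small Bayesian-logic-program-style
# specification so each ground atom maps one-to-one to a random variable.

# Qualitative part: which predicates depend on which (shared variable X);
# quantitative part: a conditional probability distribution per clause.
clauses = {
    "burglary": ([], {(): 0.1}),                  # prior, no parents
    "alarm":    (["burglary"], {(True,): 0.9,     # P(alarm(X) | burglary(X))
                                (False,): 0.05}),
}
constants = ["house1", "house2"]

def ground(clauses, constants):
    """One-to-one mapping: every ground atom -> (parent atoms, CPD)."""
    network = {}
    for pred, (body, cpd) in clauses.items():
        for c in constants:
            atom = f"{pred}({c})"
            parents = [f"{b}({c})" for b in body]
            network[atom] = (parents, cpd)
    return network

bn = ground(clauses, constants)
for atom, (parents, _) in bn.items():
    print(atom, "<-", parents if parents else "(prior)")
# e.g. alarm(house1) <- ['burglary(house1)']: the ground atom is exactly
# one node (random variable) in the induced Bayesian network.
```

The separation the abstract emphasizes is visible here: the clause structure fixes the qualitative part of the model, while the CPDs carry the quantitative part and are shared across all ground instances.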
Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature
Over the past 50 years, many have debated what representation should be used
to capture the meaning of natural language utterances. Recently, new
requirements for such representations have emerged in research. Here I survey
some of the interesting representations proposed to meet these new needs.
Comment: 15 pages, no figures.