Learning Effect Axioms via Probabilistic Logic Programming
In this paper we showed how we can automatically learn the structure and parameters of probabilistic effect axioms for the Simple Event Calculus (SEC) from positive and negative example interpretations stated as short dialogue sequences in natural language. For this task we used the cplint framework, which provides libraries for structure and parameter learning and for answering queries with exact and inexact inference. The example dialogues that are used for learning the structure of the probabilistic logic program are parsed into dependency structures and then further translated into Event Calculus notation with the help of a simple ontology. The novelty of our approach is that we can not only process uncertainty in event recognition but also learn the structure of effect axioms, and combine these two sources of uncertainty to successfully answer queries in this probabilistic setting. Interestingly, our extension of the logic-based version of the SEC is completely elaboration-tolerant, in the sense that the probabilistic version fully includes the logic-based version. This makes it possible to use the probabilistic version of the SEC in the traditional way as well as when we have to deal with uncertainty in the observed world. In the future, we would like to extend the probabilistic version of the SEC to deal -- among other things -- with concurrent actions and continuous change.
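As a toy illustration of what a probabilistic effect axiom expresses (not the cplint/LPAD encoding used in the paper), the following Python sketch attaches illustrative probabilities to initiates/terminates axioms and propagates them over an action sequence; all action names, fluent names, and probabilities are hypothetical:

```python
# Toy probabilistic effect axioms: each (action, fluent) pair either
# initiates or terminates the fluent with some probability.
EFFECT_AXIOMS = {
    ("turn_on", "light_on"): ("initiates", 0.9),
    ("turn_off", "light_on"): ("terminates", 0.8),
}

def prob_holds(fluent, actions):
    """Probability that `fluent` holds after `actions`, assuming it is
    false initially and the effects of distinct actions are independent."""
    p = 0.0  # probability the fluent holds at the current time point
    for a in actions:
        kind, q = EFFECT_AXIOMS.get((a, fluent), (None, 0.0))
        if kind == "initiates":
            p = p + (1.0 - p) * q   # fluent may be switched on
        elif kind == "terminates":
            p = p * (1.0 - q)       # fluent may be switched off
    return p
```

For example, `prob_holds("light_on", ["turn_on", "turn_off"])` evaluates to 0.9 × 0.2 = 0.18: the light must have been switched on and then not switched off.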
Formal verification of higher-order probabilistic programs
Probabilistic programming provides a convenient lingua franca for writing
succinct and rigorous descriptions of probabilistic models and inference tasks.
Several probabilistic programming languages, including Anglican, Church or
Hakaru, derive their expressiveness from a powerful combination of continuous
distributions, conditioning, and higher-order functions. Although very
important for practical applications, these combined features raise fundamental
challenges for program semantics and verification. Several recent works offer
promising answers to these challenges, but their primary focus is on semantical
issues.
In this paper, we take a step further and we develop a set of program logics,
named PPV, for proving properties of programs written in an expressive
probabilistic higher-order language with continuous distributions and operators
for conditioning distributions by real-valued functions. Pleasingly, our
program logics retain the comfortable reasoning style of informal proofs thanks
to carefully selected axiomatizations of key results from probability theory.
The versatility of our logics is illustrated through the formal verification of
several intricate examples from statistics, probabilistic inference, and
machine learning. We further show the expressiveness of our logics by giving
sound embeddings of existing logics. In particular, we do this in a parametric
way by showing how the semantic idea of (unary and relational) TT-lifting can
be internalized in our logics. The soundness of PPV follows by interpreting
programs and assertions in quasi-Borel spaces (QBS), a recently proposed
variant of Borel spaces with a good structure for interpreting higher-order
probabilistic programs.
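A minimal sketch of the kind of program such languages express, written in plain Python rather than Anglican, Church, or Hakaru: a higher-order function that conditions a prior by a real-valued score function via self-normalized importance sampling. The model, the function names, and the parameters are hypothetical, not taken from the paper:

```python
import math
import random

def posterior_expectation(prior, score, f, n=100_000, seed=0):
    """Estimate E[f(x)] under the distribution obtained by reweighting
    draws from `prior` with the nonnegative real-valued function `score`
    (self-normalized importance sampling)."""
    rng = random.Random(seed)
    xs = [prior(rng) for _ in range(n)]
    ws = [score(x) for x in xs]
    total = sum(ws)
    return sum(w * f(x) for w, x in zip(ws, xs)) / total

# Standard-normal prior conditioned on an observation y = 1.0 with unit
# observation noise; the exact posterior is N(0.5, 0.5), mean 0.5.
prior = lambda rng: rng.gauss(0.0, 1.0)
score = lambda x: math.exp(-0.5 * (1.0 - x) ** 2)
posterior_mean = posterior_expectation(prior, score, lambda x: x)
```

Note that `prior`, `score`, and `f` are all first-class functions; it is exactly this combination of higher-order functions, continuous distributions, and conditioning by real-valued functions that makes semantics and verification hard.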
On Automating the Doctrine of Double Effect
The doctrine of double effect (DDE) is a long-studied ethical
principle that governs when actions that have both positive and negative
effects are to be allowed. The goal in this paper is to automate
DDE. We briefly present DDE, and use a first-order
modal logic, the deontic cognitive event calculus, as our framework to
formalize the doctrine. We present formalizations of increasingly stronger
versions of the principle, including what is known as the doctrine of triple
effect. We then use our framework to successfully simulate scenarios that have
been used to test for the presence of the principle in human subjects. Our
framework can be used in two different modes: one can use it to build
DDE-compliant autonomous systems from scratch, or one can use it to
verify that a given AI system is DDE-compliant, by applying a DDE
layer on an existing system or model. For the latter mode, the
underlying AI system can be built using any architecture (planners, deep neural
networks, Bayesian networks, knowledge-representation systems, or a hybrid); as
long as the system exposes a few parameters in its model, such verification is
possible. The role of the DDE layer here is akin to a (dynamic or
static) software verifier that examines existing software modules. Finally, we
present initial work on how one can apply our DDE layer
to the STRIPS-style planning model, and to a modified POMDP model. This is
preliminary work to illustrate the feasibility of the second mode, and we hope
that our initial sketches can be useful for other researchers in incorporating
DDE in their own frameworks.
Comment: 26th International Joint Conference on Artificial Intelligence 2017;
Special Track on AI & Autonomy
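As a rough illustration of the "verification layer" mode (not the deontic cognitive event calculus formalization itself), a DDE-style check can be sketched as a predicate over an action record; all field names, scenarios, and utility values below are hypothetical:

```python
def dde_permissible(action):
    """Check a simplified version of the double-effect conditions:
    the action is not itself forbidden, only the good effect is
    intended (the harm is merely foreseen), the harm is not the means
    to the good effect, and the good outweighs the harm."""
    return (not action["forbidden"]
            and action["intends_good"]
            and not action["intends_harm"]
            and not action["harm_is_means"]
            and action["good_utility"] > action["harm_utility"])

# Trolley switch: the harm is a foreseen side effect, not a means.
switch = {"forbidden": False, "intends_good": True, "intends_harm": False,
          "harm_is_means": False, "good_utility": 5, "harm_utility": 1}
# Footbridge push: the harm itself is the means to the good effect.
push = dict(switch, harm_is_means=True)
```

Here `dde_permissible(switch)` holds while `dde_permissible(push)` fails, mirroring the standard trolley-problem contrast; the paper's point is that such a check only needs the underlying system to expose a few model parameters (intentions, effects, utilities), whatever its architecture.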
Disintegration and Bayesian Inversion via String Diagrams
The notions of disintegration and Bayesian inversion are fundamental in
conditional probability theory. They produce channels, as conditional
probabilities, from a joint state, or from an already given channel (in the
opposite direction). These notions exist in the literature, in concrete
situations, but are presented here in abstract graphical formulations. The
resulting abstract descriptions are used for proving basic results in
conditional probability theory. The existence of disintegration and Bayesian
inversion is discussed for discrete probability, and also for measure-theoretic
probability -- via standard Borel spaces and via likelihoods. Finally, the
usefulness of disintegration and Bayesian inversion is illustrated in several
examples.
Comment: Accepted for publication in Mathematical Structures in Computer Science
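For discrete probability, Bayesian inversion can be computed directly from the joint state: form the joint, take the second marginal, and normalize. The following Python sketch does this concretely (the paper's treatment is abstract and diagrammatic; the names and the coin example are hypothetical):

```python
def bayesian_inversion(prior, channel):
    """Invert a discrete channel c : X -> Dist(Y) against a prior on X,
    returning the channel Y -> Dist(X), i.e. the disintegration of the
    joint state prior(x) * channel[x][y] along its second marginal."""
    marginal_y = {}
    for x, px in prior.items():
        for y, pyx in channel[x].items():
            marginal_y[y] = marginal_y.get(y, 0.0) + px * pyx
    return {y: {x: prior[x] * channel[x].get(y, 0.0) / py
                for x in prior}
            for y, py in marginal_y.items()}

# Pick a coin uniformly, flip it once, then ask which coin it was
# given the observed outcome.
prior = {"fair": 0.5, "biased": 0.5}
channel = {"fair": {"H": 0.5, "T": 0.5}, "biased": {"H": 0.9, "T": 0.1}}
inv = bayesian_inversion(prior, channel)
```

For instance, `inv["H"]["biased"]` is 0.45/0.7 ≈ 0.643: observing heads shifts belief toward the biased coin, exactly the channel-in-the-opposite-direction the abstract describes.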