A Labelling Framework for Probabilistic Argumentation
The combination of argumentation and probability paves the way to new
accounts of qualitative and quantitative uncertainty, thereby offering new
theoretical and applicative opportunities. Due to a variety of interests,
probabilistic argumentation is approached in the literature with different
frameworks, pertaining to structured and abstract argumentation, and with
respect to diverse types of uncertainty, in particular the uncertainty on the
credibility of the premises, the uncertainty about which arguments to consider,
and the uncertainty on the acceptance status of arguments or statements.
Towards a general framework for probabilistic argumentation, we investigate a
labelling-oriented framework encompassing a basic setting for rule-based
argumentation and its (semi-) abstract account, along with diverse types of
uncertainty. Our framework provides a systematic treatment of various kinds of
uncertainty and of their relationships and allows us to back or question
assertions from the literature.
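One of the kinds of uncertainty the abstract mentions, the uncertainty about which arguments to consider, can be sketched in the style of the constellations approach: each argument is included independently with some probability, and the probability that an argument is accepted is summed over the induced subframeworks in which a chosen labelling accepts it. The framework, probabilities, and attack relation below are purely illustrative, not taken from the paper.

```python
from itertools import combinations

# Hypothetical abstract argumentation framework: arguments with
# independent inclusion probabilities and an attack relation.
# All names and numbers are illustrative.
probs = {"a": 0.9, "b": 0.5, "c": 0.6}
attacks = {("b", "a"), ("c", "b")}

def grounded(args, atts):
    """Grounded labelling: accept arguments whose attackers are all
    rejected; reject arguments with an accepted attacker."""
    label = {x: "undec" for x in args}
    changed = True
    while changed:
        changed = False
        for x in args:
            if label[x] != "undec":
                continue
            attackers = [y for y in args if (y, x) in atts]
            if all(label[y] == "out" for y in attackers):
                label[x] = "in"
                changed = True
            elif any(label[y] == "in" for y in attackers):
                label[x] = "out"
                changed = True
    return label

def p_accepted(target):
    """Probability that `target` is labelled 'in' in the grounded
    labelling of a randomly sampled subframework."""
    total = 0.0
    names = list(probs)
    for r in range(len(names) + 1):
        for subset in combinations(names, r):
            p = 1.0
            for x in names:
                p *= probs[x] if x in subset else 1 - probs[x]
            sub = set(subset)
            if target in sub and grounded(sub, attacks)[target] == "in":
                total += p
    return total

print(round(p_accepted("a"), 4))  # 0.9 * (0.5 + 0.5 * 0.6) = 0.72
```

Here "a" is accepted whenever its attacker "b" is either absent or itself defeated by "c", which is what the summation over subframeworks captures.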
On the Implementation of the Probabilistic Logic Programming Language ProbLog
The past few years have seen a surge of interest in the field of
probabilistic logic learning and statistical relational learning. In this
endeavor, many probabilistic logics have been developed. ProbLog is a recent
probabilistic extension of Prolog motivated by the mining of large biological
networks. In ProbLog, facts can be labeled with probabilities. These facts are
treated as mutually independent random variables that indicate whether these
facts belong to a randomly sampled program. Different kinds of queries can be
posed to ProbLog programs. We introduce algorithms that allow the efficient
execution of these queries, discuss their implementation on top of the
YAP-Prolog system, and evaluate their performance in the context of large
networks of biological entities.

Comment: 28 pages; to appear in Theory and Practice of Logic Programming (TPLP).
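The distribution semantics sketched in the abstract, probabilistic facts as mutually independent random variables selecting a sampled program, can be illustrated by brute-force enumeration of possible worlds. This is only a conceptual sketch of the semantics, not the algorithms or the YAP-Prolog implementation the paper describes; the edge facts and probabilities are invented, echoing the biological-network motivation.

```python
from itertools import product

# Probabilistic edge facts, e.g. 0.8::edge(a, b). Each fact is an
# independent Boolean random variable. Numbers are illustrative.
edges = {("a", "b"): 0.8, ("b", "c"): 0.7, ("a", "c"): 0.5}

def reachable(present, src, dst):
    """Does the sampled program entail path(src, dst)?"""
    frontier, seen = {src}, {src}
    while frontier:
        nxt = {v for (u, v) in present if u in frontier and v not in seen}
        seen |= nxt
        frontier = nxt
    return dst in seen

def success_probability(src, dst):
    """Success probability of the query path(src, dst): total
    probability of the sampled programs that entail it."""
    facts = list(edges)
    total = 0.0
    for world in product([True, False], repeat=len(facts)):
        p = 1.0
        present = set()
        for fact, included in zip(facts, world):
            p *= edges[fact] if included else 1 - edges[fact]
            if included:
                present.add(fact)
        if reachable(present, src, dst):
            total += p
    return total

print(round(success_probability("a", "c"), 4))  # 1 - 0.5*(1 - 0.56) = 0.78
```

Enumeration is exponential in the number of facts, which is exactly why the paper's efficient query-answering algorithms matter for large networks.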
Modular Logic Programming: Full Compositionality and Conflict Handling for Practical Reasoning
With the recent ubiquity of data and the profusion of available knowledge, there is now a need to reason from multiple sources
of often incomplete and uncertain knowledge. Our goal was to provide a way to
combine declarative knowledge bases – represented as logic programming modules
under the answer set semantics – as well as the individual results one already inferred
from them, without having to recalculate the results for their composition and without
having to explicitly know the original logic programming encodings that produced
such results. This posed many challenges, such as how to deal with fundamental
problems of modular frameworks for logic programming, namely how to define a
general compositional semantics that allows us to compose unrestricted modules.
Building upon existing logic programming approaches, we devised a framework
capable of composing generic logic programming modules while preserving the
crucial property of compositionality, which informally means that the combinations of
models of individual modules are exactly the models of the union of the modules. We also
remain able to reason in the presence of knowledge containing incoherence, which is
informally characterised as a logic program that has no answer set due
to a cyclic dependency of an atom on its default negation. In this thesis we also
discuss how the same approach can be extended to deal with probabilistic knowledge
in a modular and compositional way.
We depart from the Modular Logic Programming approach of Oikarinen &
Janhunen (2008) and Janhunen et al. (2009), which achieved a restricted form of compositionality
for answer set programming modules. Aiming to generalise this
framework of modular logic programming, we start by lifting the restrictive conditions
that were originally imposed, and use alternative ways of combining what we call
Generalised Modular Logic Programs. We then deal with conflicts arising
in generalised modular logic programming and provide modular justifications and
debugging for this setting, where justification
models answer the question: Why is a given interpretation indeed an answer set?
and debugging models answer the question: Why is a given interpretation not an
answer set?
In summary, our research deals with the problem of formally devising a
generic modular logic programming framework. We provide: operators for combining
arbitrary modular logic programs together with a compositional semantics; a
characterisation of the conflicts that occur when composing access control policies, which
generalises to our context of generalised modular logic programming, together with ways of
dealing with them, syntactically, through a unified account of justification and debugging
of logic programs, and, semantically, through a new semantics capable of dealing
with incoherence; and an extension of modular logic programming
to a probabilistic setting. These goals are already covered by published work. A prototype tool implementing the unified justification and debugging approach is
available for download from http://cptkirk.sourceforge.net
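The compositionality property described above, that models of the composed program arise from compatible models of the individual modules, can be sketched in miniature. This is not the thesis's framework (which handles unrestricted modules and the conditions under which the module theorem holds); it is a toy join of pre-computed answer sets that agree on shared atoms, with invented modules.

```python
# Toy illustration of compositionality: combine the answer sets two
# modules already produced, without recomputing from the union of
# their encodings. Modules and atoms are invented for illustration.
def compose(models1, atoms1, models2, atoms2):
    """Join models of two modules that agree on the shared atoms."""
    shared = atoms1 & atoms2
    joined = []
    for m1 in models1:
        for m2 in models2:
            if m1 & shared == m2 & shared:  # compatible on the interface
                joined.append(m1 | m2)
    return joined

# Module P1 over atoms {a, b}: answer sets {a} and {b} (a choice).
# Module P2 over atoms {b, c}: single answer set {b, c} (b with c :- b).
combined = compose([{"a"}, {"b"}], {"a", "b"}, [{"b", "c"}], {"b", "c"})
print(combined)  # only {b} agrees with {b, c} on the shared atom b
```

In the restricted setting of Oikarinen & Janhunen, such a join is sound only under conditions like disjoint output atoms and no positive recursion across modules; lifting those restrictions is precisely what the thesis pursues.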
PSPACE Bounds for Rank-1 Modal Logics
For lack of general algorithmic methods that apply to wide classes of logics,
establishing a complexity bound for a given modal logic is often a laborious
task. The present work is a step towards a general theory of the complexity of
modal logics. Our main result is that all rank-1 logics enjoy a shallow model
property and thus are, under mild assumptions on the format of their
axiomatisation, in PSPACE. This leads to a unified derivation of tight
PSPACE-bounds for a number of logics including K, KD, coalition logic, graded
modal logic, majority logic, and probabilistic modal logic. Our generic
algorithm moreover finds tableau proofs that witness pleasant proof-theoretic
properties including a weak subformula property. This generality is made
possible by a coalgebraic semantics, which conveniently abstracts from the
details of a given model class and thus allows covering a broad range of logics
in a uniform way.
Probabilistic abstract interpretation: From trace semantics to DTMC’s and linear regression
In order to perform probabilistic program analysis we need to consider probabilistic languages or languages with a probabilistic semantics, as well as a corresponding framework for the analysis which is able to accommodate probabilistic properties and properties of probabilistic computations. To this purpose we investigate the relationship between three different types of probabilistic semantics for a core imperative language, namely Kozen’s Fixpoint Semantics, our Linear Operator Semantics and probabilistic versions of Maximal Trace Semantics. We also discuss the relationship between Probabilistic Abstract Interpretation (PAI) and statistical or linear regression analysis. While classical Abstract Interpretation, based on Galois connections, allows only for worst-case analyses, the use of the Moore-Penrose pseudoinverse in PAI opens the possibility of exploiting statistical and noisy observations in order to analyse and identify various system properties.
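The role of the Moore-Penrose pseudoinverse in PAI can be made concrete with a small linear-algebra sketch: a concrete program step is a stochastic matrix T on distributions over states, an abstraction is a matrix A, and the induced abstract operator is A⁺ T A. The program and abstraction below are invented for illustration and are not an example from the paper.

```python
import numpy as np

# Concrete program: x := (x + 1) mod 4, as a stochastic matrix T
# acting on distributions over states {0, 1, 2, 3} (row vectors).
T = np.zeros((4, 4))
for i in range(4):
    T[i, (i + 1) % 4] = 1.0

# Abstraction matrix A: map each concrete state to its parity,
# an abstract domain {even, odd}.
A = np.zeros((4, 2))
for i in range(4):
    A[i, i % 2] = 1.0

# PAI-style abstract operator: A^+ T A, with A^+ the
# Moore-Penrose pseudoinverse of the abstraction matrix.
T_abs = np.linalg.pinv(A) @ T @ A
print(T_abs)  # even -> odd and odd -> even, each with probability 1
```

Because A⁺ averages rather than over-approximates, the abstract operator is a least-squares best fit of the concrete dynamics, which is what connects PAI to the linear-regression view discussed in the abstract.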
An expectation transformer approach to predicate abstraction and data independence for probabilistic programs
In this paper we revisit the well-known technique of predicate abstraction to
characterise performance attributes of system models incorporating probability.
We recast the theory using expectation transformers, and identify transformer
properties which correspond to abstractions that nevertheless yield exact bounds
on the performance of infinite-state probabilistic systems. In addition, we
extend the developed technique to the special case of "data independent"
programs incorporating probability. Finally, we demonstrate the subtlety of
the extended technique by using the PRISM model checking tool to analyse an
infinite-state protocol, obtaining exact bounds on its performance.