Measurable Cones and Stable, Measurable Functions
We define a notion of stable and measurable map between cones endowed with
measurability tests and show that it forms a cpo-enriched cartesian closed
category. This category gives a denotational model of an extension of PCF
supporting the main primitives of probabilistic functional programming, such as
continuous and discrete probability distributions, sampling, conditioning, and
full recursion. We prove the soundness and adequacy of this model with respect
to a call-by-name operational semantics and give some examples of its
denotations.
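The primitives named above (sampling and conditioning) can be illustrated independently of the paper's semantics. The sketch below is not the paper's PCF extension; it mimics discrete sampling and conditioning via rejection sampling in plain Python, with all names (`infer`, `model`) illustrative.

```python
# Illustrative sketch (not the paper's calculus): sampling plus
# conditioning, realized by rejection sampling over a discrete model.
import random

def infer(model, n=10000, seed=0):
    """Estimate the posterior mean of the model's query by rejection."""
    rng = random.Random(seed)
    results = [r for r in (model(rng) for _ in range(n)) if r is not None]
    return sum(results) / len(results)

def model(rng):
    x = rng.random() < 0.5   # sample: fair coin
    y = rng.random() < 0.5   # sample: second fair coin
    if not (x or y):         # condition: observe that x or y holds
        return None          # rejected sample
    return x                 # query: probability of x given the observation

print(infer(model))          # close to P(x | x or y) = 2/3
```

Rejection sampling is only one way to give conditioning an operational reading; the paper instead interprets these primitives denotationally in the category of measurable cones.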
Probabilistic Argumentation with Epistemic Extensions and Incomplete Information
Abstract argumentation offers an appealing way of representing and evaluating
arguments and counterarguments. This approach can be enhanced by a probability
assignment to each argument. There are various interpretations that can be
ascribed to this assignment. In this paper, we regard the assignment as
denoting the belief that an agent has that an argument is justifiable, i.e.,
that both the premises of the argument and the derivation of the claim of the
argument from its premises are valid. This leads to the notion of an epistemic
extension which is the subset of the arguments in the graph that are believed
to some degree (which we define as the arguments assigned a probability
greater than 0.5). We consider various constraints on the
probability assignment. Some constraints correspond to standard notions of
extensions, such as grounded or stable extensions, and some constraints give us
new kinds of extensions.
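The threshold reading of an epistemic extension described above is directly computable. A minimal sketch, with argument names and probabilities purely illustrative:

```python
# Hypothetical sketch: an epistemic extension is the set of arguments
# whose assigned probability (degree of belief that the argument is
# justifiable) exceeds 0.5, per the definition in the abstract.

def epistemic_extension(prob, threshold=0.5):
    """Return the arguments believed to some degree under `prob`."""
    return {arg for arg, p in prob.items() if p > threshold}

prob = {"a": 0.9, "b": 0.4, "c": 0.6}   # illustrative assignment
print(sorted(epistemic_extension(prob)))  # ['a', 'c']
```

The constraints the paper studies would then restrict which assignments `prob` are admissible (e.g. so that the resulting extension is grounded or stable); those constraints are not modeled here.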
A Labelling Framework for Probabilistic Argumentation
The combination of argumentation and probability paves the way to new
accounts of qualitative and quantitative uncertainty, thereby offering new
theoretical and applicative opportunities. Due to a variety of interests,
probabilistic argumentation is approached in the literature with different
frameworks, pertaining to structured and abstract argumentation, and with
respect to diverse types of uncertainty, in particular the uncertainty on the
credibility of the premises, the uncertainty about which arguments to consider,
and the uncertainty on the acceptance status of arguments or statements.
Towards a general framework for probabilistic argumentation, we investigate a
labelling-oriented framework encompassing a basic setting for rule-based
argumentation and its (semi-) abstract account, along with diverse types of
uncertainty. Our framework provides a systematic treatment of various kinds of
uncertainty and of their relationships and allows us to back or question
assertions from the literature.
Stable Model Counting and Its Application in Probabilistic Logic Programming
Model counting is the problem of computing the number of models that satisfy
a given propositional theory. It has recently been applied to solving inference
tasks in probabilistic logic programming, where the goal is to compute the
probability that given queries are true, given a set of mutually independent
random variables, a model (a logic program) and some evidence. The core of
solving this inference task involves translating the logic program to a
propositional theory and using a model counter. In this paper, we show that for
some problems that involve inductive definitions like reachability in a graph,
the translation of logic programs to SAT can be expensive for the purpose of
solving inference tasks. For such problems, direct implementation of stable
model semantics allows for more efficient solving. We present two
implementation techniques, based on unfounded set detection, that extend a
propositional model counter to a stable model counter. Our experiments show
that for particular problems, our approach can outperform a state-of-the-art
probabilistic logic programming solver by several orders of magnitude in terms
of running time and space requirements, and can solve instances of
significantly larger sizes on which the current solver runs out of time or
memory.
Comment: Accepted in AAAI, 201
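The model counting problem the abstract builds on (#SAT) can be stated in a few lines. This is a naive enumeration sketch for intuition only; it is neither the paper's unfounded-set technique nor a practical counter, and the clause encoding (lists of signed literals) is an illustrative convention.

```python
# Naive propositional model counter: enumerate every assignment over the
# variables and count those satisfying all clauses. Each clause is a list
# of (polarity, variable) pairs; polarity True means a positive literal.
from itertools import product

def count_models(clauses, variables):
    count = 0
    for values in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, values))
        if all(any(assign[v] if pos else not assign[v] for pos, v in clause)
               for clause in clauses):
            count += 1
    return count

# (x or y) and (not x or y): satisfied by {x=F,y=T} and {x=T,y=T}
clauses = [[(True, "x"), (True, "y")], [(False, "x"), (True, "y")]]
print(count_models(clauses, ["x", "y"]))  # 2
```

A stable model counter, as the paper describes, would additionally reject classical models containing unfounded sets, which is exactly what this brute-force propositional counter cannot do.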