Strict coherence on many-valued events
We investigate the property of strict coherence in the setting of many-valued logics. Our main results read as follows: (i) a map from an MV-algebra to [0,1] is strictly coherent if and only if it satisfies Carnap's regularity condition, and (ii) a [0,1]-valued book on a finite set of many-valued events is strictly coherent if and only if it extends to a faithful state of an MV-algebra that contains them. Remarkably, this latter result allows us to relax the rather demanding conditions for the Shimony-Kemeny characterisation of strict coherence put forward in the mid 1950s in this Journal.
Probabilistic entailment in the setting of coherence: The role of quasi conjunction and inclusion relation
In this paper, by adopting a coherence-based probabilistic approach to
default reasoning, we focus on the logical operation of quasi conjunction
and the Goodman-Nguyen inclusion relation for conditional events. We recall
that quasi conjunction is a basic notion for defining the consistency of
conditional knowledge bases. Deepening some results given in a previous
paper, we show that, given any finite family of conditional events F and any
nonempty subset S of F, the family F p-entails the quasi conjunction C(S);
then, given any conditional event E|H, we analyze the equivalence between
p-entailment of E|H from F and p-entailment of E|H from C(S), where S is some
nonempty subset of F. We also illustrate some alternative theorems related to
p-consistency and p-entailment. Finally, we deepen the study of the connections
between the notions of p-entailment and the inclusion relation by introducing,
for a pair (F, E|H), the (possibly empty) class K of the subsets S of F such
that C(S) implies E|H. We show that the class K satisfies many properties; in
particular, K is additive and has a greatest element, which can be determined
by applying a suitable algorithm.
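The quasi conjunction C(S) used above has a simple three-valued truth table: a conditional event E|H is true on EH, false on (not-E)H, and void on not-H. A minimal sketch in Python; the value names and the two-argument form of the operation are our own illustration, not the paper's notation:

```python
# Three-valued states of a conditional event E|H (de Finetti semantics):
# true when E and H both occur, false when H occurs but E does not,
# void when H does not occur.
T, F, V = "true", "false", "void"

def quasi_conjunction(a, b):
    """Quasi conjunction of two conditional events, given their
    three-valued states: false if some conjunct is false, void if all
    conjuncts are void, and true otherwise."""
    if F in (a, b):
        return F
    if a == V and b == V:
        return V
    return T

# Unlike plain conjunction, a true conjunct absorbs a void one:
print(quasi_conjunction(T, V))  # true
print(quasi_conjunction(F, V))  # false
print(quasi_conjunction(V, V))  # void
```

This absorption of void conjuncts is what makes quasi conjunction suitable for defining consistency of conditional knowledge bases.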
Updating beliefs with incomplete observations
Currently, there is renewed interest in the problem, raised by Shafer in
1985, of updating probabilities when observations are incomplete. This is a
fundamental problem in general, and of particular interest for Bayesian
networks. Recently, Grunwald and Halpern have shown that commonly used updating
strategies fail in this case, except under very special assumptions. In this
paper we propose a new method for updating probabilities with incomplete
observations. Our approach is deliberately conservative: we make no assumptions
about the so-called incompleteness mechanism that associates complete with
incomplete observations. We model our ignorance about this mechanism by a
vacuous lower prevision, a tool from the theory of imprecise probabilities, and
we use only coherence arguments to turn prior into posterior probabilities. In
general, this new approach to updating produces lower and upper posterior
probabilities and expectations, as well as partially determinate decisions.
This is a logical consequence of the existing ignorance about the
incompleteness mechanism. We apply the new approach to the problem of
classification of new evidence in probabilistic expert systems, where it leads
to a new, so-called conservative updating rule. In the special case of Bayesian
networks constructed using expert knowledge, we provide an exact algorithm for
classification based on our updating rule, which has linear-time complexity for
a class of networks wider than polytrees. This result is then extended to the
more general framework of credal networks, where computations are often much
harder than with Bayesian nets. Using an example, we show that our rule appears
to provide a solid basis for reliable updating with incomplete observations,
when no strong assumptions about the incompleteness mechanism are justified.
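As a toy illustration of the flavour of such conservative updating: when nothing is assumed about the incompleteness mechanism, the posterior for an event becomes the envelope of the ordinary posteriors over all complete observations compatible with the incomplete one. The function below is our simplified sketch of that idea, not the paper's algorithm:

```python
from fractions import Fraction

def conservative_posterior(joint, event, observation):
    """Lower/upper posterior probability of `event` after an incomplete
    observation, modelled as the set of complete observations we cannot
    tell apart.  With a vacuous model of the incompleteness mechanism,
    this reduces to the envelope of the ordinary posteriors over the
    compatible complete observations.
    `joint[(c, x)]` is the prior probability of class c and observation x."""
    posts = []
    for x in observation:
        norm = sum(p for (c, xx), p in joint.items() if xx == x)
        posts.append(sum(p for (c, xx), p in joint.items()
                         if xx == x and c in event) / norm)
    return min(posts), max(posts)

# Toy joint over a binary class and two complete observations:
joint = {("c0", "x0"): Fraction(3, 10), ("c1", "x0"): Fraction(1, 10),
         ("c0", "x1"): Fraction(1, 10), ("c1", "x1"): Fraction(5, 10)}
lo, hi = conservative_posterior(joint, {"c0"}, {"x0", "x1"})
print(lo, hi)  # 1/6 3/4
```

The wide interval [1/6, 3/4] is exactly the "partially determinate" answer the abstract refers to: ignorance about the mechanism survives into the posterior.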
From imprecise probability assessments to conditional probabilities with quasi additive classes of conditioning events
In this paper, starting from a generalized coherent (i.e. avoiding uniform
loss) interval-valued probability assessment on a finite family of conditional
events, we construct conditional probabilities with quasi additive classes of
conditioning events which are consistent with the given initial assessment.
Quasi additivity assures coherence for the obtained conditional probabilities.
In order to reach our goal we define a finite sequence of conditional
probabilities by exploiting some theoretical results on g-coherence. In
particular, we use solutions of a finite sequence of linear systems.Comment: Appears in Proceedings of the Twenty-Eighth Conference on Uncertainty
in Artificial Intelligence (UAI2012
Lexicographic choice functions
We investigate a generalisation of the coherent choice functions considered
by Seidenfeld et al. (2010), by sticking to the convexity axiom but imposing no
Archimedeanity condition. We define our choice functions on vector spaces of
options, which allows us to incorporate as special cases both Seidenfeld et
al.'s (2010) choice functions on horse lotteries and sets of desirable gambles
(Quaeghebeur, 2014), and to investigate their connections. We show that choice
functions based on sets of desirable options (gambles) satisfy Seidenfeld's
convexity axiom only for very particular types of sets of desirable options,
which are in a one-to-one relationship with the lexicographic probabilities. We
call them lexicographic choice functions. Finally, we prove that these choice
functions can be used to determine the most conservative convex choice function
associated with a given binary relation.
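For intuition, a lexicographic probability system is an ordered list of probability mass functions that ranks gambles by comparing their expectation vectors lexicographically: a later pmf only breaks ties left by the earlier ones. A small sketch under that reading (the function names and the example are ours, not the authors' axiomatic construction):

```python
def lex_expectations(option, pmfs):
    """Vector of expectations of a gamble under an ordered list of
    probability mass functions (a lexicographic probability system)."""
    return [sum(p[w] * option[w] for w in p) for p in pmfs]

def lex_prefers(f, g, pmfs):
    """f is lexicographically preferred to g if the first pmf whose
    expectations differ favours f (Python compares lists this way)."""
    return lex_expectations(f, pmfs) > lex_expectations(g, pmfs)

# Two gambles with equal expectation under the primary pmf;
# the secondary pmf breaks the tie.
p1 = {"a": 0.5, "b": 0.5, "c": 0.0}
p2 = {"a": 0.0, "b": 0.0, "c": 1.0}
f = {"a": 1.0, "b": -1.0, "c": 1.0}
g = {"a": -1.0, "b": 1.0, "c": 0.0}
print(lex_prefers(f, g, [p1, p2]))  # True
```

Dropping the Archimedean condition is what lets such infinitely-finer tie-breaking layers appear in the representation.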
Precise Propagation of Upper and Lower Probability Bounds in System P
In this paper we consider the inference rules of System P in the framework of
coherent imprecise probabilistic assessments. Exploiting our algorithms, we
propagate the lower and upper probability bounds associated with the
conditional assertions of a given knowledge base, automatically obtaining the
precise probability bounds for the derived conclusions of the inference rules.
This allows a more flexible and realistic use of System P in default reasoning
and provides an exact illustration of the degradation of the inference rules
when interpreted in probabilistic terms. We also examine the disjunctive Weak
Rational Monotony of System P+ proposed by Adams in his extended probability
logic. (8 pages; 8th Intl. Workshop on Non-Monotonic Reasoning, NMR'2000,
April 9-11, Breckenridge, Colorado.)
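A single step of such propagation can be illustrated with the Fréchet bounds for conjunction: given interval bounds on P(B|A) and P(C|A), the tightest derivable bounds on P(BC|A) are computed below. This is only one inference step of our own choosing; the paper's algorithms cover the full set of System P rules:

```python
def and_rule_bounds(x, y):
    """Given P(B|A) in [x1, x2] and P(C|A) in [y1, y2], return the
    tightest bounds on P(B and C | A): the Frechet bounds
    [max(0, x1 + y1 - 1), min(x2, y2)]."""
    (x1, x2), (y1, y2) = x, y
    return max(0.0, x1 + y1 - 1.0), min(x2, y2)

# Two premises, each known to hold with probability at least 0.9:
print(and_rule_bounds((0.9, 1.0), (0.9, 1.0)))  # (0.8, 1.0)
```

The drop of the lower bound from 0.9 to 0.8 is a concrete instance of the "degradation of the inference rules" the abstract mentions.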
Decision-Making in the Context of Imprecise Probabilistic Beliefs
Coherent imprecise probabilistic beliefs are modelled as incomplete comparative likelihood relations admitting a multiple-prior representation. Under a structural assumption of Equidivisibility, we provide an axiomatization of such relations and show uniqueness of the representation. In the second part of the paper, we formulate a behaviorally general axiom relating preferences and probabilistic beliefs which implies that preferences over unambiguous acts are probabilistically sophisticated and which entails representability of preferences over Savage acts in an Anscombe-Aumann-style framework. The motivation for an explicit and separate axiomatization of beliefs for the study of decision-making under ambiguity is discussed in some detail.
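The multiple-prior representation of a comparative likelihood relation is easy to state computationally: A is judged at least as likely as B exactly when every prior in the representing set agrees, and the relation is incomplete whenever the priors disagree. A hedged sketch (the example priors and names are ours):

```python
def likelier(A, B, priors):
    """A is weakly more likely than B under a multiple-prior
    representation: every prior in the set assigns A at least as much
    probability as B.  The relation is incomplete: for some pairs,
    neither direction holds."""
    def pr(E, p):
        return sum(p[w] for w in E)
    return all(pr(A, p) >= pr(B, p) for p in priors)

priors = [{"a": 0.5, "b": 0.3, "c": 0.2},
          {"a": 0.3, "b": 0.5, "c": 0.2}]

print(likelier({"a", "b"}, {"c"}, priors))  # True: all priors agree
print(likelier({"a"}, {"b"}, priors))       # False: the priors disagree
print(likelier({"b"}, {"a"}, priors))       # False: so the pair is incomparable
```

The incomparable pair is precisely the imprecision the representation is meant to capture.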
2-coherent and 2-convex Conditional Lower Previsions
In this paper we explore relaxations of (Williams) coherent and convex
conditional previsions that form the families of n-coherent and n-convex
conditional previsions, at the varying of n. We investigate which such
previsions are the most general one may reasonably consider, suggesting
(centered) 2-convex or, if positive homogeneity and conjugacy is needed,
2-coherent lower previsions. Basic properties of these previsions are
studied. In particular, we prove that they satisfy the Generalized Bayes Rule
and always have a 2-convex or, respectively, 2-coherent natural extension.
The role of these extensions is analogous to that of the natural extension for
coherent lower previsions. On the contrary, n-convex and n-coherent
previsions with n >= 3 either are convex or coherent themselves or have no
extension of the same type on large enough sets. Among the uncertainty concepts
that can be modelled by 2-convexity, we discuss generalizations of capacities
and niveloids to a conditional framework and show that the well-known risk
measure Value-at-Risk only guarantees to be centered 2-convex. In the final
part, we determine the rationality requirements of 2-convexity and
2-coherence from a desirability perspective, emphasising how they weaken
those of (Williams) coherence. (This is the authors' version of a work
accepted for publication in the International Journal of Approximate
Reasoning, vol. 77, October 2016, pages 66-86, doi:10.1016/j.ijar.2016.06.003,
http://www.sciencedirect.com/science/article/pii/S0888613X1630079)
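For the finite case where a lower prevision is the lower envelope of finitely many probability mass functions, the Generalized Bayes Rule determines the conditional value P(X|B) as the unique mu with lower prevision of I_B(X - mu) equal to zero. The bisection solver below is our sketch under that assumption (it requires every pmf to give B positive probability):

```python
def lower_prev(gamble, pmfs):
    """Lower prevision as the lower envelope of expectations."""
    return min(sum(p[w] * gamble[w] for w in p) for p in pmfs)

def gbr(gamble, B, pmfs, tol=1e-12):
    """Solve the Generalized Bayes Rule for P(X|B): the unique mu with
    lower_prev(I_B * (X - mu)) = 0, found by bisection.  The map
    mu -> lower_prev(...) is decreasing, so bisection applies."""
    lo = min(gamble[w] for w in B)
    hi = max(gamble[w] for w in B)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        val = lower_prev({w: (gamble[w] - mu if w in B else 0.0)
                          for w in pmfs[0]}, pmfs)
        if val > 0:
            lo = mu
        else:
            hi = mu
    return (lo + hi) / 2

p1 = {"a": 0.4, "b": 0.4, "c": 0.2}
p2 = {"a": 0.2, "b": 0.6, "c": 0.2}
X = {"a": 1.0, "b": 0.0, "c": 5.0}
# Conditional expectations of X given B = {a, b} are 0.5 (p1) and 0.25 (p2);
# the GBR solution is their lower envelope, about 0.25.
print(gbr(X, {"a", "b"}, [p1, p2]))
```

The same fixed-point equation is what the abstract says still holds for the weaker 2-convex and 2-coherent previsions.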