Belief Revision with Uncertain Inputs in the Possibilistic Setting
This paper discusses belief revision under uncertain inputs in the framework
of possibility theory. Revision can be based on two possible definitions of the
conditioning operation, one based on min operator which requires a purely
ordinal scale only, and another based on product, for which a richer structure
is needed, and which is a particular case of Dempster's rule of conditioning.
Besides, revision under uncertain inputs can be understood in two different
ways depending on whether the input is viewed, or not, as a constraint to
enforce. Moreover, it is shown that M.A. Williams' transmutations, originally
defined in the setting of Spohn's functions, can be captured in this framework,
as well as Boutilier's natural revision.
Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996).
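The two conditioning operations the abstract contrasts can be sketched on a small finite example. The code below is an illustrative Python sketch, not the paper's own implementation; the function names `cond_min` and `cond_prod` and the toy distribution are assumptions, and a possibility distribution is represented as a dict from worlds to degrees in [0, 1], with the input phi given as the set of worlds it allows:

```python
def cond_min(pi, phi):
    """Min-based (purely ordinal) conditioning: the best phi-worlds are
    raised to 1, other phi-worlds keep their degree, non-phi-worlds drop to 0."""
    top = max(pi[w] for w in phi)  # Pi(phi), the possibility of the input
    return {w: (1.0 if w in phi and pi[w] == top else
                pi[w] if w in phi else 0.0)
            for w in pi}

def cond_prod(pi, phi):
    """Product-based conditioning (a special case of Dempster's rule):
    phi-worlds are rescaled by Pi(phi), non-phi-worlds drop to 0.
    Requires a numerical (not merely ordinal) scale."""
    top = max(pi[w] for w in phi)
    return {w: (pi[w] / top if w in phi else 0.0) for w in pi}

pi = {'a': 1.0, 'b': 0.6, 'c': 0.3}
phi = {'b', 'c'}
print(cond_min(pi, phi))   # {'a': 0.0, 'b': 1.0, 'c': 0.3}
print(cond_prod(pi, phi))  # {'a': 0.0, 'b': 1.0, 'c': 0.5}
```

Note how min-based conditioning leaves the degree of 'c' untouched (only the ordering matters), while product-based conditioning rescales it, exploiting the richer numerical structure.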
Numerical Representations of Acceptance
Accepting a proposition means that our confidence in this proposition is
strictly greater than the confidence in its negation. This paper investigates
the subclass of uncertainty measures, expressing confidence, that capture the
idea of acceptance, what we call acceptance functions. Due to the monotonicity
property of confidence measures, the acceptance of a proposition entails the
acceptance of any of its logical consequences. In agreement with the idea that
a belief set (in the sense of Gardenfors) must be closed under logical
consequence, it is also required that the separate acceptance of two
propositions entail the acceptance of their conjunction. Necessity (and
possibility) measures agree with this view of acceptance while probability and
belief functions generally do not. General properties of acceptance functions
are established. The motivation behind this work is the investigation of a
setting for belief revision more general than the one proposed by Alchourron,
Gardenfors and Makinson, in connection with the notion of conditioning.
Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI 1995).
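The contrast the abstract draws can be checked concretely: for necessity measures, accepting two propositions entails accepting their conjunction, whereas probabilities can accept two propositions while rejecting their conjunction (a lottery-style counterexample). The sketch below is illustrative only; the toy distributions and helper names are assumptions, not material from the paper:

```python
import itertools

# Worlds are truth assignments to two propositions A and B.
worlds = list(itertools.product([True, False], repeat=2))  # (A, B)

def necessity(pi, event):
    """N(E) = 1 - (max possibility of the worlds outside E)."""
    return 1.0 - max(pi[w] for w in worlds if not event(w))

def accepted(conf, event):
    """Acceptance: confidence in E strictly exceeds confidence in not-E."""
    return conf(event) > conf(lambda w: not event(w))

# A (normalized) possibility distribution over the four worlds.
pi = {(True, True): 1.0, (True, False): 0.4,
      (False, True): 0.4, (False, False): 0.2}
N = lambda e: necessity(pi, e)
A = lambda w: w[0]
B = lambda w: w[1]
AandB = lambda w: w[0] and w[1]
# Necessity: accepting A and accepting B entails accepting their conjunction.
assert accepted(N, A) and accepted(N, B) and accepted(N, AandB)

# Probability: acceptance is not closed under conjunction.
P = {(True, True): 0.4, (True, False): 0.3,
     (False, True): 0.3, (False, False): 0.0}
prob = lambda e: sum(p for w, p in P.items() if e(w))
assert accepted(prob, A) and accepted(prob, B)  # P(A) = P(B) = 0.7
assert not accepted(prob, AandB)                # P(A and B) = 0.4 < 0.6
```

The probabilistic counterexample is exactly the lottery-paradox pattern alluded to by the abstract's remark that probability and belief functions generally fail the conjunction requirement.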
Estimations of expectedness and potential surprise in possibility theory
This note investigates how various ideas of 'expectedness' can be captured in the framework of possibility theory. Particularly, we are interested in trying to introduce estimates of the kind of lack of surprise expressed by people when saying 'I would not be surprised that...' before an event takes place, or by saying 'I knew it' after its realization. In possibility theory, a possibility distribution is supposed to model the relative levels of mutually exclusive alternatives in a set, or equivalently, the alternatives are assumed to be rank-ordered according to their level of possibility to take place. Four basic set-functions associated with a possibility distribution, including standard possibility and necessity measures, are discussed from the point of view of what they estimate when applied to potential events. Extensions of these estimates based on the notions of Q-projection or OWA operators are proposed when only significant parts of the possibility distribution are retained in the evaluation. The case of partially-known possibility distributions is also considered. Some potential applications are outlined.
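Only possibility and necessity are named explicitly above; as a hedged sketch, one standard reading of the four set-functions attached to a possibility distribution adds guaranteed possibility and its dual. The pairing of Delta and Nabla with this note is an assumption on my part, as are the function names and the toy weather distribution:

```python
def Pi(pi, A):
    """Possibility: A is unsurprising insofar as some A-world is highly possible."""
    return max(pi[w] for w in A)

def N(pi, A):
    """Necessity: A is expected insofar as every world outside A is impossible."""
    return 1.0 - max((pi[w] for w in pi if w not in A), default=0.0)

def Delta(pi, A):
    """Guaranteed possibility: every A-world is at least this possible."""
    return min(pi[w] for w in A)

def Nabla(pi, A):
    """Dual of Delta: 1 minus the guaranteed possibility of the complement."""
    return 1.0 - min((pi[w] for w in pi if w not in A), default=1.0)

pi = {'rain': 1.0, 'drizzle': 0.7, 'snow': 0.2}
wet = {'rain', 'drizzle'}
print(Pi(pi, wet), N(pi, wet), Delta(pi, wet), Nabla(pi, wet))
```

Before 'wet' is realized, a high N(wet) corresponds to 'I would not be surprised that...'; after the fact, a high Pi(wet) is the weaker 'I knew it could happen'.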
Coping with the Limitations of Rational Inference in the Framework of Possibility Theory
Possibility theory offers a framework where both Lehmann's "preferential
inference" and the more productive (but less cautious) "rational closure
inference" can be represented. However, there are situations where the second
inference does not provide expected results either because it cannot produce
them, or even provide counter-intuitive conclusions. This state of facts is not
due to the principle of selecting a unique ordering of interpretations (which
can be encoded by one possibility distribution), but rather to the absence of
constraints expressing pieces of knowledge we have implicitly in mind. It is
advocated in this paper that constraints induced by independence information
can help find the right ordering of interpretations. In particular,
independence constraints can be systematically assumed with respect to formulas
composed of literals which do not appear in the conditional knowledge base, or
for default rules with respect to situations which are "normal" according to
the other default rules in the base. The notion of independence which is used
can be easily expressed in the qualitative setting of possibility theory.
Moreover, when a counter-intuitive plausible conclusion of a set of defaults
is in its rational closure but not in its preferential closure, it is always
possible to repair the set of defaults so as to produce the desired conclusion.
Comment: Appears in Proceedings of the Twelfth Conference on Uncertainty in Artificial Intelligence (UAI 1996).
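The "unique ordering of interpretations" selected by rational closure can be computed by Pearl's System Z ranking, which the possibilistic encoding mirrors. The sketch below is a minimal illustration of that ranking on the classic bird/penguin base, not the paper's repair procedure; the knowledge base, atoms, and function names are all illustrative assumptions:

```python
from itertools import product

# Defaults as (premise, conclusion) pairs of world predicates.
# Toy base: birds fly, penguins are birds, penguins do not fly.
# A world is a dict over the atoms b (bird), p (penguin), f (flies).
atoms = ['b', 'p', 'f']
worlds = [dict(zip(atoms, v)) for v in product([True, False], repeat=3)]
defaults = [
    (lambda w: w['b'], lambda w: w['f']),        # b ~> f
    (lambda w: w['p'], lambda w: w['b']),        # p ~> b
    (lambda w: w['p'], lambda w: not w['f']),    # p ~> not f
]

def material(w, d):
    """World w satisfies the material counterpart premise -> conclusion of d."""
    prem, conc = d
    return (not prem(w)) or conc(w)

def tolerated(d, ds):
    """d is tolerated by ds: some world verifies d and falsifies no rule in ds."""
    prem, conc = d
    return any(prem(w) and conc(w) and all(material(w, e) for e in ds)
               for w in worlds)

# Z-partition: peel off the tolerated defaults level by level.
rank, remaining, z = 0, list(defaults), {}
while remaining:
    layer = [d for d in remaining if tolerated(d, remaining)]
    for d in layer:
        z[defaults.index(d)] = rank
    remaining = [d for d in remaining if d not in layer]
    rank += 1

def kappa(w):
    """Rank of a world: 0 if it violates no default, else 1 + the highest
    Z-rank among the defaults it violates; lower rank = more plausible."""
    violated = [z[i] for i, d in enumerate(defaults) if not material(w, d)]
    return 0 if not violated else 1 + max(violated)

# Flying non-penguin birds are fully plausible; non-flying penguins are more
# plausible than flying penguins, as expected.
print(kappa({'b': True, 'p': False, 'f': True}),   # 0
      kappa({'b': True, 'p': True, 'f': False}),   # 1
      kappa({'b': True, 'p': True, 'f': True}))    # 2
```

The ranks kappa can be read as a possibility distribution (lower rank, higher possibility), which is the single ordering that the independence constraints discussed above are meant to refine.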