223 research outputs found

    Reasoning about uncertainty and explicit ignorance in generalized possibilistic logic

    Generalized possibilistic logic (GPL) is a logic for reasoning about the revealed beliefs of another agent. It is a two-tier propositional logic in which propositional formulas are encapsulated by modal operators interpreted in terms of uncertainty measures from possibility theory. Models of a GPL theory represent weighted epistemic states and are encoded as possibility distributions. One of the main features of GPL is that it allows us to reason explicitly about the ignorance of another agent. In this paper, we study two approaches for reasoning about ignorance in GPL, based respectively on the idea of minimal specificity and on the notion of guaranteed possibility. We show how these approaches naturally lead to different flavours of the GPL language and to a number of decision problems, whose complexity ranges from the first to the third level of the polynomial hierarchy.
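
    For orientation, the two readings of ignorance can be sketched with standard possibility-theoretic notation (assumed here; the paper's exact conventions may differ). For a possibility distribution \pi over propositional interpretations:

        N_\pi(\varphi) = 1 - \max\{\pi(\omega) : \omega \models \neg\varphi\}
        \qquad\qquad
        \Delta_\pi(\varphi) = \min\{\pi(\omega) : \omega \models \varphi\}

    A GPL model satisfies \mathbf{N}_\lambda\,\varphi when N_\pi(\varphi) \ge \lambda. Minimal specificity prefers the pointwise largest (least committed) distributions compatible with a theory, while the guaranteed-possibility reading uses \Delta_\pi to assert that certain situations are possible to at least a given degree; explicit ignorance about \varphi then amounts to neither \varphi nor \neg\varphi being held as certain.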

    Weighted logics for artificial intelligence: an introductory discussion

    Before presenting the contents of the special issue, we propose a structured introductory overview of the landscape of weighted logics (in a broad sense) found in the Artificial Intelligence literature, highlighting their fundamental differences and their application areas.

    A simple logic for reasoning about incomplete knowledge

    The semantics of modal logics for reasoning about belief or knowledge is often described in terms of accessibility relations, which is too expressive to account for the mere epistemic states of an agent. This paper proposes a simple logic whose atoms express epistemic attitudes about formulae expressed in another basic propositional language, and which allows for conjunctions, disjunctions and negations of belief or knowledge statements. It allows an agent to reason about what is known about the beliefs held by another agent. This simple epistemic logic borrows its syntax and axioms from the modal logic KD. It uses only a fragment of the S5 language, which makes it a two-tiered propositional logic rather than an extension thereof. Its semantics is given in terms of epistemic states understood as subsets of mutually exclusive propositional interpretations. Our approach offers a logical grounding for uncertainty theories such as possibility theory and belief functions. In fact, we define the most basic logic for possibility theory, as shown by a completeness proof that does not rely on accessibility relations.
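
    A sketch of the announced set-based semantics (notation assumed here): with an epistemic state E, a non-empty set of propositional interpretations, and [\varphi] the set of models of \varphi,

        E \models \Box\varphi \iff E \subseteq [\varphi]
        \qquad\qquad
        E \models \Diamond\varphi \iff E \cap [\varphi] \neq \emptyset

    Here \Box\varphi reads 'the agent believes \varphi' and \Diamond\varphi = \neg\Box\neg\varphi reads '\varphi is compatible with what the agent believes'; this is the Boolean (all-or-nothing) special case of necessity and possibility measures, which is the bridge to possibility theory mentioned in the abstract.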

    Extending Answer Set Programming using Generalized Possibilistic Logic

    Presented at one of the Joint Ontology Workshops (JOWO 2015), affiliated with the 24th International Joint Conference on Artificial Intelligence (IJCAI-2015). Answer set programming (ASP) is a form of logic programming in which negation-as-failure is defined in a purely declarative way, based on the notion of a stable model. This short paper briefly explains how a recent generalization of possibilistic logic (GPL) can be used to characterize the semantics of answer set programming. This characterization has several advantages over existing characterizations of the stable model semantics. First, unlike reduct-based approaches, it does not rely on a syntactic procedure: we can directly characterize answer sets as the minimally specific models of a GPL theory. Second, GPL enables us to study extensions of ASP in an intuitive way: unlike in existing generalizations of ASP such as equilibrium logic and autoepistemic logic, all formulas in GPL have an intuitively clear meaning. Finally, being based on possibilistic logic, GPL offers a natural way of dealing with uncertainty in answer set programs.
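
    As a rough illustration of the reduct-free reading (a sketch only, not necessarily the paper's exact encoding): writing \mathbf{K} for the fully certain necessity modality of GPL, a normal rule

        a \leftarrow b, \operatorname{not} c
        \qquad\text{can be read as}\qquad
        (\mathbf{K}\, b \wedge \neg\mathbf{K}\, c) \rightarrow \mathbf{K}\, a

    and the answer sets of a program then correspond to the minimally specific models (possibility distributions) of the resulting GPL theory, with no syntactic reduct step.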

    Scenarios, probability and possible futures

    This paper provides an introduction to the mathematical theory of possibility and examines how this tool can contribute to the analysis of far-distant futures. The degree of mathematical possibility of a future is a number between 0 and 1. It quantifies the extent to which a future event is implausible or surprising, without implying that it has to happen somehow. Intuitively, a degree of possibility can be seen as the upper bound of a range of admissible probability levels which goes all the way down to zero. Thus, the proposition 'The possibility of X is Pi(X)' can be read as 'The probability of X is not greater than Pi(X)'. Possibility levels offer a measure to quantify the degree of unlikelihood of far-distant futures. They offer an alternative to forecasts and scenarios, which are both problematic: long-range planning using forecasts with precise probabilities tends to suggest a false degree of precision, while using scenarios without any quantified uncertainty levels may lead to unjustified attention to the extreme scenarios. The paper further deals with the question of extreme cases. It examines how experts should build a set of two to four well-contrasted and precisely described futures that summarize their knowledge in a simple way. Like scenario makers, these experts face multiple objectives: they have to anchor their analysis in credible expertise and depict thought-provoking possible futures, but not futures so provocative as to be dismissed out of hand. The first objective can be achieved by describing a future of possibility level 1. The second and third objectives, however, balance each other. We find that a satisfying balance can be achieved by selecting extreme cases that do not rule out equiprobability; for example, if there are three cases, the possibility level of the extremes should be about 1/3.
    Keywords: futures, futurible, scenarios, possibility, imprecise probabilities, uncertainty, fuzzy logic.
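
    To make the reading 'the probability of X is at most Pi(X)' concrete, here is a minimal sketch in Python; the scenario names and degrees are illustrative assumptions, not taken from the paper.

        # Hypothetical possibility distribution over mutually exclusive futures.
        pi = {"baseline": 1.0, "rapid_growth": 0.7, "collapse": 0.3}

        def possibility(event):
            """Pi(A): maximum possibility degree over the futures in A."""
            return max(pi[w] for w in event)

        def necessity(event):
            """N(A) = 1 - Pi(not A): how certain it is that A occurs."""
            complement = set(pi) - set(event)
            return 1.0 - possibility(complement) if complement else 1.0

        A = {"rapid_growth", "collapse"}
        print(possibility(A))  # 0.7 -> any admissible probability P(A) is at most 0.7
        print(necessity(A))    # 0.0 -> A is not certain at all, since "baseline" stays fully possible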

    Possibilistic reasoning with partially ordered beliefs

    This paper presents the extension of results on reasoning with totally ordered belief bases to the partially ordered case. The idea is to reason from logical bases equipped with a partial order expressing relative certainty and to construct a partially ordered deductive closure. The difficult point lies in the fact that definitions which are equivalent in the totally ordered case are no longer equivalent in the partially ordered one. At the syntactic level we can either use a language expressing pairs of related formulas together with axioms describing the properties of the ordering, or use formulas with partially ordered symbolic weights attached to them, in the spirit of possibilistic logic. A possible semantics consists in assuming that the partial order on formulas stems from a partial order on interpretations. It requires the capability of inducing a partial order on the subsets of a set from a partial order on its elements, so as to extend possibility theory functions. Among the different possible definitions of induced partial order relations, we select the one generalizing necessity orderings (closely related to epistemic entrenchments). We study such a semantic approach inspired by possibilistic logic, and show its limitations when relying on a unique partial order on interpretations. We propose a more general sound and complete approach to relative certainty, inspired by conditional modal logics, in order to obtain a partial order on the whole propositional language. Links between several inference systems, namely conditional logic, modal epistemic logic and non-monotonic preferential inference, are established. Possibilistic logic with partially ordered symbolic weights is also revisited, and a comparison with the relative certainty approach is made via mutual translations.
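
    As an illustration of lifting an order on interpretations to an order on formulas, here is a minimal sketch of one natural definition (dominance between sets of countermodels, which reduces to comparing maximal countermodel plausibility in the totally ordered case); the interpretation names and the particular lifting are assumptions made for the example, not necessarily the definition selected in the paper.

        # Hypothetical strict plausibility order on interpretations:
        # (x, y) means "x is strictly more plausible than y".
        MORE_PLAUSIBLE = {("w1", "w2"), ("w1", "w3")}

        def at_most_as_plausible(x, y):
            """True if interpretation y is at least as plausible as x."""
            return x == y or (y, x) in MORE_PLAUSIBLE

        def at_least_as_certain(countermodels_phi, countermodels_psi):
            """phi is at least as certain as psi when every countermodel of phi
            is dominated by some countermodel of psi (a partial-order analogue
            of the max-based necessity comparison)."""
            return all(any(at_most_as_plausible(x, y) for y in countermodels_psi)
                       for x in countermodels_phi)

        print(at_least_as_certain({"w3"}, {"w1"}))  # True: w1 dominates w3
        print(at_least_as_certain({"w1"}, {"w3"}))  # False: nothing dominates w1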

    A Probabilistic Modelling Approach for Rational Belief in Meta-Epistemic Contexts

    This work is part of the larger project INTEGRITY. Integrity develops a conceptual framework integrating beliefs with individual (and consensual group) decision making and action based on belief awareness. Comments and criticisms are most welcome via email. The text introduces the conceptual (internalism, externalism), quantitative (probabilism) and logical perspectives (the logics for reasoning about probabilities of Fagin, Halpern and Megiddo, and the MEL of Banerjee and Dubois) underlying the framework.

    A Probabilistic Modelling Approach for Rational Belief in Meta-Epistemic Contexts

    This work is part of the larger project INTEGRITY. Integrity develops a conceptual framework integrating beliefs with individual (and consensual group) decision making and action based on belief awareness. Comments and criticisms are most welcome via email. Starting with a thorough discussion of the conceptual embedding in existing schools of thought and literature, we develop a framework that aims to be empirically adequate yet scalable to epistemic states in which an agent might testify to uncertainly believing a propositional formula based on the acceptance that a propositional formula is possible, called accepted truth. The familiarity of human agents with probability assignments makes probabilism particularly appealing as a quantitative modelling framework for defeasible reasoning that aspires to empirical adequacy for gradual belief expressed as credence functions. We employ the inner measure induced by the probability measure, going back to Halmos, interpreted as an estimate of uncertainty. Doing so avoids generally requiring direct probability assignments testified as strength of belief and uncertainty by a human agent. We provide a logical setting for the two concepts of uncertain belief and accepted truth, relying entirely on the formal frameworks of 'Reasoning about Probabilities' developed by Fagin, Halpern and Megiddo and the metaepistemic logic MEL developed by Banerjee and Dubois. The purport of Probabilistic Uncertainty is a framework that expresses, with a single quantitative concept (an inner measure induced by a probability measure), two epistemological concepts: possibility as belief simpliciter, called accepted truth, and the agent's credence, called uncertain belief, under a criterion of evaluation called rationality. The propositions accepted to be possible form the meta-epistemic context(s) in which the agent can reason and testify uncertain belief or suspend judgement.
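
    The inner-measure construction referred to can be sketched as follows (a Halmos-style definition, with notation assumed here). For a probability measure P defined on an algebra \mathcal{A} of measurable events and an arbitrary proposition A,

        P_*(A) = \sup\{P(B) : B \in \mathcal{A},\ B \subseteq A\}
        \qquad\qquad
        P^*(A) = 1 - P_*(\overline{A})

    so P_*(A) is the tightest lower bound on the probability of A obtainable from the events that can actually be measured, and P^*(A) the corresponding upper bound; in this framework the inner measure plays the role of the quantitative estimate of uncertainty mentioned above, without requiring direct probability assignments for every proposition.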

    Informational Paradigm, management of uncertainty and theoretical formalisms in the clustering framework: A review

    Fifty years have gone by since the publication of the first paper on clustering based on fuzzy set theory. In 1965, L.A. Zadeh published “Fuzzy Sets” [335]. After only one year, the first effects of this seminal paper began to emerge, with the pioneering paper on clustering by Bellman, Kalaba and Zadeh [33], in which they proposed a prototype clustering algorithm based on fuzzy set theory.