    Belief Change in Reasoning Agents: Axiomatizations, Semantics and Computations

    The capability of changing beliefs upon new information in a rational and efficient way is crucial for an intelligent agent. Belief change has therefore been one of the central research fields in Artificial Intelligence (AI) for over two decades. In the AI literature, two different kinds of belief change operations have been intensively investigated: belief update, which deals with situations where the new information describes changes of the world; and belief revision, which assumes the world is static. As another important research area in AI, reasoning about actions mainly studies the problem of representing and reasoning about the effects of actions. These two research fields are closely related and apply a common underlying principle, namely that an agent should change its beliefs (knowledge) as little as possible whenever an adjustment is necessary. This opens up the possibility of reusing the ideas and results of one field in the other, and vice versa. This thesis aims to develop a general framework and devise computational models that are applicable in reasoning about actions. Firstly, I propose a new framework for iterated belief revision by adding a new postulate to the existing AGM/DP postulates, which provides general criteria for the design of iterated revision operators. Secondly, based on the new framework, a concrete iterated revision operator is devised. The semantic model of the operator yields clear intuitions and helps to show that it satisfies the desirable postulates. I also show that the computational model of the operator is almost optimal in time and space complexity. In order to deal with the belief change problem in multi-agent systems, I introduce a concept of mutual belief revision, which is concerned with information exchange among agents. A concrete mutual revision operator is devised by generalizing the iterated revision operator. Likewise, a semantic model is used to show the intuition and many nice properties of the mutual revision operator, and the complexity of its computational model is formally analyzed. Finally, I present a belief update operator which takes into account two important problems of reasoning about action, i.e., disjunctive updates and domain constraints. Again, the update operator is presented with both a semantic model and a computational model.
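
    For reference, the Darwiche–Pearl (DP) postulates for iterated revision that this abstract builds on are standardly stated as follows, where $\Psi$ is an epistemic state, $\circ$ a revision operator and $Bel(\Psi)$ the belief set of $\Psi$ (the thesis's additional postulate is not reproduced here):

        (C1) if $\alpha \models \mu$, then $Bel((\Psi \circ \mu) \circ \alpha) = Bel(\Psi \circ \alpha)$
        (C2) if $\alpha \models \neg\mu$, then $Bel((\Psi \circ \mu) \circ \alpha) = Bel(\Psi \circ \alpha)$
        (C3) if $Bel(\Psi \circ \alpha) \models \mu$, then $Bel((\Psi \circ \mu) \circ \alpha) \models \mu$
        (C4) if $Bel(\Psi \circ \alpha) \not\models \neg\mu$, then $Bel((\Psi \circ \mu) \circ \alpha) \not\models \neg\mu$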

    Characterizing and Extending Answer Set Semantics using Possibility Theory

    Answer Set Programming (ASP) is a popular framework for modeling combinatorial problems. However, ASP cannot easily be used for reasoning about uncertain information. Possibilistic ASP (PASP) is an extension of ASP that combines possibilistic logic and ASP. In PASP a weight is associated with each rule, where this weight is interpreted as the certainty with which the conclusion can be established when the body is known to hold. As such, it allows us to model and reason about uncertain information in an intuitive way. In this paper we present new semantics for PASP, in which rules are interpreted as constraints on possibility distributions. Special models of these constraints are then identified as possibilistic answer sets. In addition, since ASP is a special case of PASP in which all the rules are entirely certain, we obtain a new characterization of ASP in terms of constraints on possibility distributions. This allows us to uncover a new form of disjunction, called weak disjunction, that has not been previously considered in the literature. In addition to introducing and motivating the semantics of weak disjunction, we also pinpoint its computational complexity. In particular, while the complexity of most reasoning tasks coincides with standard disjunctive ASP, we find that brave reasoning for programs with weak disjunctions is easier. Comment: 39 pages and a 16-page appendix with proofs. This article has been accepted for publication in Theory and Practice of Logic Programming, copyright Cambridge University Press.
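
    As a concrete illustration of the weighted rules described above, the following is a minimal sketch of the classical min-based certainty propagation used in possibilistic ASP for definite programs (the standard fixpoint semantics, not the constraint-based semantics this paper introduces; the example program is hypothetical):

        # Sketch: least fixpoint for a definite possibilistic program.
        # Each rule is (head, body_atoms, weight); the certainty of a derived
        # atom is the max over applicable rules of min(weight, body certainties).

        def possibilistic_fixpoint(rules):
            certainty = {}  # atom -> certainty degree in (0, 1]
            changed = True
            while changed:
                changed = False
                for head, body, weight in rules:
                    if all(b in certainty for b in body):
                        derived = min([weight] + [certainty[b] for b in body])
                        if derived > certainty.get(head, 0.0):
                            certainty[head] = derived
                            changed = True
            return certainty

        rules = [
            ("bird(tweety)", [], 1.0),               # certain fact
            ("fly(tweety)", ["bird(tweety)"], 0.8),  # birds fly, certainty 0.8
        ]
        print(possibilistic_fixpoint(rules))
        # {'bird(tweety)': 1.0, 'fly(tweety)': 0.8}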

    A Probabilistic Modelling Approach for Rational Belief in Meta-Epistemic Contexts

    This work is part of the larger project INTEGRITY. INTEGRITY develops a conceptual frame integrating beliefs with individual (and consensual group) decision making and action based on belief awareness. Comments and criticisms are most welcome via email. The text introduces the conceptual (internalism, externalism), quantitative (probabilism) and logical perspectives (the logics for reasoning about probabilities by Fagin, Halpern and Megiddo, and the logic MEL by Banerjee and Dubois) underlying the framework.

    A Probabilistic Modelling Approach for Rational Belief in Meta-Epistemic Contexts

    This work is part of the larger project INTEGRITY. INTEGRITY develops a conceptual frame integrating beliefs with individual (and consensual group) decision making and action based on belief awareness. Comments and criticisms are most welcome via email. Starting with a thorough discussion of the conceptual embedding in existing schools of thought and literature, we develop a framework that aims to be empirically adequate yet scalable to epistemic states in which an agent might testify to uncertainly believing a propositional formula based on the acceptance that a propositional formula is possible, called accepted truth. The familiarity of human agents with probability assignments makes probabilism particularly appealing as a quantitative modelling framework for defeasible reasoning that aspires to empirical adequacy for gradual belief expressed as credence functions. We employ the inner measure induced by the probability measure, going back to Halmos, interpreted as an estimate for uncertainty. Doing so largely avoids requiring a human agent to directly testify probability assignments as strengths of belief and uncertainty. We provide a logical setting for the two concepts uncertain belief and accepted truth, relying entirely on the formal frameworks of 'Reasoning about Probabilities' developed by Fagin, Halpern and Megiddo and the metaepistemic logic MEL developed by Banerjee and Dubois. The purport of Probabilistic Uncertainty is a framework that allows a single quantitative concept (an inner measure induced by a probability measure) to express two epistemological concepts: possibility as belief simpliciter, called accepted truth, and the agent's credence, called uncertain belief, for a criterion of evaluation called rationality. The propositions accepted to be possible form the meta-epistemic context(s) in which the agent can reason and testify uncertain belief or suspend judgement.
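
    The inner measure referred to here is the standard construction going back to Halmos: given a probability measure $P$ defined on an algebra $\mathcal{A}$ of events, the induced inner measure $P_*$ assigns to an arbitrary set of worlds $A$ the value

        $P_*(A) = \sup\,\{P(B) : B \subseteq A,\ B \in \mathcal{A}\}$

    i.e. the tightest guaranteed lower bound on the probability of $A$; this is the reading Fagin, Halpern and Megiddo use when treating inner measures as estimates of uncertainty.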

    Contextual and Possibilistic Reasoning for Coalition Formation

    In multiagent systems, agents often have to rely on other agents to reach their goals, for example when they lack a needed resource or do not have the capability to perform a required action. Agents therefore need to cooperate. Then, some of the questions raised are: Which agent(s) to cooperate with? What are the potential coalitions in which agents can achieve their goals? As the number of possibilities is potentially quite large, how to automate the process? And then, how to select the most appropriate coalition, taking into account the uncertainty in the agents' abilities to carry out certain tasks? In this article, we address the question of how to find and evaluate coalitions among agents in multiagent systems using Multi-Context Systems (MCS) tools, while taking into consideration the uncertainty around the agents' actions. Our methodology is the following: We first compute the solution space for the formation of coalitions using a contextual reasoning approach. Second, we model agents as contexts in an MCS, and dependence relations among agents seeking to achieve their goals as bridge rules. Third, we systematically compute all potential coalitions using algorithms for MCS equilibria, and given a set of functional and non-functional requirements, we propose ways to select the best solutions. Finally, in order to handle the uncertainty in the agents' actions, we extend our approach with features of possibilistic reasoning. We illustrate our approach with an example from robotics.
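
    To make the encoding concrete: in MCS notation, a bridge rule adds a belief to one context when specified beliefs hold in other contexts, so a dependence of agent $i$ on agent $j$ for a resource could be written schematically as

        $(i : \mathit{commit}(g)) \leftarrow (i : \mathit{goal}(g)),\ (j : \mathit{provides}(r))$

    read as: agent $i$ commits to goal $g$ if $i$ has goal $g$ and agent $j$ provides the required resource $r$. The predicate names here are illustrative, not taken from the article.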

    Managing different sources of uncertainty in a BDI framework in a principled way with tractable fragments

    The Belief-Desire-Intention (BDI) architecture is a practical approach for modelling large-scale intelligent systems. In the BDI setting, a complex system is represented as a network of interacting agents – or components – each one modelled based on its beliefs, desires and intentions. However, current BDI implementations are not well suited for modelling more realistic intelligent systems which operate in environments pervaded by different types of uncertainty. Furthermore, existing approaches for dealing with uncertainty typically do not offer syntactical or tractable ways of reasoning about uncertainty. This complicates their integration with BDI implementations, which rely heavily on fast and reactive decisions. In this paper, we advance the state of the art in handling different types of uncertainty in BDI agents. The contributions of this paper are, first, a new way of modelling the beliefs of an agent as a set of epistemic states. Each epistemic state can use a distinct underlying uncertainty theory and revision strategy, and commensurability between epistemic states is achieved through a stratification approach. Second, we present a novel syntactic approach to revising beliefs given unreliable input. We prove that this syntactic approach agrees with the semantic definition, and we identify expressive fragments that are particularly useful for resource-bounded agents. Third, we introduce full operational semantics that extend Can, a popular semantics for BDI, to establish how reasoning about uncertainty can be tightly integrated into the BDI framework. Fourth, we provide comprehensive experimental results to highlight the usefulness and feasibility of our approach, and explain how the generic epistemic state can be instantiated into various representations.
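
    As a rough sketch of what "beliefs as a set of epistemic states" could look like as an interface, under the assumption that each state encapsulates its own uncertainty theory and revision strategy (all names are hypothetical; this is not the paper's formalism):

        from abc import ABC, abstractmethod

        class EpistemicState(ABC):
            """One compartment of the agent's beliefs; a subclass fixes the
            uncertainty theory (e.g. possibilistic, probabilistic) and the
            revision strategy applied to unreliable input."""

            @abstractmethod
            def revise(self, literal, weight):
                """Incorporate input `literal` reported with reliability `weight`."""

            @abstractmethod
            def entails(self, literal):
                """Does this state currently support `literal`?"""

        class PossibilisticState(EpistemicState):
            def __init__(self):
                self.necessity = {}  # literal -> necessity degree in (0, 1]

            def revise(self, literal, weight):
                # naive strategy: keep the strongest support seen so far
                self.necessity[literal] = max(self.necessity.get(literal, 0.0), weight)

            def entails(self, literal):
                return self.necessity.get(literal, 0.0) > 0.0

        class Agent:
            def __init__(self, states):
                self.states = states  # stratified list of epistemic states

            def believes(self, literal):
                return any(s.entails(literal) for s in self.states)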

    Learning Possibilistic Logic Theories

    We address the problem of learning interpretable machine learning models from uncertain and missing information. We first develop a novel deep learning architecture, named RIDDLE (Rule InDuction with Deep LEarning), based on properties of possibility theory. Judging by experimental results and a comparison with FURIA, an existing state-of-the-art rule induction method, RIDDLE is a promising rule induction algorithm for finding rules from data. We then formally investigate the learning task of identifying rules with confidence degrees associated with them in the exact learning model. We formally define theoretical frameworks and show conditions that must hold to guarantee that a learning algorithm will identify the rules that hold in a domain. Finally, we develop an algorithm that learns rules with associated confidence values in the exact learning model. We also propose a technique to simulate queries in the exact learning model from data. Experiments show encouraging results for learning a set of rules that approximate the rules encoded in data. (Doctoral dissertation)
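
    The query-simulation step can be illustrated with a small sketch: in the exact learning model, a membership query asks an oracle whether a given example satisfies the target concept, and without an oracle the answer has to be approximated from a labelled dataset (hypothetical code, not the thesis's algorithm):

        def membership_query(example, dataset, default=False):
            """Simulate a membership query from data: return the recorded label
            if the example occurs in the dataset, else a default answer."""
            for x, label in dataset:
                if x == example:
                    return label
            return default

        # toy dataset of Boolean examples
        dataset = [((1, 0, 1), True), ((0, 0, 1), False)]
        print(membership_query((1, 0, 1), dataset))  # True
        print(membership_query((1, 1, 1), dataset))  # False (default: unseen)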

    Borderline vs. unknown: comparing three-valued representations of imperfect information

    In this paper we compare the expressive power of elementary representation formats for vague, incomplete or conflicting information. These include Boolean valuation pairs introduced by Lawry and González-Rodríguez, orthopairs of sets of variables, Boolean possibility and necessity measures, three-valued valuations, and supervaluations. We make explicit their connections with strong Kleene logic and with the Belnap logic of conflicting information. The formal similarities between three-valued approaches to vagueness and formalisms that handle incomplete information often lead to a confusion between degrees of truth and degrees of uncertainty. Yet there are important differences that appear at the interpretive level: while truth-functional logics of vagueness are accepted by a part of the scientific community (even if questioned by supervaluationists), the truth-functionality assumption of three-valued calculi for handling incomplete information looks questionable compared to the non-truth-functional approaches based on Boolean possibility–necessity pairs. This paper aims to clarify the similarities and differences between the two situations. We also study to what extent operations for comparing and merging information items in the form of orthopairs can be expressed by means of operations on valuation pairs, three-valued valuations and the underlying possibility distributions.
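
    For reference, the strong Kleene connectives the paper connects these formats to can be computed numerically over the truth values 0 (false), 1/2 (unknown or borderline) and 1 (true); a minimal sketch:

        # Strong Kleene three-valued logic over {0, 0.5, 1}:
        # conjunction = min, disjunction = max, negation = 1 - x.
        F, U, T = 0.0, 0.5, 1.0

        def k_and(x, y): return min(x, y)
        def k_or(x, y):  return max(x, y)
        def k_not(x):    return 1.0 - x

        print(k_and(U, T))        # 0.5: unknown and true stays unknown
        print(k_and(U, F))        # 0.0: unknown and false is false
        print(k_or(U, k_not(U)))  # 0.5: the law of excluded middle fails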