
    Multi-Agent Only Knowing

    Levesque introduced a notion of "only knowing", with the goal of capturing certain types of nonmonotonic reasoning. Levesque's logic dealt only with the single-agent case. Recently, both Halpern and Lakemeyer independently attempted to extend Levesque's logic to the multi-agent case. Although there are a number of similarities in their approaches, there are some significant differences. In this paper, we reexamine the notion of only knowing, going back to first principles. In the process, we simplify Levesque's completeness proof and point out some problems with the earlier definitions. This leads us to reconsider what the properties of only knowing ought to be. We provide an axiom system that captures our desiderata, and show that it has a semantics that corresponds to it. The axiom system has an added feature of interest: it includes a modal operator for satisfiability, and thus provides a complete axiomatization for satisfiability in the logic K45.
    Comment: To appear, Journal of Logic and Computation
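The logic K45 mentioned in this abstract is characterized by transitive and euclidean frames. As a minimal illustration (our own sketch, not taken from the paper), the two frame conditions can be checked directly on a finite accessibility relation:

```python
# Sketch: checking the K45 frame conditions (transitivity and
# euclideanness) on a finite accessibility relation given as a set
# of (world, world) pairs. Names and the example frame are ours.

def is_transitive(R):
    # if wRv and vRu then wRu
    return all((w, u) in R for (w, v) in R for (v2, u) in R if v == v2)

def is_euclidean(R):
    # if wRv and wRu then vRu
    return all((v, u) in R for (w, v) in R for (w2, u) in R if w == w2)

# A typical K45 frame: every world sees the same cluster {1, 2}.
R = {(0, 1), (0, 2), (1, 1), (1, 2), (2, 1), (2, 2)}
print(is_transitive(R) and is_euclidean(R))  # True
```

Dropping, say, the pair (0, 2) from R would break euclideanness, since world 0 would see 1 while 1 sees 2.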

    Multi-Agent Only-Knowing Revisited

    Levesque introduced the notion of only-knowing to precisely capture the beliefs of a knowledge base. He also showed how only-knowing can be used to formalize non-monotonic behavior within a monotonic logic. Despite its appeal, all attempts to extend only-knowing to the many-agent case have undesirable properties. A belief model by Halpern and Lakemeyer, for instance, appeals to proof-theoretic constructs in the semantics and needs to axiomatize validity as part of the logic. It is also not clear how to generalize their ideas to the first-order case. In this paper, we propose a new account of multi-agent only-knowing which, for the first time, has a natural possible-world semantics for a quantified language with equality. We then provide, for the propositional fragment, a sound and complete axiomatization that faithfully lifts Levesque's proof theory to the many-agent case. We also discuss comparisons to the earlier approach by Halpern and Lakemeyer.
    Comment: Appears in Principles of Knowledge Representation and Reasoning 201
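Levesque's single-agent semantic clause, which this paper lifts to many agents, can be sketched over finite propositional worlds: K(phi) holds when phi is true throughout the agent's epistemic state, while O(phi) additionally requires the state to contain *every* phi-world. The encoding below is our own toy rendering, not the paper's formalism:

```python
# Sketch of Levesque's single-agent only-knowing over two atoms.
# An epistemic state e is a set of possible worlds (here, lists of
# valuations); O(phi) holds iff e is exactly the set of phi-worlds.

from itertools import product

ATOMS = ["p", "q"]
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([False, True], repeat=2)]

def knows(e, phi):       # K phi: phi true at every world in e
    return all(phi(w) for w in e)

def only_knows(e, phi):  # O phi: e is exactly the phi-worlds
    return e == [w for w in WORLDS if phi(w)]

e = [w for w in WORLDS if w["p"]]                   # the agent's state
print(knows(e, lambda w: w["p"]))                   # True: knows p
print(only_knows(e, lambda w: w["p"]))              # True: only knows p
print(only_knows(e, lambda w: w["p"] and w["q"]))   # False: e has non-(p and q) worlds
```

The last line shows the "maximal ignorance" flavor of only-knowing: the agent's state must not rule out any world compatible with p.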

    Multi-Agent Only Knowing on Planet Kripke

    The idea of only knowing is a natural and intuitive notion to precisely capture the beliefs of a knowledge base. However, an extension to the many-agent case, as would be needed in many applications, has been shown to be far from straightforward. For example, previous Kripke-frame-based accounts appeal to proof-theoretic constructions like canonical models, while more recent works in the area abandoned Kripke semantics entirely. We propose a new account based on Moss' characteristic formulas, formulated for the usual Kripke semantics. This is shown to come with other benefits: the logic admits a group version of only knowing, and an operator for assessing the epistemic entrenchment of what an agent or a group only knows is definable. Finally, the multi-agent only-knowing operator is shown to be expressible with the cover modality of classical modal logic, which then allows us to obtain a completeness result for a fragment of the logic.
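The cover modality invoked in this abstract, often written nabla(Gamma), holds at a world when every successor satisfies some formula in Gamma and every formula in Gamma has a witnessing successor. A minimal model-checking sketch (names and example ours):

```python
# Sketch of the cover modality on a finite Kripke model:
# nabla(Gamma) holds at w iff the successors of w are "covered" by
# Gamma and every member of Gamma is witnessed by some successor.

def nabla(R, w, gamma):
    succ = [v for (u, v) in R if u == w]
    covered = all(any(phi(v) for phi in gamma) for v in succ)
    witnessed = all(any(phi(v) for v in succ) for phi in gamma)
    return covered and witnessed

R = {(0, 1), (0, 2)}
p = lambda v: v == 1
q = lambda v: v == 2
print(nabla(R, 0, [p, q]))  # True: successors {1, 2} exactly match {p, q}
print(nabla(R, 0, [p]))     # False: successor 2 satisfies nothing in Gamma
```

This "exact match" behavior is what makes the cover modality a natural vehicle for only-knowing, which pins down an epistemic state exactly.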

    Reasoning About Knowledge of Unawareness

    Awareness has been shown to be a useful addition to standard epistemic logic for many applications. However, standard propositional logics for knowledge and awareness cannot express the fact that an agent knows that there are facts of which he is unaware without there being an explicit fact that the agent knows he is unaware of. We propose a logic for reasoning about knowledge of unawareness, by extending Fagin and Halpern's Logic of General Awareness. The logic allows quantification over variables, so that there is a formula in the language that can express the fact that "an agent explicitly knows that there exists a fact of which he is unaware". Moreover, that formula can be true without the agent explicitly knowing that he is unaware of any particular formula. We provide a sound and complete axiomatization of the logic, using standard axioms from the literature to capture the quantification operator. Finally, we show that the validity problem for the logic is recursively enumerable, but not decidable.
    Comment: 32 pages
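In the Logic of General Awareness that this paper extends, explicit knowledge is implicit knowledge restricted by an awareness set. The toy encoding below (our own, heavily simplified to named atoms) illustrates the gap the abstract describes: an agent can implicitly know a fact without explicitly knowing it when the fact lies outside its awareness:

```python
# Sketch: explicit knowledge = implicit knowledge + awareness.
# Worlds are valuations; the awareness set contains atom names.

def implicitly_knows(accessible, phi):
    return all(phi(w) for w in accessible)

def explicitly_knows(accessible, awareness, name, phi):
    return name in awareness and implicitly_knows(accessible, phi)

accessible = [{"p": True}, {"p": True}]  # p holds at all accessible worlds
awareness = {"q"}                        # but the agent is unaware of p
print(implicitly_knows(accessible, lambda w: w["p"]))                  # True
print(explicitly_knows(accessible, awareness, "p", lambda w: w["p"]))  # False
```

What the paper's quantifier adds, and what this propositional sketch cannot express, is a single formula saying "some such unaware-of fact exists" without naming it.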

    Learnability with PAC Semantics for Multi-agent Beliefs

    The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence. In an influential paper, Valiant recognised that the challenge of learning should be integrated with deduction. In particular, he proposed a semantics to capture the quality possessed by the output of Probably Approximately Correct (PAC) learning algorithms when formulated in a logic. Although weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries. In this paper, we provide a new technical foundation to demonstrate PAC learning with multi-agent epistemic logics. To circumvent the negative results in the literature on the difficulty of robust learning with the PAC semantics, we consider so-called implicit learning, where we are able to incorporate observations into the background theory in service of deciding the entailment of an epistemic query. We prove correctness of the learning procedure and discuss results on the sample complexity, that is, how many observations we will need to provably assert that the query is entailed given a user-specified error bound. Finally, we investigate under what circumstances this algorithm can be made efficient. On the last point, given that reasoning in epistemic logics, especially multi-agent epistemic logics, is PSPACE-complete, it might seem like there is no hope for this problem. We leverage some recent results on the so-called Representation Theorem, explored for single-agent and multi-agent epistemic logics with the only-knowing operator, to reduce modal reasoning to propositional reasoning.
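Sample-complexity statements of the kind the abstract mentions are typically backed by concentration bounds. As a back-of-the-envelope sketch (a standard Hoeffding-style bound, not the paper's specific result), the number of observations needed to estimate a validity frequency to within a user-specified error grows only logarithmically in the confidence parameter:

```python
# Sketch: m >= ln(2/delta) / (2 * eps^2) i.i.d. observations suffice
# to estimate an empirical frequency to within eps of its true value
# with probability at least 1 - delta (Hoeffding's inequality).

import math

def sample_complexity(eps, delta):
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

print(sample_complexity(0.1, 0.05))  # 185
```

So demanding a tighter error bound eps is far more expensive than demanding higher confidence 1 - delta, which is why the PAC-semantics literature focuses on the 1/eps^2 factor.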

    Logique doxatique graduelle (Graded doxastic logic)

    Reasoning about beliefs is an important issue in artificial intelligence. We present a modal logic for reasoning about more or less strong beliefs held by an agent about the system. We define a language for graded beliefs, ranging from weak belief through intermediate degrees up to conviction. We give an axiomatics and a sound and complete semantics based on Kripke models. We then show that any formula can be reduced to a formula without nested modalities. Finally, we define numerical models based on Spohn's ordinal conditional functions.
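Spohn's ordinal conditional functions, mentioned at the end of this abstract, rank worlds by degree of disbelief; a graded belief in phi can then be read off as the minimum rank among the not-phi worlds. The encoding below is our own toy illustration of that reading, not the paper's semantics:

```python
# Sketch: graded belief from a Spohn-style ranking (OCF).
# kappa maps each world to a disbelief rank, with rank 0 for at
# least one world; phi is believed to degree n iff every not-phi
# world has rank >= n, i.e. n = min rank of the not-phi worlds.

kappa = {"pq": 0, "p~q": 1, "~pq": 2, "~p~q": 3}

def belief_degree(phi_worlds):
    counter = [k for w, k in kappa.items() if w not in phi_worlds]
    return min(counter) if counter else None  # None: phi holds everywhere

print(belief_degree({"pq", "p~q"}))  # 2: p is believed to degree 2
print(belief_degree({"pq"}))         # 1: p-and-q believed only to degree 1
```

Note how strengthening the proposition from p to p-and-q weakens the grade of belief, matching the intuition of beliefs ranging from conviction down to weak belief.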

    Logic meets Probability: Towards Explainable AI Systems for Uncertain Worlds

    Logical AI is concerned with formal languages to represent and reason with qualitative specifications; statistical AI is concerned with learning quantitative specifications from data. To combine the strengths of these two camps, there has been exciting recent progress on unifying logic and probability. We review the many guises for this union, while emphasizing the need for a formal language to represent a system's knowledge. Formal languages allow their internal properties to be robustly scrutinized, can be augmented by adding new knowledge, and are amenable to abstractions, all of which are vital to the design of intelligent systems that are explainable and interpretable.