
    Multi-Agent Only-Knowing Revisited

    Levesque introduced the notion of only-knowing to precisely capture the beliefs of a knowledge base. He also showed how only-knowing can be used to formalize non-monotonic behavior within a monotonic logic. Despite its appeal, all attempts to extend only-knowing to the many-agent case have undesirable properties. A belief model by Halpern and Lakemeyer, for instance, appeals to proof-theoretic constructs in the semantics and needs to axiomatize validity as part of the logic. It is also not clear how to generalize their ideas to a first-order case. In this paper, we propose a new account of multi-agent only-knowing which, for the first time, has a natural possible-world semantics for a quantified language with equality. We then provide, for the propositional fragment, a sound and complete axiomatization that faithfully lifts Levesque's proof theory to the many-agent case. We also discuss comparisons to the earlier approach by Halpern and Lakemeyer. (Comment: Appears in Principles of Knowledge Representation and Reasoning 201)
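
    To make the non-monotonic behavior concrete, Levesque's classic single-agent default example can be rendered as follows (a standard textbook illustration with B for belief and O for only-knowing, not drawn from this paper's multi-agent semantics):

        O(Bird ∧ (Bird ∧ ¬B¬Fly → Fly)) ⊨ B Fly
        O(Bird ∧ ¬Fly ∧ (Bird ∧ ¬B¬Fly → Fly)) ⊨ B¬Fly ∧ ¬B Fly

    Adding ¬Fly to the knowledge base retracts the default conclusion B Fly: non-monotonic behavior obtained without ever leaving the monotonic logic.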

    Learnability with PAC Semantics for Multi-agent Beliefs

    The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition and artificial intelligence. In an influential paper, Valiant recognised that the challenge of learning should be integrated with deduction. In particular, he proposed a semantics to capture the quality possessed by the output of Probably Approximately Correct (PAC) learning algorithms when formulated in a logic. Although weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries. In this paper, we provide a new technical foundation to demonstrate PAC learning with multi-agent epistemic logics. To circumvent the negative results in the literature on the difficulty of robust learning with the PAC semantics, we consider so-called implicit learning, where we incorporate observations into the background theory in service of deciding the entailment of an epistemic query. We prove correctness of the learning procedure and discuss results on the sample complexity, that is, how many observations are needed to provably assert that the query is entailed given a user-specified error bound. Finally, we investigate under what circumstances this algorithm can be made efficient. On the last point, given that reasoning in epistemic logics, especially multi-agent epistemic logics, is PSPACE-complete, it might seem like there is no hope for this problem. We leverage recent results on the so-called Representation Theorem, explored for single-agent and multi-agent epistemic logics with the only-knowing operator, to reduce modal reasoning to propositional reasoning.
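
    The acceptance rule at the heart of implicit learning under PAC semantics can be sketched in a few lines of propositional Python (a toy illustration under our own naming; the brute-force model check stands in for the propositional solver that the Representation Theorem reduction ultimately targets):

        from itertools import product

        # Formulas are predicates over a model, i.e. a dict from
        # variable names to truth values.
        VARS = ["p", "q", "r"]

        def entails(constraints, query):
            """Check constraints |= query by enumerating all models
            (a stand-in for a real propositional solver)."""
            for values in product([False, True], repeat=len(VARS)):
                model = dict(zip(VARS, values))
                if all(c(model) for c in constraints) and not query(model):
                    return False
            return True

        def implicitly_entails(kb, observations, query, eps):
            """Accept the query if KB plus the observation entails it
            on at least a (1 - eps) fraction of the samples."""
            hits = sum(entails(kb + [obs], query) for obs in observations)
            return hits >= (1 - eps) * len(observations)

        # Toy run: the KB says p -> q, two of three observations report p,
        # and the query q is accepted under the error bound eps = 0.4.
        kb = [lambda m: (not m["p"]) or m["q"]]
        obs = [lambda m: m["p"], lambda m: m["p"], lambda m: m["r"]]
        print(implicitly_entails(kb, obs, lambda m: m["q"], eps=0.4))  # True

    No explicit hypothesis is ever produced: the observations are folded into the entailment check itself, which is what lets the approach sidestep the hardness of learning explicit representations.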

    Reasoning about Imperfect Information Games in the Epistemic Situation Calculus

    Approaches to reasoning about knowledge in imperfect information games typically involve an exhaustive description of the game, with the dynamics characterized by a tree and the incompleteness in knowledge by information sets. Such specifications depend on a modeler's intuition, are tedious to draft and are vague on where the knowledge comes from. Moreover, the formalisms proposed so far are essentially propositional, which, at the very least, makes them cumbersome to use in realistic scenarios. In this paper, we propose to model imperfect information games in a new multi-agent epistemic variant of the situation calculus. By using the concept of only-knowing, the beliefs and non-beliefs of players after any sequence of actions, sensing or otherwise, can be characterized as entailments in this logic. We show how de re vs. de dicto belief distinctions come about in the framework. We also obtain a regression theorem for multi-agent beliefs, which reduces reasoning about beliefs after actions to reasoning about beliefs in the initial situation.
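
    To recall what regression buys (a generic situation-calculus illustration, not this paper's multi-agent axioms): given a successor state axiom such as

        broken(do(a, s)) ≡ (a = drop ∧ fragile(s)) ∨ (broken(s) ∧ a ≠ repair)

    the regression operator R rewrites a query about the situation after an action into an equivalent query about the situation before it, e.g.

        R[broken(do(drop, S₀))] = fragile(S₀) ∨ broken(S₀)

    so that iterating R reduces any query about do(aₙ, ..., do(a₁, S₀)) to one about the initial situation S₀ alone. The paper's regression theorem extends this reduction to formulas under multi-agent belief operators.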

    A First-Order Logic of Probability and Only Knowing in Unbounded Domains

    Only-knowing captures the intuitive notion that the beliefs of an agent are precisely those that follow from its knowledge base. It has previously been shown to be useful in characterizing knowledge-based reasoners, especially in a quantified setting. While this allows us to reason about incomplete knowledge in the sense of not knowing whether a formula is true or not, there are many applications where one would like to reason about the degree of belief in a formula. In this work, we propose a new general first-order account of probability and only-knowing that admits knowledge bases with incomplete and probabilistic specifications. Beliefs and non-beliefs are then shown to emerge as a direct logical consequence of the sentences of the knowledge base at a corresponding level of specificity.
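
    The model theory behind such degrees of belief can be sketched with weighted possible worlds (a toy propositional illustration with names of our own choosing, not the paper's first-order construction for unbounded domains):

        VARS = ("rain", "wet")

        def degree_of_belief(weights, query):
            """Degree of belief in a query: the normalised weight of
            the worlds in the epistemic state where the query holds."""
            total = sum(weights.values())
            hits = sum(w for world, w in weights.items()
                       if query(dict(zip(VARS, world))))
            return hits / total

        # Epistemic state of an agent that is certain of rain -> wet
        # and assigns degree 0.7 to rain.
        weights = {
            (True, True): 0.7,    # rain and wet
            (False, True): 0.15,  # no rain, wet anyway
            (False, False): 0.15, # no rain, dry
        }
        print(degree_of_belief(weights, lambda w: w["wet"]))  # ~0.85

    A first-order account with only-knowing pins down such an epistemic state as the one induced by the knowledge base alone, so degrees of belief and non-belief fall out as entailments rather than being stipulated world by world.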

    On the implicit learnability of knowledge

    The deployment of knowledge-based systems in the real world requires addressing the challenge of knowledge acquisition. While knowledge engineering by hand is a daunting task, machine learning has been proposed as an alternative. However, learning explicit representations for real-world knowledge that feature a desirable level of expressiveness remains difficult and often leads to heuristics without robustness guarantees. Probably Approximately Correct (PAC) Semantics offers strong guarantees; however, learning explicit representations under it is not tractable, even in propositional logic. Previous works have proposed solutions to these challenges by learning to reason directly, without producing an explicit representation of the learned knowledge. Recent work on so-called implicit learning has shown tremendous promise in obtaining polynomial-time results for fragments of first-order logic, bypassing the intractable step of producing an explicit representation of learned knowledge.

    This thesis extends these ideas to richer logical languages such as arithmetic theories and multi-agent logics. We demonstrate that it is possible to learn to reason efficiently for standard fragments of linear arithmetic, and we establish a general result that provides an efficient reduction from the learning-to-reason problem for any logic to any sound and complete solver for that logic. We then extend implicit learning in PAC Semantics to handle noisy data in the form of intervals and threshold uncertainty in the language of linear arithmetic. We prove that our extended framework maintains existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice. Our results demonstrate the effectiveness of PAC Semantics and implicit learning for real-world problems with noisy data and provide a path towards robust learning in expressive languages.

    Reasoning about knowledge and interactions in complex multi-agent systems spans domains such as artificial intelligence, smart traffic, and robotics. In these systems, epistemic logic serves as a formal language for expressing and reasoning about knowledge, beliefs, and communication among agents, yet integrating learning algorithms within multi-agent epistemic logic is challenging due to the inherent complexity of distributed knowledge reasoning. We provide a proof of correctness for our learning procedure and analyse the sample complexity required to assert the entailment of an epistemic query. Overall, our work offers a promising approach to integrating learning and deduction in a range of logical languages, from linear arithmetic to multi-agent epistemic logics.
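
    The noisy-data extension can be illustrated with interval observations and a PAC-style acceptance threshold (a minimal sketch with hypothetical names and a toy error bound; the thesis's framework handles general linear arithmetic queries):

        def box_entails(box, coeffs, bound):
            """Does coeffs . x <= bound hold for every x in the box?
            A linear form attains its maximum over a box at a vertex,
            so interval arithmetic gives an exact check."""
            worst = sum(c * (hi if c > 0 else lo)
                        for c, (lo, hi) in zip(coeffs, box))
            return worst <= bound

        def implicitly_valid(observations, coeffs, bound, eps):
            """Accept the constraint if it holds on at least a
            (1 - eps) fraction of the sampled interval observations."""
            hits = sum(box_entails(obs, coeffs, bound) for obs in observations)
            return hits >= (1 - eps) * len(observations)

        # Noisy readings of (x1, x2) as intervals; query: x1 + 2*x2 <= 10.
        observations = [
            [(1.0, 2.0), (3.0, 3.5)],
            [(0.5, 1.5), (2.0, 4.0)],
            [(4.0, 6.0), (5.0, 7.0)],  # an outlier the error bound tolerates
        ]
        print(implicitly_valid(observations, [1.0, 2.0], 10.0, eps=0.4))  # True

    As in the propositional case, no explicit set of learned constraints is ever materialised; the interval observations enter the validity check directly, which is how the polynomial-time guarantees survive the move to noisy data.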