
    On the implicit learnability of knowledge

    The deployment of knowledge-based systems in the real world requires addressing the challenge of knowledge acquisition. While knowledge engineering by hand is a daunting task, machine learning has been proposed as an alternative. However, learning explicit representations for real-world knowledge with a desirable level of expressiveness remains difficult and often leads to heuristics without robustness guarantees. Probably Approximately Correct (PAC) Semantics offers strong guarantees; however, learning explicit representations is not tractable, even in propositional logic. Previous work has addressed these challenges by learning to reason directly, without producing an explicit representation of the learned knowledge. Recent work on so-called implicit learning has shown tremendous promise in obtaining polynomial-time results for fragments of first-order logic, bypassing the intractable step of producing an explicit representation of learned knowledge. This thesis extends these ideas to richer logical languages such as arithmetic theories and multi-agent logics. We demonstrate that it is possible to learn to reason efficiently for standard fragments of linear arithmetic, and we establish a general result that gives an efficient reduction from the learning-to-reason problem for any logic to any sound and complete solver for that logic. We then extend implicit learning in PAC Semantics to handle noisy data, in the form of intervals and threshold uncertainty, in the language of linear arithmetic. We prove that our extended framework maintains the existing polynomial-time complexity guarantees. Furthermore, we provide the first empirical investigation of this purely theoretical framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice. Our results demonstrate the effectiveness of PAC Semantics and implicit learning for real-world problems with noisy data and provide a path towards robust learning in expressive languages. Finally, reasoning about knowledge and interaction in complex multi-agent systems spans domains such as artificial intelligence, smart traffic, and robotics. In these systems, epistemic logic serves as a formal language for expressing and reasoning about knowledge, beliefs, and communication among agents, yet integrating learning algorithms within multi-agent epistemic logic is challenging due to the inherent complexity of distributed knowledge reasoning. We extend implicit learning to this setting, provide a proof of correctness for our learning procedure, and analyse the sample complexity required to assert the entailment of an epistemic query. Overall, our work offers a promising approach to integrating learning and deduction in a range of logical languages, from linear arithmetic to multi-agent epistemic logics.
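
    To make the learning-to-reason reduction concrete, the sketch below shows the general shape of deciding a PAC-Semantics query implicitly, using the z3 SMT solver as the sound and complete reasoner for linear arithmetic. The function names (`entails`, `decide_pac`), the toy interval observations, and the 1 - eps acceptance rule are illustrative assumptions, not the thesis's exact procedure.

```python
# Minimal sketch of implicit learning under PAC Semantics, assuming the
# z3 SMT solver as the sound-and-complete reasoner for linear arithmetic.
# `decide_pac` and the (1 - eps) acceptance rule are illustrative,
# not the thesis's exact interface.
from z3 import Solver, Real, And, Not, unsat

def entails(background, observation, query):
    """Check background AND observation |= query by refutation."""
    s = Solver()
    s.add(background, observation, Not(query))
    return s.check() == unsat  # negation unsatisfiable => entailed

def decide_pac(background, observations, query, eps=0.1):
    """Accept the query if it is witnessed on at least a (1 - eps)
    fraction of the sampled observations. No explicit hypothesis is
    ever constructed: the observations are consulted directly."""
    hits = sum(entails(background, obs, query) for obs in observations)
    return hits >= (1 - eps) * len(observations)

# Toy usage: noisy interval observations of x, querying a threshold.
x = Real('x')
background = x >= 0
observations = [And(x >= 1, x <= 3), And(x >= 2, x <= 4), And(x >= 1, x <= 2)]
print(decide_pac(background, observations, x <= 5, eps=0.1))  # True
```

    Note that each acceptance test is a single solver call, which is what lets the reduction inherit the solver's complexity rather than paying for explicit hypothesis construction.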

    Learnability with PAC Semantics for Multi-agent Beliefs

    The tension between deduction and induction is perhaps the most fundamental issue in areas such as philosophy, cognition, and artificial intelligence. In an influential paper, Valiant recognised that the challenge of learning should be integrated with deduction. In particular, he proposed a semantics to capture the quality possessed by the output of Probably Approximately Correct (PAC) learning algorithms when formulated in a logic. Although weaker than classical entailment, it allows for a powerful model-theoretic framework for answering queries. In this paper, we provide a new technical foundation for PAC learning with multi-agent epistemic logics. To circumvent the negative results in the literature on the difficulty of robust learning with PAC Semantics, we consider so-called implicit learning, in which observations are incorporated into the background theory in service of deciding the entailment of an epistemic query. We prove the correctness of the learning procedure and discuss results on the sample complexity, that is, how many observations we need in order to provably assert that the query is entailed, given a user-specified error bound. Finally, we investigate under what circumstances this algorithm can be made efficient. On the last point, given that reasoning in epistemic logics, especially multi-agent epistemic logics, is PSPACE-complete, it might seem that there is no hope for this problem. We leverage recent results on the so-called Representation Theorem for single-agent and multi-agent epistemic logics with the only-knowing operator to reduce modal reasoning to propositional reasoning.
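
    The abstract does not state its sample-complexity bound here, but the standard Hoeffding-style argument behind such results in PAC Semantics gives a feel for the quantities involved. The following derivation is illustrative of the general shape of these bounds, not the paper's exact statement.

```latex
% Illustrative Hoeffding-style bound (an assumption about the general
% shape of such results, not the paper's exact statement).
% Let p be the probability that a random observation witnesses the
% query, and \hat{p} its empirical estimate from m i.i.d. observations:
\[
  \Pr\bigl[\, |\hat{p} - p| \ge \gamma \,\bigr] \;\le\; 2\exp(-2\gamma^2 m).
\]
% Hence
\[
  m \;\ge\; \frac{1}{2\gamma^2} \ln\frac{2}{\delta}
\]
% observations suffice to estimate the witness rate within slack
% \gamma with confidence at least 1 - \delta.
```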

    Query Answering in Probabilistic Data and Knowledge Bases

    Probabilistic data and knowledge bases are becoming increasingly important in academia and industry. They are continuously extended with new data, powered by modern information extraction tools that associate probabilities with knowledge base facts. The state of the art for storing and processing such data is founded on probabilistic database systems, which are widely and successfully employed. Beyond all the success stories, however, such systems still lack the fundamental machinery to convey some of the valuable knowledge hidden in them to the end user, which limits their potential applications in practice. In particular, in their classical form, such systems are typically based on strong, unrealistic assumptions, such as the closed-world assumption, the closed-domain assumption, the tuple-independence assumption, and the lack of commonsense knowledge. These limitations not only lead to unwanted consequences but also put such systems on weak footing in important tasks, query answering being a central one. In this thesis, we enhance probabilistic data and knowledge bases with more realistic data models, thereby allowing for better means of querying them. Building on the long endeavor of unifying logic and probability, we develop different rigorous semantics for probabilistic data and knowledge bases, analyze their computational properties, identify sources of (in)tractability, and design practical, scalable query answering algorithms whenever possible. To achieve this, the current work brings together recent paradigms from logic, probabilistic inference, and database theory.
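
    As a concrete illustration of the tuple-independence assumption criticised above, the sketch below evaluates a simple safe query over a tuple-independent database. The tables R and S and the query "exists x: R(x) AND S(x)" are hypothetical, and the computation is the textbook safe-plan evaluation rather than anything specific to this thesis.

```python
# Sketch of query evaluation under the tuple-independence assumption:
# every tuple is an independent Bernoulli event. Tables R and S and
# the query are illustrative, not this thesis's algorithms.

# Each table maps a value to the marginal probability of its tuple.
R = {'a': 0.9, 'b': 0.5}
S = {'a': 0.8, 'c': 0.7}

def prob_exists_join(R, S):
    """P(exists x: R(x) and S(x)). For each shared x, the join event
    fires with probability R[x] * S[x]; distinct x touch disjoint
    tuples, so the events are independent and we can multiply the
    complements."""
    p_none = 1.0
    for x in R.keys() & S.keys():
        p_none *= 1.0 - R[x] * S[x]
    return 1.0 - p_none

print(prob_exists_join(R, S))  # 0.72 = 1 - (1 - 0.9 * 0.8)
```

    The closed product above is exactly what the tuple-independence assumption buys; dropping it, as this thesis sets out to do, is what makes query answering demand more general probabilistic inference.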

    Learning Implicitly with Noisy Data in Linear Arithmetic
