
    Variations on a Theme: A Bibliography on Approaches to Theorem Proving Inspired From Satchmo

    This article is a structured bibliography on theorem provers, approaches to theorem proving, and theorem proving applications inspired from Satchmo, the model generation theorem prover developed in the mid-1980s at ECRC, the European Computer-Industry Research Centre. Note that the bibliography given in this article is not exhaustive.

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions are drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
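    As a concrete illustration of one variable-binding mechanism such surveys cover, the sketch below shows tensor-product binding in the style of Smolensky: a role (variable) vector and a filler (value) vector are bound with an outer product and approximately unbound by projecting the memory back onto the role. The vector dimensionality, the random role/filler vectors, and the use of NumPy are assumptions made purely for illustration; the systems reviewed in the paper realise binding in a variety of other ways.

```python
import numpy as np

# Minimal sketch of tensor-product variable binding (Smolensky-style):
# a role (variable) vector and a filler (value) vector are bound with an
# outer product; unbinding projects the memory back onto the role.
rng = np.random.default_rng(0)

def random_unit(dim: int) -> np.ndarray:
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

dim = 64
role_x, role_y = random_unit(dim), random_unit(dim)      # variables X, Y
filler_a, filler_b = random_unit(dim), random_unit(dim)  # constants a, b

# Bind X <- a and Y <- b, then superimpose both bindings in one memory matrix.
memory = np.outer(role_x, filler_a) + np.outer(role_y, filler_b)

# Unbinding with the role vector recovers (approximately) the filler bound
# to that role, plus some cross-talk from the other binding.
recovered = role_x @ memory
print(float(recovered @ filler_a))  # close to 1.0
print(float(recovered @ filler_b))  # close to 0.0 (cross-talk)
```

    The cross-talk visible in the second dot product is exactly the trade-off between distributed superposition and exact symbolic binding that such surveys discuss.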

    A theory of resolution

    We review the fundamental resolution-based methods for first-order theorem proving and present them in a uniform framework. We show that these calculi can be viewed as specializations of non-clausal resolution with simplification. Simplification techniques are justified with the help of a rather general notion of redundancy for inferences. As simplification and other techniques for the elimination of redundancy are indispensable for an acceptable behaviour of any practical theorem prover, this work is the first uniform treatment of resolution-like techniques in which the avoidance of redundant computations receives the attention it deserves. In many cases, our presentation of a resolution method indicates new ways to improve the method over what was previously known. We also give answers to several open problems in the area.
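    To make the underlying inference rule and the role of redundancy elimination concrete, here is a minimal propositional sketch (ground clauses only, far simpler than the first-order, non-clausal calculi treated in the paper) of binary resolution with two simple redundancy checks, tautology deletion and forward subsumption. The clause encoding and the tiny example are assumptions for illustration only.

```python
# Propositional resolution with simple redundancy elimination.
# A clause is a frozenset of literals; a literal is a (name, polarity) pair.
from itertools import combinations

def negate(lit):
    return (lit[0], not lit[1])

def is_tautology(clause):
    return any(negate(l) in clause for l in clause)

def subsumed(clause, clauses):
    return any(c <= clause for c in clauses)

def resolvents(c1, c2):
    for lit in c1:
        if negate(lit) in c2:
            yield frozenset((c1 - {lit}) | (c2 - {negate(lit)}))

def saturate(clauses):
    """Return True if the empty clause (a contradiction) is derivable."""
    clauses = {c for c in clauses if not is_tautology(c)}
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True          # empty clause derived
                if is_tautology(r) or subsumed(r, clauses | new):
                    continue             # redundant: discard immediately
                new.add(r)
        if not new:
            return False                 # saturated without contradiction
        clauses |= new

# Example: {p v q, ~p, ~q} is unsatisfiable.
cnf = {frozenset({("p", True), ("q", True)}),
       frozenset({("p", False)}),
       frozenset({("q", False)})}
print(saturate(cnf))  # True
```

    Even in this toy setting, discarding tautologies and subsumed clauses keeps the saturation loop from re-deriving the same information, which is the practical point the abstract makes about redundancy.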

    Learning Reasoning Strategies in End-to-End Differentiable Proving

    Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neuro-symbolic models, while being explainable. All source code and datasets are available online at https://github.com/uclnlp/ctp. Comment: Proceedings of the 37th International Conference on Machine Learning (ICML 2020).
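    The complexity problem and the rule-selection remedy can be seen even in a purely symbolic caricature of backward chaining, sketched below. The toy knowledge base, the hard-coded select_rules filter, and the depth limit are all illustrative assumptions: in NTPs the rule choice is a soft enumeration over every rule, and in CTPs it is produced by a learned, goal-conditioned module trained by gradient-based optimisation, not a dictionary lookup.

```python
# Symbolic sketch of backward chaining over binary relations, showing why
# naive proving tries every rule at every step and how a goal-conditioned
# rule filter ("select_rules", assumed here as a simple lookup) prunes search.
FACTS = {("parent", "ann", "bob"), ("parent", "bob", "eve")}

# Each rule: (head_relation, [body_relations]) meaning
#   head(X, Z) <- body1(X, Y1), body2(Y1, Z), ...
RULES = [
    ("grandparent", ["parent", "parent"]),
    ("ancestor",    ["parent"]),
    ("ancestor",    ["parent", "ancestor"]),
]

def select_rules(goal_rel):
    """Toy stand-in for CTP-style selection: only try rules whose head
    matches the goal relation, instead of enumerating all rules."""
    return [r for r in RULES if r[0] == goal_rel]

def prove(rel, x, z, depth=3):
    if depth == 0:
        return False
    if (rel, x, z) in FACTS:
        return True
    return any(chain(body, x, z, depth - 1)
               for _, body in select_rules(rel))

def chain(body, x, z, depth):
    if len(body) == 1:
        return prove(body[0], x, z, depth)
    first_rel, rest = body[0], body[1:]
    # Enumerate candidate intermediate entities mentioned in the facts.
    entities = {e for (_, a, b) in FACTS for e in (a, b)}
    return any(prove(first_rel, x, y, depth) and chain(rest, y, z, depth)
               for y in entities)

print(prove("grandparent", "ann", "eve"))  # True
print(prove("ancestor", "ann", "eve"))     # True
```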