Variations on a Theme: A Bibliography on Approaches to Theorem Proving Inspired From Satchmo
This article is a structured bibliography on theorem provers, approaches to theorem proving, and theorem proving applications inspired by Satchmo, the model generation theorem prover developed in the mid-1980s at ECRC, the European Computer-Industry Research Centre. Note that the bibliography given in this article is not exhaustive.
Connectionist Inference Models
The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
Automated verification of refinement laws
Demonic refinement algebras are variants of Kleene algebras. Introduced by von Wright as a lightweight variant of the refinement calculus, their intended semantics are positively disjunctive predicate transformers, and their calculus is entirely within first-order equational logic. So, for the first time, off-the-shelf automated theorem proving (ATP) becomes available for refinement proofs. We used ATP to verify a toolkit of basic refinement laws. Based on this toolkit, we then verified two classical complex refinement laws for action systems by ATP: a data refinement law and Back's atomicity refinement law. We also present a refinement law for infinite loops that has been discovered through automated analysis. Our proof experiments not only demonstrate that refinement can effectively be automated, but also compare eleven different ATP systems and suggest that program verification with variants of Kleene algebras yields interesting theorem proving benchmarks. Finally, we apply hypothesis learning techniques that seem indispensable for automating more complex proofs.
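Binary relations form a standard model of Kleene algebras, which makes equational laws of this kind easy to sanity-check outside an ATP system. The following is an illustrative sketch only (it is not the paper's toolkit or its ATP experiments): it tests a classical Kleene-algebra identity, the denesting law (x + y)* = x*(yx*)*, on randomly generated finite relations.

```python
from itertools import product
import random

def compose(r, s):
    # Relational composition: (a, c) is in r;s iff (a, b) in r and (b, c) in s for some b.
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def star(r, n):
    # Reflexive-transitive closure over {0, ..., n-1}: least fixpoint of 1 + r;x.
    closure = {(a, a) for a in range(n)}
    while True:
        step = closure | compose(closure, r)
        if step == closure:
            return closure
        closure = step

n = 4
random.seed(0)
for _ in range(20):
    x = {p for p in product(range(n), repeat=2) if random.random() < 0.3}
    y = {p for p in product(range(n), repeat=2) if random.random() < 0.3}
    # Denesting law: (x + y)* = x* ; (y ; x*)*
    lhs = star(x | y, n)
    rhs = compose(star(x, n), star(compose(y, star(x, n)), n))
    assert lhs == rhs
```

In the relational model, `+` is union, `;` is composition, and `*` is reflexive-transitive closure; a law that fails on some concrete relation is not a Kleene-algebra theorem, so random testing like this complements, but does not replace, an ATP proof.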
A theory of resolution
We review the fundamental resolution-based methods for first-order theorem proving and present them in a uniform framework. We show that these calculi can be viewed as specializations of non-clausal resolution with simplification. Simplification techniques are justified with the help of a rather general notion of redundancy for inferences. As simplification and other techniques for the elimination of redundancy are indispensable for an acceptable behaviour of any practical theorem prover, this work is the first uniform treatment of resolution-like techniques in which the avoidance of redundant computations attains the attention it deserves. In many cases our presentation of a resolution method will indicate new ways to improve the method over what was known previously. We also give answers to several open problems in the area.
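As a minimal illustration of the resolution principle underlying these calculi (propositional only, with none of the paper's simplification or redundancy machinery, so it is exactly the naive saturation the paper argues against for practical provers), the following sketch saturates a clause set and reports unsatisfiability when the empty clause is derived. Literals are encoded as signed integers, a purely illustrative convention.

```python
def resolvents(c1, c2):
    # All resolvents of two clauses: for each literal l in c1 whose complement
    # is in c2, merge the remaining literals of both clauses.
    out = []
    for lit in c1:
        if -lit in c2:
            out.append((c1 - {lit}) | (c2 - {-lit}))
    return out

def refutes(clauses):
    # Saturate the clause set under binary resolution. Returns True iff the
    # empty clause is derived, i.e. the input set is unsatisfiable.
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolvents(c1, c2):
                    fr = frozenset(r)
                    if not fr:
                        return True
                    if fr not in clauses:
                        new.add(fr)
        if not new:
            return False
        clauses |= new

# Literals as integers: 1 = p, -1 = not p, 2 = q, ...
assert refutes([{1, 2}, {-1, 2}, {-2}])   # {p or q, not p or q, not q} is unsatisfiable
assert not refutes([{1, 2}])              # {p or q} is satisfiable
```

Saturation terminates here because only finitely many clauses exist over finitely many literals; in the first-order case, redundancy elimination of the kind the paper formalizes is what keeps such loops manageable.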
Learning Reasoning Strategies in End-to-End Differentiable Proving
Attempts to render deep learning models interpretable, data-efficient, and robust have seen some success through hybridisation with rule-based systems, for example, in Neural Theorem Provers (NTPs). These neuro-symbolic models can induce interpretable rules and learn representations from data via back-propagation, while providing logical explanations for their predictions. However, they are restricted by their computational complexity, as they need to consider all possible proof paths for explaining a goal, thus rendering them unfit for large-scale applications. We present Conditional Theorem Provers (CTPs), an extension to NTPs that learns an optimal rule selection strategy via gradient-based optimisation. We show that CTPs are scalable and yield state-of-the-art results on the CLUTRR dataset, which tests systematic generalisation of neural models by learning to reason over smaller graphs and evaluating on larger ones. Finally, CTPs show better link prediction results on standard benchmarks in comparison with other neural-symbolic models, while being explainable. All source code and datasets are available online at https://github.com/uclnlp/ctp.
Comment: Proceedings of the 37th International Conference on Machine Learning (ICML 2020)