285 research outputs found

    A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving

    Automated theorem provers have traditionally relied on manually tuned heuristics to guide how they perform proof search. Deep reinforcement learning has been proposed as a way to obviate the need for such heuristics; however, its deployment in automated theorem proving remains a challenge. In this paper we introduce TRAIL, a system that applies deep reinforcement learning to saturation-based theorem proving. TRAIL leverages (a) a novel neural representation of the state of a theorem prover and (b) a novel characterization of the inference selection process in terms of an attention-based action policy. We show through systematic analysis that these mechanisms allow TRAIL to significantly outperform previous reinforcement-learning-based theorem provers on two benchmark datasets for first-order logic automated theorem proving (proving around 15% more theorems).
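
    As a concrete illustration of the mechanism described above, here is a minimal sketch (in PyTorch, with hypothetical class, method, and dimension names; not TRAIL's actual code) of how an attention-based action policy can score candidate inferences against an embedding of the prover state:

```python
# Minimal sketch only, not TRAIL's implementation: all names are hypothetical.
import torch
import torch.nn as nn

class AttentionActionPolicy(nn.Module):
    """Scores candidate inference actions against the current prover state."""
    def __init__(self, state_dim: int, action_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.query = nn.Linear(state_dim, hidden_dim)   # embeds the prover state
        self.key = nn.Linear(action_dim, hidden_dim)    # embeds each candidate inference

    def forward(self, state_emb: torch.Tensor, action_embs: torch.Tensor) -> torch.Tensor:
        q = self.query(state_emb)                        # (hidden_dim,)
        k = self.key(action_embs)                        # (num_actions, hidden_dim)
        scores = (k @ q) / (k.shape[-1] ** 0.5)          # scaled dot-product attention scores
        return torch.softmax(scores, dim=0)              # distribution over candidate inferences

# Usage: pick the next inference to apply during saturation.
policy = AttentionActionPolicy(state_dim=64, action_dim=64)
state = torch.randn(64)         # embedding of the processed/unprocessed clause sets
actions = torch.randn(10, 64)   # embeddings of 10 candidate inferences
probs = policy(state, actions)
next_action = torch.multinomial(probs, 1).item()
```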

    Learning to Prove Theorems by Learning to Generate Theorems

    We consider the task of automated theorem proving, a key AI task. Deep learning has shown promise for training theorem provers, but there are limited human-written theorems and proofs available for supervised learning. To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover. Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover and advances the state of the art of automated theorem proving in Metamath. Code is available at https://github.com/princeton-vl/MetaGen.
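
    The overall training scheme is easy to sketch. The snippet below is an illustration only (not the MetaGen code; the generator, prover, and dataset objects are placeholder interfaces): synthetic theorem/proof pairs sampled from a learned generator are simply mixed with the scarce human-written data before supervised training of the prover:

```python
# Illustrative placeholder code, not the MetaGen implementation:
# `generator`, `prover`, and the data objects are assumed interfaces.

def train_prover(prover, generator, human_data, num_synthetic, epochs):
    # 1) Sample synthetic (theorem, proof) pairs from the learned generator.
    synthetic_data = [generator.sample() for _ in range(num_synthetic)]

    # 2) Mix scarce human-written examples with the synthetic ones.
    training_set = list(human_data) + synthetic_data

    # 3) Supervised training of the prover on the combined data,
    #    e.g. imitation of the proof steps for each theorem.
    for _ in range(epochs):
        for theorem, proof in training_set:
            prover.update(theorem, proof)
    return prover
```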

    Learning to Find Proofs and Theorems by Learning to Refine Search Strategies: The Case of Loop Invariant Synthesis

    We propose a new approach to automated theorem proving in which an AlphaZero-style agent self-trains to refine a generic high-level expert strategy expressed as a nondeterministic program. An analogous teacher agent self-trains to generate tasks of suitable relevance and difficulty for the learner. This allows leveraging minimal amounts of domain knowledge to tackle problems for which training data is unavailable or hard to synthesize. As a specific illustration, we consider loop invariant synthesis for imperative programs and use neural networks to refine both the teacher and solver strategies.
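
    The following sketch (with invented names, not the paper's code) illustrates the overall idea: a generic nondeterministic strategy exposes choice points, a learned policy resolves them, and the choice traces of successfully solved teacher-generated tasks become training data for that policy:

```python
# Illustrative sketch only; the task fields, the checker, and the policy
# interface below are assumptions, not the paper's actual API.

def solver_strategy(task, choose):
    """Generic expert strategy for invariant synthesis: at each choice point,
    pick one candidate invariant and check it, within a fixed budget."""
    for _ in range(task["budget"]):
        inv = choose(task["candidate_invariants"])  # learned policy resolves the choice
        if task["check"](inv):                      # e.g. a call to an SMT-based checker
            return inv
    return None

def self_training_round(tasks, policy):
    """One AlphaZero-style round: run the strategy on teacher-generated tasks
    and keep the choice traces that led to a solution as training targets."""
    experience = []
    for task in tasks:
        trace = []
        def choose(options):
            pick = policy(task, options)            # scores the options, returns one
            trace.append((task, options, pick))
            return pick
        if solver_strategy(task, choose) is not None:
            experience.extend(trace)                # reinforce successful choices
    return experience
```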

    Conjectures, tests and proofs: An overview of theory exploration

    A key component of mathematical reasoning is the ability to formulate interesting conjectures about a problem domain at hand. In this paper, we give a brief overview of a theory exploration system called QuickSpec, which is able to automatically discover interesting conjectures about a given set of functions. QuickSpec works by interleaving term generation with random testing to form candidate conjectures. This is made tractable by starting from small term sizes and ensuring that only terms that are irreducible with respect to already discovered conjectures are considered. QuickSpec has been successfully applied to generate lemmas for automated inductive theorem proving as well as to generate specifications of functional programs. We give an overview of typical use cases of QuickSpec and demonstrate how to easily connect it to a theorem prover of the user’s choice.
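
    The core loop is easy to illustrate. Below is a toy Python re-implementation of the idea (QuickSpec itself is a Haskell library, and this sketch omits its pruning of terms that are reducible by already-discovered laws): terms are enumerated by increasing size, evaluated on random test inputs, and a candidate conjecture is emitted whenever a new term behaves like one already seen:

```python
# Toy illustration of QuickSpec's approach; not its API or implementation.
import random
from itertools import product

funcs = {"rev": lambda xs: list(reversed(xs)), "sort": sorted, "id": lambda xs: list(xs)}
tests = [[random.randint(0, 9) for _ in range(random.randint(0, 5))] for _ in range(100)]

def behaviour(term):
    """Evaluate a composition of functions (applied right to left) on all test inputs."""
    def run(xs):
        for f in reversed(term):
            xs = funcs[f](xs)
        return tuple(xs)
    return tuple(run(xs) for xs in tests)

representatives = {}              # observed behaviour -> smallest term exhibiting it
for size in range(1, 3):          # start from small term sizes
    for term in product(funcs, repeat=size):
        sig = behaviour(term)
        if sig in representatives:
            # Two terms agree on every random test: conjecture that they are equal.
            print("conjecture:", ".".join(term), "=", ".".join(representatives[sig]))
        else:
            representatives[sig] = term
```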

    ML + FV = ♡? A Survey on the Application of Machine Learning to Formal Verification

    Formal Verification (FV) and Machine Learning (ML) can seem incompatible due to their opposite mathematical foundations and their use in real-life problems: FV mostly relies on discrete mathematics and aims at ensuring correctness; ML often relies on probabilistic models and consists of learning patterns from training data. In this paper, we postulate that they are complementary in practice, and explore how ML helps FV in its classical approaches: static analysis, model-checking, theorem-proving, and SAT solving. We draw a landscape of the current practice and catalog some of the most prominent uses of ML inside FV tools, thus offering a new perspective on FV techniques that can help researchers and practitioners to better locate the possible synergies. We discuss lessons learned from our work, point to possible improvements, and offer visions for the future of the domain in the light of the science of software and systems modeling.

    Emancipating Agents: Need Schrödinger’s Cat be let into the Chinese Room?
