
    ENIGMA: Efficient Learning-based Inference Guiding Machine

    ENIGMA is a learning-based method for guiding given-clause selection in saturation-based theorem provers. Clauses from many proof searches are classified as positive or negative based on their participation in the proofs. An efficient classification model is trained on this data using a fast feature-based characterization of the clauses. The learned model is then tightly linked with the core prover and used as the basis of a new parameterized evaluation heuristic that provides fast ranking of all generated clauses. The approach is evaluated on the E prover and the CASC 2016 AIM benchmark, showing a large increase in E's performance. Comment: Submitted to LPAR 201
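    The core idea — extract cheap features from each clause, train a classifier on proof participation, then rank generated clauses by the learned score — can be sketched as follows. This is a minimal illustration, not ENIGMA's actual feature scheme or learner: real ENIGMA uses term-walk features and an efficient linear model, while this sketch uses a bag-of-symbols feature map and a perceptron; all names here are hypothetical.

```python
from collections import Counter

def clause_features(clause):
    """Bag-of-symbols features: count each token in the clause string.
    Illustrative only; ENIGMA uses richer term-walk features."""
    return Counter(clause.replace("(", " ").replace(")", " ").replace(",", " ").split())

def train_perceptron(examples, epochs=20):
    """examples: list of (clause, label), label +1 (used in a proof) / -1 (not)."""
    w = Counter()
    for _ in range(epochs):
        for clause, label in examples:
            f = clause_features(clause)
            score = sum(w[t] * c for t, c in f.items())
            if score * label <= 0:            # misclassified: perceptron update
                for t, c in f.items():
                    w[t] += label * c
    return w

def rank(clauses, w):
    """Rank generated clauses; the prover would prefer high-scoring ones."""
    return sorted(clauses,
                  key=lambda cl: -sum(w[t] * c
                                      for t, c in clause_features(cl).items()))
```

    In the real system this ranking is wired into the prover's clause-evaluation heuristic rather than applied as a post-hoc sort.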

    Graph Sequence Learning for Premise Selection

    Premise selection is crucial for large-theory reasoning, as the sheer size of the problems quickly leads to resource starvation. This paper proposes a premise selection approach inspired by the domain of image captioning, where language models automatically generate a suitable caption for a given image. Likewise, we attempt to generate the sequence of axioms required to construct the proof of a given problem. This is achieved by combining a pre-trained graph neural network with a language model. We evaluated different configurations of our method and achieve a 17.7% improvement over the baseline. Comment: 17 page
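    The caption-style generation loop can be sketched as greedy decoding: given an encoding of the conjecture, repeatedly emit the axiom the model scores highest in the current context. This is a hypothetical simplification — the paper combines a pre-trained GNN with a language model, whereas here `score` is an opaque stand-in for that combined scorer and decoding is greedy rather than beam search.

```python
def generate_premises(conjecture, axioms, score, max_len=5):
    """Greedy caption-style decoding: repeatedly append the axiom that
    scores highest given the conjecture and the sequence so far.
    `score(conjecture, seq, axiom)` stands in for the GNN + LM scorer."""
    seq = []
    for _ in range(max_len):
        candidates = [a for a in axioms if a not in seq]
        if not candidates:
            break
        seq.append(max(candidates, key=lambda a: score(conjecture, seq, a)))
    return seq
```

    A toy scorer that prefers axioms sharing name tokens with the conjecture already produces plausible orderings; the learned scorer replaces this heuristic with embeddings.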

    MizAR 60 for Mizar 50

    As a present to Mizar on its 50th anniversary, we develop an AI/TP system that automatically proves about 60% of the Mizar theorems in the hammer setting. We also automatically prove 75% of the Mizar theorems when the automated provers are helped by using only the premises used in the human-written Mizar proofs. We describe the methods and large-scale experiments leading to these results. This includes in particular the E and Vampire provers, their ENIGMA and Deepire learning modifications, a number of learning-based premise selection methods, and the incremental loop that interleaves growing a corpus of millions of ATP proofs with training increasingly strong AI/TP systems on them. We also present a selection of Mizar problems that were proved automatically.

    Automated Theorem Proving for Metamath


    ProofWatch: Watchlist Guidance for Large Theories in E

    A watchlist (also known as a hint list) is a mechanism that allows related proofs to guide the proof search for a new conjecture. This mechanism has been used with the Otter and Prover9 theorem provers, both for interactive formalizations and for human-assisted proving of open conjectures in small theories. In this work we explore the use of watchlists in large theories coming from first-order translations of large ITP libraries, aiming to improve hammer-style automation by smarter internal guidance of the ATP systems. In particular, we (i) design watchlist-based clause evaluation heuristics inside the E ATP system, and (ii) develop new proof-guiding algorithms that load many previous proofs inside the ATP and focus the proof search using a dynamically updated notion of proof matching. The methods are evaluated on a large set of problems coming from the Mizar library, showing significant improvement over E's standard portfolio of strategies, and also over the previous best set of strategies invented for Mizar by evolutionary methods. Comment: 19 pages, 10 tables, submitted to ITP 2018 at FLO
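    The "dynamically updated proof matching" idea can be sketched as follows: track, for each loaded previous proof, what fraction of its clauses the current search has already matched, and discount the evaluation of clauses lying on well-advanced proofs so the search is pulled toward completing them. The ratio-based bonus and the 0.5 discount factor below are illustrative assumptions, not E's actual heuristic; matching is simplified to set membership where the real system uses subsumption.

```python
def watchlist_progress(watchlists, matched):
    """watchlists: dict mapping a prior proof's name to its set of clauses.
    matched: set of clauses matched so far in the current search.
    Returns the completion ratio of each watched proof."""
    return {name: len(cls & matched) / len(cls)
            for name, cls in watchlists.items()}

def evaluate(clause, base_weight, watchlists, matched):
    """Clause evaluation with a watchlist bonus (lower = preferred).
    Clauses on nearly-completed watched proofs get a larger discount."""
    hits = [name for name, cls in watchlists.items() if clause in cls]
    if not hits:
        return base_weight
    progress = watchlist_progress(watchlists, matched | {clause})
    best = max(progress[name] for name in hits)
    return base_weight * (1.0 - 0.5 * best)   # illustrative discount factor
```

    Because the progress ratios change as the search matches more clauses, the same clause can receive a steadily better evaluation as its parent proof nears completion.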

    Cascade Optimisation of Battery Electric Vehicle Powertrains

    Motivated by challenges in the motor manufacturing industry, we present a solution that reduces computation time and improves minimisation performance in the optimisation of battery electric vehicle powertrains. We propose a cascade optimisation method that takes advantage of two different vehicle models: the proprietary YASA MATLAB® vehicle model and a Python machine-learning-based vehicle model derived from it. Gearbox type, powertrain configuration and motor parameters are input variables to the objective function explored in this work, while constraints on acceleration time and top speed must be met. Combining the two models in a constrained-optimisation genetic algorithm both reduced the computation time required and achieved better target values for minimising total vehicle cost than either the proprietary model or the machine learning model alone. The coarse-to-fine approach used in the cascade optimisation was mainly responsible for the improved result: by using the final population of the machine-learning optimisation as the initial population of the subsequent simulation-based minimisation, the initially time-consuming search for a population satisfying all domain constraints was practically eliminated. The obtained results showed that the cascade optimisation reduced computation time by 53% while achieving a minimisation value 14% lower than the YASA Vehicle Model Optimisation alone.
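    The coarse-to-fine seeding step can be sketched with a toy genetic algorithm: run the GA first on a cheap surrogate objective, then restart it on the expensive objective using the surrogate run's final population as the initial population. Both objective functions below are hypothetical stand-ins for the ML vehicle model and the MATLAB simulation; the GA itself is a deliberately minimal version.

```python
import random

def ga(fitness, pop, generations=30, mut=0.1):
    """Toy genetic algorithm minimising `fitness` over lists of floats.
    Keeps the better half as parents and breeds averaged, mutated children."""
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: len(pop) // 2]
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, mut)
                             for x, y in zip(a, b)])
        pop = parents + children
    return sorted(pop, key=fitness)

# Hypothetical stand-ins for the two vehicle models:
surrogate = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2   # fast ML model
expensive = lambda v: (v[0] - 3.1) ** 2 + (v[1] + 0.9) ** 2   # slow simulation

random.seed(0)
init = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(20)]
coarse = ga(surrogate, init)                   # stage 1: cheap, coarse search
fine = ga(expensive, coarse, generations=10)   # stage 2: seeded by stage 1
```

    Because stage 2 starts from a population already concentrated near the surrogate's optimum, it spends its expensive evaluations refining rather than exploring — the mechanism the abstract credits for the 53% time reduction.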

    Parameterised bounds on the sum of variables in time-series constraints

    For two families of time-series constraints with the aggregator Sum and the features one and width, we provide parameterised sharp lower and upper bounds on the sum of the time-series variables with respect to these families of constraints. This is important in many applications, as this sum represents the cost, for example the energy used or the manpower effort expended. We use these bounds not only to gain a priori knowledge of the overall cost of a problem, but also on increasing prefixes and suffixes of the variables to avoid infeasible partial assignments under a given cost budget. Experiments show that the bounds drastically reduce the effort needed to find cost-limited solutions.
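    The prefix/suffix pruning idea can be illustrated with a plain backtracking search: precompute a lower bound on the sum of each suffix of variables, and cut off any partial assignment whose prefix sum plus that bound already exceeds the budget. This sketch uses the trivial suffix bound (sum of domain minima); the paper's contribution is sharp parameterised bounds under the time-series constraints, which prune far more.

```python
def solutions(domains, budget):
    """Enumerate assignments (one value per variable) whose total is <= budget,
    pruning prefixes with suffix lower bounds (here: sum of domain minima)."""
    suffix_lb = [0] * (len(domains) + 1)
    for i in range(len(domains) - 1, -1, -1):
        suffix_lb[i] = suffix_lb[i + 1] + min(domains[i])

    out = []
    def search(i, partial, total):
        if total + suffix_lb[i] > budget:   # even the cheapest suffix busts the budget
            return
        if i == len(domains):
            out.append(list(partial))
            return
        for v in domains[i]:
            partial.append(v)
            search(i + 1, partial, total + v)
            partial.pop()
    search(0, [], 0)
    return out
```

    A sharper lower bound slots into `suffix_lb` unchanged; the tighter it is, the earlier infeasible prefixes are rejected.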