
    The use of data-mining for the automatic formation of tactics

    This paper discusses the use of data-mining for the automatic formation of tactics. It was presented at the Workshop on Computer-Supported Mathematical Theory Development held at IJCAR in 2004. The aim of this project is to evaluate the applicability of data-mining techniques to the automatic formation of tactics from large corpora of proofs. We data-mine information from large proof corpora to find commonly occurring patterns. These patterns are then evolved into tactics using genetic programming techniques.
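
    As a rough illustration of the kind of pipeline described above, the sketch below mines frequently occurring sub-sequences of proof steps from a hypothetical corpus of tactic-level proof traces. The corpus format, the pattern-length and support thresholds, and the function names are all assumptions made for illustration; the genetic-programming stage that evolves the mined patterns into tactics is only indicated by a comment.

```python
# Minimal sketch (assumed data format): mine commonly occurring sub-sequences
# of proof steps from a corpus of tactic-level proof traces.
from collections import Counter

# Hypothetical corpus: each proof is a list of tactic names in application order.
proof_corpus = [
    ["intro", "rewrite", "simplify", "induction", "rewrite", "simplify"],
    ["intro", "case_split", "rewrite", "simplify"],
    ["induction", "rewrite", "simplify", "auto"],
]

def mine_patterns(corpus, max_len=3, min_support=2):
    """Count contiguous sub-sequences (n-grams) of proof steps and keep
    those that occur in at least `min_support` proofs."""
    support = Counter()
    for proof in corpus:
        seen = set()
        for n in range(2, max_len + 1):
            for i in range(len(proof) - n + 1):
                seen.add(tuple(proof[i:i + n]))
        support.update(seen)  # count each pattern at most once per proof
    return {pat: count for pat, count in support.items() if count >= min_support}

patterns = mine_patterns(proof_corpus)
print(patterns)  # e.g. {('rewrite', 'simplify'): 3, ('induction', 'rewrite'): 2, ...}

# In the paper, patterns like these seed a genetic-programming search that
# evolves them into reusable tactics, scored by replaying candidates on the corpus.
```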

    Application of proof plans to computer configuration problems


    Machine learning and its applications in reliability analysis systems

    In this thesis, we are interested in exploring some aspects of Machine Learning (ML) and its application in Reliability Analysis systems (RAs). We begin by investigating some ML paradigms and their techniques, go on to discuss the possible applications of ML in improving the performance of RAs, and lastly give guidelines for the architecture of learning RAs. Our survey of ML covers both neural network learning and symbolic learning. Within symbolic learning, five types of learning and their applications are discussed: rote learning, learning from instruction, learning from analogy, learning from examples, and learning from observation and discovery. The Reliability Analysis systems (RAs) presented in this thesis are mainly designed for maintaining plant safety, supported by two functions: a risk analysis function, i.e., failure mode effect analysis (FMEA), and a diagnosis function, i.e., real-time fault location (RTFL). Three approaches to creating the RAs have been discussed. According to the results of our survey, we suggest that currently the best design of RAs is to embed a model-based RA, i.e., MORA (as software), in a neural-network-based computer system (as hardware). However, there are still improvements that can be made through the application of Machine Learning. By adding the 'learning element', MORA becomes the learning MORA (La MORA) system: a learning Reliability Analysis system with the power of automatic knowledge acquisition and inconsistency checking, among other capabilities. To conclude the thesis, we propose an architecture for La MORA.
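
    The La MORA architecture is only outlined here, so the following is a loose, hypothetical sketch of the general idea of attaching a 'learning element' to a model-based reliability-analysis knowledge base: it performs simple model-based fault location against FMEA-style rules, acquires new rules from observed failure examples, and flags inconsistencies with existing knowledge. All class, rule, and field names are invented for illustration and are not taken from the thesis.

```python
# Hypothetical sketch: a model-based reliability-analysis knowledge base with
# a "learning element" for knowledge acquisition and inconsistency checking.
# Names and rules are invented for illustration only.

class LearningRA:
    def __init__(self):
        # FMEA-style rules: failure mode -> expected observable effect
        self.rules = {"pump_seal_wear": "coolant_leak"}

    def diagnose(self, observed_effect):
        """Model-based diagnosis (real-time fault location): return the
        failure modes whose known effect matches the observation."""
        return [mode for mode, effect in self.rules.items()
                if effect == observed_effect]

    def learn_from_example(self, failure_mode, observed_effect):
        """Learning element: acquire a new rule from an observed example,
        checking it against existing knowledge for inconsistency first."""
        known = self.rules.get(failure_mode)
        if known is not None and known != observed_effect:
            raise ValueError(
                f"inconsistent with existing rule: {failure_mode} -> {known}")
        self.rules[failure_mode] = observed_effect

ra = LearningRA()
ra.learn_from_example("valve_stuck_closed", "pressure_rise")
print(ra.diagnose("pressure_rise"))  # ['valve_stuck_closed']
```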

    Knowledge based approach to process engineering design


    The Automation Of Proof By Mathematical Induction

    Chapter appears in the Handbook of Automated Reasoning, edited by Alan Robinson and Andrei Voronkov, ISBN: 978-0-444-50813-3. This paper is a chapter of the Handbook of Automated Reasoning edited by Robinson and Voronkov. It describes techniques for automated reasoning in theories containing rules of mathematical induction. Firstly, inductive reasoning is defined and its importance for reasoning about any form of repetition is stressed. Then the special search problems that arise in inductive theories are explained, followed by descriptions of the heuristic methods that have been devised to solve these problems.
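
    As a small, self-contained illustration of the kind of inductive reasoning such techniques automate (not taken from the chapter), the Lean snippet below proves 0 + n = n over the natural numbers: the base case holds by computation, while the step case needs the induction hypothesis.

```lean
-- Illustrative only: 0 + n = n is not definitional for Lean's Nat
-- (addition recurses on the second argument), so induction on n is needed.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                           -- base case: 0 + 0 = 0
  | succ k ih => rw [Nat.add_succ, ih]    -- step: 0 + (k+1) rewrites to (0 + k) + 1, then use ih
```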

    Using middle-out reasoning to guide inductive theorem proving


    Implementation methodology for using concurrent and collaborative approaches for theorem provers, with case studies of SAT and LCF style provers

    Theorem provers are faced with the challenges of size and complexity, fueled by an increasing range of applications. The use of concurrent/distributed programming paradigms to engineer better theorem provers merits serious investigation, as it provides more processing power and opportunities for implementing novel approaches to address theorem proving tasks hitherto infeasible in a sequential setting. Investigation of these opportunities for two diverse theorem prover settings, with an emphasis on desirable implementation criteria, is the core focus of this thesis. Concurrent programming is notoriously error prone, hard to debug and hard to evaluate. Thus, implementation approaches which promote easy prototyping, portability, incremental development and effective isolation of design and implementation can greatly aid experimentation with concurrent techniques for specific theorem proving tasks. In this thesis, we have explored one such approach by using Alice ML, a functional programming language with support for concurrency and distribution, to implement the prototypes, and have used programming abstractions to encapsulate the implementations of the concurrent techniques used. The utility of this approach is illustrated via proof-of-concept prototypes of concurrent systems for two diverse case studies of theorem proving, the propositional satisfiability problem (SAT) and LCF style (first-order) theorem proving, addressing some previously unexplored parallelisation opportunities for each, as follows.

    SAT: We have developed a novel hybrid approach for SAT and implemented a prototype for it: DPLL-Stalmarck. It uses two complementary algorithms for SAT, DPLL and Stalmarck’s. The two solvers run asynchronously, and dynamic information exchange is used for co-operative solving. Interaction of the solvers has been encapsulated as a programming abstraction. Compared to the standalone DPLL solver, DPLL-Stalmarck shows significant performance gains for two of the three problem classes considered and comparable behaviour otherwise. As an exploratory research effort, we have developed a novel algorithm, Concurrent Stalmarck, by applying concurrent techniques to the Stalmarck algorithm, and implemented a proof-of-concept prototype. The implementation of the saturation technique of the Stalmarck algorithm in a parallel setting, as realised in Concurrent Stalmarck, has been encapsulated as a programming abstraction.

    LCF: Provision of programmable concurrent primitives enables customisation of concurrent techniques to specific theorem proving scenarios. In this case study, we have developed a multilayered approach to support programmable, sound extensions for an LCF prover: programming abstractions are used to implement the concurrent techniques; these are used to develop novel tacticals (control structures for applying tactics) that incorporate concurrent techniques; and these in turn are used to develop novel proof search procedures. This approach has been implemented in a prototypical LCF style first-order prover, using Alice ML. The new tacticals developed are: fastest-first; distributed composition; crossTalk, a novel tactic which uses dynamic, collaborative information exchange to handle unification across multiple sub-goals with shared meta-variables; and a new tactic performing simultaneous proof-refutation attempts on propositional (sub-)goals by invoking an external SAT solver (from the SAT case study) as a counter-example finder. Examples of concrete theorem proving scenarios are provided, demonstrating the utility of these extensions. The synthesis of a variety of automatic proof search procedures has been demonstrated, illustrating the scope of programmability and customisation enabled by our multilayered approach.
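
    As a rough flavour of the 'fastest-first' idea, outside Alice ML and greatly simplified, the sketch below races two stand-in provers on the same goal in separate threads and returns whichever answer arrives first. The prover functions and the goal representation are placeholders, not the thesis's actual interfaces.

```python
# Simplified sketch of a "fastest-first" style combinator: apply several
# provers to one goal concurrently and take the first result that comes back.
# The prover functions and goal type are placeholders, not the thesis's API.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import time

def dpll_prover(goal):
    time.sleep(0.2)  # stand-in for real search effort
    return ("dpll", f"result for {goal}")

def stalmarck_prover(goal):
    time.sleep(0.1)
    return ("stalmarck", f"result for {goal}")

def fastest_first(goal, provers):
    """Run all provers on the goal concurrently; return the first answer."""
    with ThreadPoolExecutor(max_workers=len(provers)) as pool:
        futures = [pool.submit(p, goal) for p in provers]
        done, not_done = wait(futures, return_when=FIRST_COMPLETED)
        for f in not_done:
            f.cancel()  # best effort: discard the slower attempts
        return next(iter(done)).result()

print(fastest_first("(p or not p)", [dpll_prover, stalmarck_prover]))
```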

    Augmented Language Models: a Survey

    This survey reviews works in which language models (LMs) are augmented with reasoning skills and the ability to use tools. The former is defined as decomposing a potentially complex task into simpler subtasks, while the latter consists in calling external modules such as a code interpreter. LMs can leverage these augmentations separately or in combination via heuristics, or learn to do so from demonstrations. While adhering to a standard missing tokens prediction objective, such augmented LMs can use various, possibly non-parametric external modules to expand their context processing ability, thus departing from the pure language modeling paradigm. We therefore refer to them as Augmented Language Models (ALMs). The missing token objective allows ALMs to learn to reason, use tools, and even act, while still performing standard natural language tasks and even outperforming most regular LMs on several benchmarks. In this work, after reviewing current advances in ALMs, we conclude that this new research direction has the potential to address common limitations of traditional LMs such as interpretability, consistency, and scalability issues.
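
    To make the tool-use loop concrete, here is a deliberately minimal sketch of the pattern the survey describes: the LM either returns a final answer or requests a call to an external module (a calculator standing in for, e.g., a code interpreter); the call is executed and its result is appended to the context before the LM is queried again. The query_lm function and the CALL/ANSWER protocol are placeholders, not an API from the survey.

```python
# Minimal sketch of an augmented-LM tool-use loop. `query_lm` is a placeholder
# for a real language model; it is faked here so the example runs standalone.
import re

def query_lm(context):
    # Placeholder LM: requests the calculator once, then gives an answer.
    if "TOOL_RESULT" not in context:
        return "CALL calculator: 37 * 41"
    return "ANSWER: 37 * 41 = " + context.rsplit("TOOL_RESULT: ", 1)[-1]

def calculator(expression):
    # External module standing in for a code interpreter / calculator tool.
    return str(eval(expression, {"__builtins__": {}}, {}))

def run_augmented_lm(prompt, max_steps=5):
    context = prompt
    output = ""
    for _ in range(max_steps):
        output = query_lm(context)
        call = re.match(r"CALL (\w+): (.+)", output)
        if call and call.group(1) == "calculator":
            result = calculator(call.group(2))
            context += f"\nTOOL_RESULT: {result}"  # feed tool output back in
        else:
            return output
    return output

print(run_augmented_lm("What is 37 * 41?"))  # ANSWER: 37 * 41 = 1517
```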