
    Implementing Theorem Provers in Logic Programming

    Logic programming languages have many characteristics that suggest they should serve as good implementation languages for theorem provers. For example, they are based on search and unification, which are also fundamental to theorem proving. We show how an extended logic programming language can be used to implement theorem provers and other aspects of proof systems for a variety of logics. In this language first-order terms are replaced with simply-typed λ-terms, and thus unification becomes higher-order unification. Also, implication and universal quantification are allowed in goals. We illustrate that inference rules can be specified very naturally, and that the primitive search operations of this language correspond to those needed when searching for proofs. We argue on several levels that this extended logic programming language provides a very suitable environment for implementing tactic-style theorem provers. Such theorem provers provide extensive capabilities for integrating techniques for automated theorem proving into an interactive proof environment. We are also concerned with representing proofs as objects. We illustrate how such objects can be constructed and manipulated in the logic programming setting. Finally, we propose extensions to tactic-style theorem provers, working toward the goal of an interactive theorem proving environment that provides the user with many tools and techniques for building and manipulating proofs, and that integrates sophisticated capabilities for automated proof discovery. Many of the theorem provers we present have been implemented in the higher-order logic programming language λProlog.
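    To make the tactic-style idea concrete, here is a minimal Python sketch (ours, not the paper's; the paper's provers are written in λProlog) of tactics as functions from a goal to a list of subgoals, combined by tacticals that mirror the primitive search operations of the logic programming language. The names Goal, then, and or_else are hypothetical.

```python
# Illustrative only: tactics map a goal to subgoals; tacticals combine tactics.
from dataclasses import dataclass

@dataclass(frozen=True)
class Goal:
    """A sequent-like goal: prove `conclusion` from `hypotheses`."""
    hypotheses: tuple
    conclusion: object

# A tactic returns a list of subgoals, or raises ValueError if it does not apply.
def assumption(goal):
    if goal.conclusion in goal.hypotheses:
        return []                        # goal closed, no subgoals remain
    raise ValueError("assumption does not apply")

def intro_implies(goal):
    kind, lhs, rhs = goal.conclusion     # expects ('implies', A, B)
    if kind != "implies":
        raise ValueError("not an implication")
    return [Goal(goal.hypotheses + (lhs,), rhs)]

# Tacticals: conjunctive goals correspond to `then`, backtracking to `or_else`.
def then(t1, t2):
    return lambda g: [sg2 for sg1 in t1(g) for sg2 in t2(sg1)]

def or_else(t1, t2):
    def tac(g):
        try:
            return t1(g)
        except ValueError:
            return t2(g)
    return tac

# Prove A -> A: introduce the hypothesis, then close by assumption.
goal = Goal((), ("implies", "A", "A"))
print(then(intro_implies, or_else(assumption, intro_implies))(goal))  # -> []
```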

    Scalable Logic Defined Static Analysis

    Logic languages such as Datalog have been proposed as a method for specifying flexible and customisable static analysers. Using Datalog, various classes of static analyses can be expressed precisely and succinctly, requiring fewer lines of code than hand-crafted analysers. In this paradigm, a static analysis specification is encoded by a set of declarative logic rules, and an off-the-shelf solver is used to compute the result of the static analysis. Unfortunately, when large-scale analyses are employed, Datalog-based tools currently fail to scale in comparison to hand-crafted static analysers. As a result, Datalog-based analysers have largely remained an academic curiosity rather than industrially respected tools. This thesis outlines our efforts in understanding the sources of performance limitations in Datalog-based tools. We propose a novel evaluation technique that is predicated on the fact that, in the case of static analysis, the logical specification is a design-time artefact and hence does not change during evaluation. Thus, instead of directly evaluating Datalog rules, our approach leverages partial evaluation to synthesise a specialised static analyser from these rules. This approach enables a novel indexing optimisation that automatically selects an optimal set of indexes to speed up evaluation and minimise memory usage in the Datalog computation. Lastly, we explore the case of more expressive logics, namely constrained Horn clauses and their use in proving the correctness of programs. We identify a bottleneck in various symbolic evaluation algorithms that centre around Craig interpolation. We improve these evaluation algorithms by proposing a method of guiding theorem provers to discover relevant interpolants with respect to the input logic specification. The culmination of our work is implemented in a general-purpose, high-performance tool called Soufflé. We describe Soufflé and evaluate its performance experimentally, showing significant improvement over alternative techniques and its scalability in real-world industrial use cases.
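    As a rough illustration of the Datalog paradigm (a Python sketch of our own; Soufflé itself synthesises specialised C++ and selects indexes automatically), here is a reachability analysis, i.e. the rules reach(X,Y) :- edge(X,Y) and reach(X,Z) :- reach(X,Y), edge(Y,Z), evaluated to a fixpoint with a hand-rolled index on the join column:

```python
from collections import defaultdict

# EDB: edge(a, b) facts, e.g. control-flow or call-graph edges.
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def reachability(edges):
    # Index edge facts by their first column: Y -> set of Z with edge(Y, Z).
    by_src = defaultdict(set)
    for y, z in edges:
        by_src[y].add(z)

    reach = set(edges)                 # rule: reach(X, Y) :- edge(X, Y).
    changed = True
    while changed:                     # naive fixpoint iteration
        changed = False
        for x, y in list(reach):
            for z in by_src[y]:        # indexed join on variable Y
                if (x, z) not in reach:
                    reach.add((x, z))  # rule: reach(X, Z) :- reach(X, Y), edge(Y, Z).
                    changed = True
    return reach

print(sorted(reachability(edges)))
# [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```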

    Proceedings of the Automated Reasoning Workshop (ARW 2019)

    Preface. This volume contains the proceedings of ARW 2019, the twenty-sixth Workshop on Automated Reasoning (2nd-3rd September 2019), hosted by the Department of Computer Science, Middlesex University, England (UK). This annual workshop brings together, for a two-day intensive programme, researchers from different areas of automated reasoning; it covers both traditional and emerging topics, and disseminates both achieved results and work in progress. During informal discussions at workshop sessions, the attendees, whether established in the Automated Reasoning community or at the early stages of their research careers, gain invaluable feedback from colleagues. ARW always looks at ways of strengthening links between academia, industry and government, and between theoretical and practical advances. The 26th ARW is affiliated with the TABLEAUX 2019 conference. These proceedings contain fourteen extended abstracts contributed by the participants of the workshop, assembled in the order of their presentation at the workshop. The abstracts cover a wide range of topics, including the development of reasoning techniques for Agents, Model-Checking, Proof Search for classical and non-classical logics, Description Logics, the development of Intelligent Prediction Models, the application of Machine Learning to theorem proving, and applications of AR in Cloud Computing and Networking. I would like to thank the members of the ARW Organising Committee for their advice and assistance. I would also like to thank the organisers of TABLEAUX/FroCoS 2019, and in particular Andrei Popescu, the TABLEAUX Conference Chair, for the enormous work related to the organisation of this affiliation. I would also like to thank Natalia Yerashenia for helping to prepare these proceedings. London, September 2019. Alexander Bolotov.

    Controlled and effective interpolation

    Model checking is a well-established technique for verifying systems exhaustively and automatically. The state space explosion, known as the main obstacle to model checking scalability, has been successfully approached by symbolic model checking, which represents programs using logic, usually at the level of propositional or first-order theories. Craig interpolation is one of the most successful abstraction techniques used in symbolic methods. Interpolants can be efficiently generated from proofs of unsatisfiability, and have been used as a means of over-approximation to generate inductive invariants, refinement predicates, and function summaries. However, interpolation is still not fully understood. For several theories it is only possible to generate one interpolant, giving the interpolation-based application no chance of further optimization via interpolation. For the theories that have interpolation systems able to generate different interpolants, it is not understood what makes one interpolant better than another, or how to generate the most suitable ones for a particular verification task. The goal of this thesis is to address the problems of how to generate multiple interpolants for theories that still lack this flexibility in their interpolation algorithms, and how to aim at good interpolants. This thesis extends the state of the art by introducing novel interpolation frameworks for different theories. For propositional logic, this work provides a thorough theoretical analysis showing which properties are desirable in a labeling function for the Labeled Interpolation Systems (LIS) framework. The Proof-Sensitive labeling function is presented, and we prove that it generates interpolants with the smallest number of Boolean connectives in the entire LIS framework. Two variants that aim at controlling the logical strength of propositional interpolants while maintaining a small size are given. The new interpolation algorithms are compared to previous ones from the literature in different model checking settings, showing that they consistently lead to better overall verification performance. The Equalities and Uninterpreted Functions (EUF) interpolation system, presented in this thesis, is a duality-based interpolation framework capable of generating multiple interpolants for a single proof of unsatisfiability, and it provides control over the logical strength of the interpolants it generates using labeling functions. The labeling functions can be compared theoretically with respect to their strength, and we prove that two of them generate the interpolants with the smallest number of equalities. Our experiments follow the theory, showing that the generated interpolants indeed have different logical strength. We combine propositional and EUF interpolation in a model checking setting, and show that the strength of the interpolation algorithms for different theories has to be aligned in order to generate smaller interpolants. This work also introduces the Linear Real Arithmetic (LRA) interpolation system, an interpolation framework for LRA. The framework is able to generate infinitely many interpolants of different logical strength using the duality of interpolants. The strength of the LRA interpolants can be controlled by a normalized strength factor, which makes it straightforward for an interpolation-based application to choose the level of strength it wants for the interpolants. Our experiments with the LRA-interpolation system and a model checker show that it is very important for the application to be able to fine-tune the strength of the LRA interpolants in order to achieve optimal performance. The interpolation frameworks were implemented and form the interpolation module in OpenSMT2, an open-source efficient SMT solver. OpenSMT2 has been integrated into the propositional interpolation-based model checkers FunFrog and eVolCheck, and into the first-order interpolation-based model checker HiFrog. This thesis presents real-life model checking experiments using the novel interpolation frameworks and the aforementioned tools, showing the viability and strengths of the techniques.
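    For readers unfamiliar with the underlying notion, here is a small worked example (ours, not from the thesis) of why multiple interpolants of different strength can exist for a single unsatisfiable pair:

```latex
Craig interpolation: for an unsatisfiable conjunction $A \land B$, an
interpolant $I$ satisfies
\[
  A \models I, \qquad I \land B \models \bot, \qquad
  \mathrm{vars}(I) \subseteq \mathrm{vars}(A) \cap \mathrm{vars}(B).
\]
For the propositional pair $A = p \land q$ and $B = \lnot p \land \lnot q$,
four inequivalent interpolants exist, ordered by logical strength along two
chains:
\[
  p \land q \;\models\; p \;\models\; p \lor q,
  \qquad
  p \land q \;\models\; q \;\models\; p \lor q,
\]
so an interpolation-based application can pick anywhere between the strongest
($p \land q$) and the weakest ($p \lor q$) interpolant.
```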

    Description of computer animated images

    Thesis (M.S.V.S.), Massachusetts Institute of Technology, Dept. of Architecture, 1985. Microfiche copy available in Archives and Rotch. Includes bibliographical references (leaves 65-70). The problem of specifying temporal transformations for computer animation production is investigated. Based on this analysis, an interactive animation language is developed which supports both procedural and key-frame animation. It is a flexible software environment for the design and prototyping of animation programs and interfaces. The language is implemented in C within the UNIX operating system, and consists of C-like expressions, built-in functions, and script and track constructs. There is also an escape mechanism for running UNIX commands. C-like expressions are the regular arithmetical, logical, and control-of-flow operations. Built-in functions are C functions incorporated into the language. Scripts are time programs that are executed in parallel to generate animation. Tracks are time variables used to define dynamic animation parameters. A small set of animation tools is also developed to exemplify the system's utilization. These include a three-dimensional geometry model interface library, a spline library, and simple mechanics, collision detection, and inverse kinematics functions. By Luiz Velho.
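    As a rough Python sketch of the script and track constructs described above (the actual language is C-like and runs on UNIX; the class and function names here are ours), a track interpolates a parameter over time, and scripts are time programs advanced in parallel, one step per frame:

```python
class Track:
    """A dynamic animation parameter: linear interpolation between key frames."""
    def __init__(self, keys):            # keys: list of (time, value) pairs
        self.keys = sorted(keys)

    def at(self, t):
        (t0, v0) = self.keys[0]
        for (t1, v1) in self.keys[1:]:
            if t <= t1:                  # interpolate within this segment
                u = (t - t0) / (t1 - t0)
                return v0 + u * (v1 - v0)
            t0, v0 = t1, v1
        return self.keys[-1][1]          # hold the last value after the end

def run_in_parallel(scripts, frames, dt=1.0):
    """Advance every script one step per frame, like parallel time programs."""
    for frame in range(frames):
        t = frame * dt
        for script in scripts:
            script(t)

x = Track([(0.0, 0.0), (10.0, 100.0)])   # move right over 10 time units

def draw_ball(t):
    print(f"t={t:4.1f}  ball at x={x.at(t):6.2f}")

run_in_parallel([draw_ball], frames=5, dt=2.5)
```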

    Towards cooperative software verification with test generation and formal verification

    There are two major methods for software verification: testing and formal verification. To increase our confidence in software on a large scale, we require tools that apply these methods automatically and reliably. Testing with manually written tests is widespread, but there is no standardized format for automatically generated tests for the C programming language. This makes the use and comparison of automated test generators expensive. In addition, testing can never provide full confidence in software: it can show the presence of bugs, but not their absence. In contrast, formal verification uses established, standardized formats and can prove the absence of bugs. Unfortunately, even successful formal-verification techniques suffer from different weaknesses. Compositions of multiple techniques try to combine the strengths of complementary techniques, but such combinations are often designed as cohesive, monolithic units. This makes them inflexible, and replacing their components is costly. To improve on this state of the art, we work towards an off-the-shelf cooperation between verification tools through standardized exchange formats. First, we work towards the standardization of automated test generation for C. We increase the comparability of test generators through a common benchmarking framework and reliable tooling, and provide means to reliably compare the bug-finding capabilities of test generators and formal verifiers. Second, we introduce new concepts for off-the-shelf cooperation between verifiers (both test generators and formal verifiers). We show the flexibility of these concepts through an array of combinations and through an application to incremental verification. We also show how existing, strongly coupled techniques in software verification can be decomposed into stand-alone components that cooperate through clearly defined interfaces and standardized exchange formats. All our work is backed by rigorous implementations of the proposed concepts and thorough experimental evaluations that demonstrate the benefits of our work. Through these means we are able to improve the comparability of automated verifiers, enable cooperation between a large array of existing verifiers, increase the effectiveness of software verification, and create new opportunities for further research on cooperative verification.
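    To illustrate the cooperation idea in miniature (a Python sketch of our own, not the thesis's tooling or its actual exchange formats), one component emits a machine-readable artifact and another consumes it through a clearly defined interface:

```python
import json

def formal_verifier(program):
    """Stand-in for an off-the-shelf verifier; emits a standardized artifact."""
    # Pretend the verifier suspects a division by zero when denominator == 0,
    # but cannot confirm it, so it hands the hint to another tool.
    return json.dumps({"verdict": "unknown", "hint": {"denominator": 0}})

def test_generator(program, artifact):
    """Stand-in for a test generator guided by the exchanged hint."""
    hint = json.loads(artifact)["hint"]
    try:
        program(**hint)                  # execute the suspected input as a test
        return {"verdict": "no bug confirmed"}
    except ZeroDivisionError:
        return {"verdict": "false (bug confirmed)", "test": hint}

def program_under_test(denominator):
    return 100 // denominator

artifact = formal_verifier(program_under_test)          # component 1
print(test_generator(program_under_test, artifact))     # component 2
# -> {'verdict': 'false (bug confirmed)', 'test': {'denominator': 0}}
```

    Because the two components communicate only through the JSON artifact, either one could be swapped for a different off-the-shelf tool that speaks the same format.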

    Metalearning-Informed Competence in Children: Implications for Responsible Brain-Inspired Artificial Intelligence

    This paper offers a novel conceptual framework comprising four essential cognitive mechanisms that operate concurrently and collaboratively to enable metalearning (knowledge and regulation of learning) strategy implementation in young children. A roadmap incorporating the core mechanisms and the associated strategies is presented as an explanation of the developing brain's remarkable cross-context learning competence. The tetrad of fundamental complementary processes is chosen to collectively represent the bare-bones metalearning architecture that can be extended to artificial intelligence (AI) systems emulating brain-like learning and problem-solving skills. Utilizing the metalearning-enabled young mind as a model for brain-inspired computing, this work further discusses important implications for morally grounded AI.

    Global Guidance for Local Generalization in Model Checking

    Get PDF
    SMT-based model checkers, especially IC3-style ones, are currently the most effective techniques for the verification of infinite-state systems. They infer global inductive invariants via local reasoning about a single step of the transition relation of a system, while employing SMT-based procedures, such as interpolation, to mitigate the limitations of local reasoning and allow for better generalization. Unfortunately, these mitigations intertwine model checking with the heuristics of the underlying SMT solver, negatively affecting the stability of model checking. In this paper, we propose to tackle the limitations of locality in a systematic manner. We introduce explicit global guidance into the local reasoning performed by IC3-style algorithms. To this end, we extend the SMT-IC3 paradigm with three novel rules, designed to mitigate fundamental sources of failure that stem from locality. We instantiate these rules for the theory of Linear Integer Arithmetic and implement them on top of the SPACER solver in Z3. Our empirical results show that GSPACER, SPACER extended with global guidance, is significantly more effective than both SPACER and global reasoning alone, and, furthermore, is insensitive to interpolation. (Published in CAV 2020.)
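    As background for the locality discussion (standard IC3/PDR material, not specific to this paper), the "local reasoning" in question is the relative-inductiveness query, which examines only one frame and a single step of the transition relation:

```latex
In IC3-style algorithms, a candidate lemma $\ell$ may be pushed from frame
$F_i$ to frame $F_{i+1}$ only if it is inductive relative to $F_i$:
\[
  F_i \land \ell \land T \;\models\; \ell',
\]
where $T$ is the transition relation and primed symbols denote next-state
variables. Each such query mentions only $F_i$, $\ell$, and one unrolling of
$T$, so any generalization of $\ell$ (e.g., via interpolation) is derived
from purely local information; the paper's global-guidance rules are
designed to compensate for exactly this restricted view.
```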