
    Integrating deductive verification and symbolic execution for abstract object creation in dynamic logic

    We present a fully abstract weakest precondition calculus and its integration with symbolic execution. Our assertion language allows both specifying and verifying properties of objects at the abstraction level of the programming language, abstracting from any specific implementation of object creation. Objects which are not (yet) created never play any role. The corresponding proof theory is discussed and justified formally by soundness theorems. The use of the assertion language and proof rules is illustrated with an example of a linked list reachability property. All proof rules presented are fully implemented in a version of the KeY verification system for Java programs.
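    As a schematic illustration of the style of specification involved (a sketch in textbook dynamic-logic notation, not a rule taken from the paper): list reachability can be defined at the level of object references, and object creation can be handled in the weakest-precondition calculus by quantifying over a fresh, not-yet-created object, so that no assertion ever depends on which concrete reference an allocator returns:

        \mathit{reach}(u, v) \;\equiv\; u = v \;\lor\; \exists w.\; u.\mathit{next} = w \,\land\, \mathit{reach}(w, v) \qquad \text{(least such predicate)}

        \mathit{wp}(x := \mathbf{new}\;C,\ \varphi) \;=\; \forall o.\ \neg\mathit{created}(o) \,\rightarrow\, \varphi[x := o,\ \mathit{created} := \mathit{created} \cup \{o\}]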

    Efficient Iterative Programs with Distributed Data Collections

    Big data programming frameworks have become increasingly important for the development of applications for which performance and scalability are critical. In those complex frameworks, optimizing code by hand is hard and time-consuming, making automated optimization particularly necessary. In order to automate optimization, a prerequisite is to find suitable abstractions to represent programs; for instance, algebras based on monads or monoids to represent distributed data collections. Currently, however, such algebras do not represent recursive programs in a way that allows them to be analyzed or rewritten. In this paper, we extend a monoid algebra with a fixpoint operator for representing recursion as a first-class citizen and show how it enables new optimizations. Experiments with the Spark platform illustrate the performance gains brought by these systematic optimizations.
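    A minimal sketch of the idea in Haskell (the paper works in a monoid algebra compiled to Spark; fixpoint and transitiveClosure below are illustrative names, not the paper's operators): representing recursion as an explicit fixpoint operator over a collection makes recursive queries first-class terms that an optimizer can inspect and rewrite.

        import qualified Data.Set as Set

        -- A collection whose union operation plays the monoid's binary operator.
        type Collection a = Set.Set a

        -- Iterate a step function until the collection stops growing.
        fixpoint :: Ord a => (Collection a -> Collection a) -> Collection a -> Collection a
        fixpoint step xs
          | Set.null new = xs
          | otherwise    = fixpoint step (xs `Set.union` new)
          where new = step xs `Set.difference` xs

        -- Transitive closure of an edge relation: a typical recursive query
        -- that becomes expressible (and optimizable) with such a fixpoint.
        transitiveClosure :: Ord a => Collection (a, a) -> Collection (a, a)
        transitiveClosure edges = fixpoint grow edges
          where grow paths = Set.fromList
                  [ (x, z) | (x, y)  <- Set.toList paths
                           , (y', z) <- Set.toList edges
                           , y == y' ]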

    Cell libraries and verification

    Digital electronic devices are often implemented using cell libraries to provide the basic logic elements, such as Boolean functions and on-chip memories. To be usable both during the development of chips, which is usually done in a hardware definition language, and for the final layout, which consists of lithographic masks, cells are described in multiple ways. Among these are multiple descriptions of the behavior of cells, for example one at the level of hardware definition languages and another in terms of the transistors that are ultimately produced. Correct functioning of the device therefore also depends on the correctness of the cell library, requiring all views of a cell to correspond with each other. In this thesis, techniques are presented to verify some of these correspondences in cell libraries.

    First, a technique is presented to check that the functional description in a hardware definition language and the transistor netlist description implement the same behavior. For this purpose, a semantics is defined for the commonly used subset of the hardware definition language Verilog. This semantics is encoded into Boolean equations, which can also be extracted from a transistor netlist. A model checker is then used to prove the equivalence of these two descriptions, or to provide a counterexample showing that they differ.

    Even in basic elements such as cells, there exists non-determinism reflecting internal behavior that cannot be controlled from the outside. It is desired, however, that such internal behavior does not lead to different externally observable behavior, i.e., to different computation results. This thesis presents a technique to efficiently check, both for hardware definition language descriptions and for transistor netlist descriptions, whether non-determinism has an effect on the observable computation.

    Power consumption of chips has become a very important topic, especially since devices have become mobile and therefore battery powered. In order to predict and to maximize battery life, the power consumption of cells should be measured and reduced in an efficient way. To achieve these goals, this thesis also takes power consumption into account when analyzing non-deterministic behavior. On the one hand, behaviors consuming the same amount of power then have to be measured only once. On the other hand, functionally equivalent computations can be forced to consume the least amount of power without affecting the externally observable behavior of the cell, for example by introducing appropriate delays.

    A way to prevent externally observable non-deterministic behavior in practical hardware designs is to add timing checks. These checks rule out certain input patterns which must not be generated by the environment of a cell. If an input pattern can be found that is not forbidden by any of the timing checks, yet allows non-deterministic behavior, then the cell's environment is not sufficiently restricted, which usually indicates a forgotten timing check. Therefore, the check for non-determinism is extended to respect these timing checks and to consider only counterexamples that are not ruled out. If such a counterexample is found, it gives an indication of what timing checks need to be added.

    Because current hardware designs run at very high speeds, timing analysis of cells has become a very important issue. For this purpose, cell libraries include a description of the delay arcs present in a cell, giving the amount of time it takes for an input change to propagate to the outputs of a cell. For these descriptions, too, it is desired that they reflect the actual behavior of the cell. On the one hand, a delay arc that never manifests itself may result in a clock frequency that is lower than necessary. On the other hand, a forgotten delay arc can cause the clock frequency to be too high, impairing the functioning of the final chip. To relate the functional description of a cell with its timing specification, this thesis presents techniques to check whether delay arcs are consistent with the functionality, and to list all possible delay arcs.

    Computing new output values of a cell given some new input values requires all connections among the transistors in a cell to reach stable values. Hitherto it was assumed that such a stable situation will always be reached eventually. To actually check this, a wire is abstracted into a sequence of stable values. Using this abstraction, checking whether stable situations are always reached is reduced to showing that an infinite sequence of such stable values exists. This is known in the term rewriting literature as productivity, the infinitary equivalent of termination. The final contributions of this thesis are techniques to automatically prove productivity. For this purpose, existing termination proving tools for term rewriting are reused to benefit from their tremendous strength and their continuous improvements.
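    A toy rendering of the first of these checks (a sketch, not the thesis's implementation: a real flow encodes both views as Boolean equations for a symbolic model checker instead of enumerating inputs, and cells generally have several outputs):

        -- Two views of a cell: an HDL-level function and a function derived
        -- from the transistor netlist, compared on all input vectors.
        type CellFn = [Bool] -> Bool

        equivalent :: Int -> CellFn -> CellFn -> Either [Bool] ()
        equivalent n f g =
          case [ v | v <- vectors n, f v /= g v ] of
            (cex : _) -> Left cex   -- counterexample: inputs where the views differ
            []        -> Right ()
          where
            vectors 0 = [[]]
            vectors k = [ b : bs | b <- [False, True], bs <- vectors (k - 1) ]

        -- Example: a three-input AND-OR-INVERT cell written two ways;
        -- 'equivalent 3 hdlView netlistView' yields Right ().
        hdlView, netlistView :: CellFn
        hdlView     [a, b, c] = not ((a && b) || c)
        netlistView [a, b, c] = not (a && b) && not c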

    Parallelism in declarative languages

    Imperative programming languages were initially built for uniprocessor systems that evolved out of the Von Neumann machine model. This model of storage-oriented computation blocks parallelism and increases the cost of parallel program development and porting. Declarative languages, based on mathematical models of computation, seem more suitable for the development of parallel programs. In the first part of this thesis we examine different language families under the declarative paradigm: functional, logic, and constraint languages. Functional languages are based on the abstract model of functions and the λ-calculus. They were initially developed for symbolic computation, but today they are commonly used in numerical analysis and many other application areas. Pure Lisp is a widely known member of this class. Logic languages are based on first-order predicate calculus. Although they were initially developed for theorem proving, fifth-generation operating systems are written in them. Most logic languages are descendants or distant relatives of Prolog. Constraint languages are related to logic languages. In a constraint language, you define a program object by placing constraints on its structure and its behavior. They were initially used in graphics applications, but today researchers work on using them in parallel computation. Here we compare and contrast the language classes above, locate advantages and deficiencies, and explain different choices made by language implementors. In the second part of the thesis we describe a front end for CONSUL, a prototype constraint language for programming multiprocessors. The most important features of the front end are compact representation of constraints, type definitions, functional use of relations, and the ability to split programs into multiple files.
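    The claim that declarative programs expose parallelism can be made concrete with a small functional example (illustrative only, using GHC's parallel library rather than anything from the thesis): because each result depends only on its own input and not on shared storage, the runtime is free to evaluate the elements in parallel without changing the program's meaning.

        import Control.Parallel.Strategies (parMap, rpar)

        -- No data dependencies between elements, so evaluation may proceed
        -- in parallel; an imperative loop over shared storage hides this.
        squares :: [Int] -> [Int]
        squares = parMap rpar (\x -> x * x)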

    Normal forms and normal theories in conditional rewriting

    We present several new concepts and results on conditional term rewriting within the general framework of order-sorted rewrite theories (OSRTs), which supports types, subtypes and rewriting modulo axioms, and contains the more restricted framework of conditional term rewriting systems (CTRSs) as a special case. The concepts shed light on several subtle issues about conditional rewriting and conditional termination. We point out that the notions of irreducible term and of normal form, which coincide for unconditional rewriting, have been conflated for conditional rewriting but are in fact totally different notions: normal form is the stronger concept. We call any rewrite theory where all irreducible terms are normal forms a normal theory. We argue that normality is essential to have good executability and computability properties; therefore we call all other theories abnormal, freaks of nature to be avoided. The distinction between irreducible terms and normal forms helps in clarifying various notions of strong and weak termination. We show that abnormal theories can be terminating in various, equally abnormal ways, and argue that any computationally meaningful notion of strong or weak conditional termination should be a property of normal theories. In particular, we define the notion of a weakly operationally terminating (or weakly normalizing) OSRT, discuss several evaluation mechanisms to compute normal forms in such theories, and investigate general conditions under which the rewriting-based operational semantics and the initial algebra semantics of a confluent, weakly normalizing OSRT coincide, thanks to a notion of canonical term algebra. Finally, we investigate appropriate conditions and proof methods to ensure that a rewrite theory is normal, and characterize the stronger property of a rewrite theory being operationally terminating in terms of a natural generalization of the notion of quasi-decreasing order. (Published in the Journal of Logical and Algebraic Methods in Programming 85(1), 2016, https://doi.org/10.1016/j.jlamp.2015.06.001.)
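    The flavor of the irreducible-versus-normal-form distinction can be sketched with a tiny two-rule CTRS (an illustration under standard oriented-CTRS conventions; the paper's precise definitions are more refined):

        R \;=\; \{\; c \to c, \qquad a \to b \Leftarrow c \to d \;\}

    Logically, a is irreducible: the condition c →* d never holds, since c only rewrites to itself. Yet an interpreter that evaluates the condition by rewriting c diverges and can never certify a as finished, so irreducibility alone does not guarantee the good executability properties for which normal theories are singled out.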

    HERMIT: Mechanized Reasoning during Compilation in the Glasgow Haskell Compiler

    It is difficult to write programs which are both correct and fast. A promising approach, functional programming, is based on the idea of using pure, mathematical functions to construct programs. With effort, it is possible to establish a connection between a specification written in a functional language, which has been proven correct, and a fast implementation, via program transformation. When practiced in the functional programming community, this style of reasoning is still typically performed by hand, by either modifying the source code or using pen and paper. Unfortunately, performing such semi-formal reasoning by directly modifying the source code often obfuscates the program, and pen-and-paper reasoning becomes outdated as the program changes over time. Even so, this semi-formal reasoning prevails because formal reasoning is time-consuming and requires considerable expertise. Formal reasoning tools often work for only a subset of the target language, or require programs to be implemented in a custom language for reasoning. This dissertation investigates a solution, called HERMIT, which mechanizes reasoning during compilation. HERMIT can be used to prove properties about programs written in the Haskell functional programming language, or to transform them to improve their performance. Reasoning in HERMIT proceeds in a style familiar to practitioners of pen-and-paper reasoning, and mechanization allows these techniques to be applied to real-world programs with greater confidence. HERMIT can also re-check recorded reasoning steps on subsequent compilations, enforcing a connection with the program as it is developed. HERMIT is the first system capable of directly reasoning about the full Haskell language. The design and implementation of HERMIT, motivated both by typical reasoning tasks and by HERMIT's place in the Haskell ecosystem, is presented in detail. Three case studies investigate HERMIT's capability to reason in practice. These case studies demonstrate that semi-formal reasoning with HERMIT lowers the barrier to writing programs which are both correct and fast.
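    As an example of the pen-and-paper style that HERMIT mechanizes (map fusion is a standard equational-reasoning exercise, not necessarily one of the dissertation's case studies):

        -- Goal: map f (map g xs) = map (f . g) xs, by induction on xs,
        -- unfolding the definition of map at each step:
        --   map f (map g [])     = map f []               = []
        --   map (f . g) []                                = []
        --   map f (map g (x:xs)) = map f (g x : map g xs)
        --                        = f (g x) : map f (map g xs)
        --                        = (f . g) x : map (f . g) xs  -- induction hypothesis
        --                        = map (f . g) (x:xs)
        mapFusionLHS, mapFusionRHS :: (b -> c) -> (a -> b) -> [a] -> [c]
        mapFusionLHS f g xs = map f (map g xs)
        mapFusionRHS f g xs = map (f . g) xs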

    A parallel functional language compiler for message-passing multicomputers

    The research presented in this thesis is about the design and implementation of Naira, a parallel, parallelising compiler for a rich, purely functional programming language. The source language of the compiler is a subset of Haskell 1.2. The front end of Naira is written entirely in the Haskell subset being compiled. Naira has been successfully parallelised: it is the largest successfully parallelised Haskell program, having achieved good absolute speedups on a network of SUN workstations. Since it has the same basic structure as other production compilers of functional languages, Naira's parallelisation technology should carry forward to other functional language compilers. The back end of Naira is written in C and generates parallel code in the C language, envisioned to run on distributed-memory machines. The code generator is based on a novel compilation scheme specified using a restricted form of Milner's π-calculus which achieves asynchronous communication. We present the first working implementation of this scheme on distributed-memory message-passing multicomputers with split-phase transactions. Simulated assessment of the generated parallel code indicates good parallel behaviour. Parallelism is introduced using explicit, advisory user annotations in the source program, and there are two major aspects of the use of annotations in the compiler. First, the front end of the compiler is itself parallelised so as to improve its efficiency when compiling input programs. Secondly, the input programs to the compiler can themselves contain annotations, from which the compiler generates multi-threaded parallel code. These make Naira, unusually and uniquely, both a parallel and a parallelising compiler. We adopt a medium-grained approach to granularity where function applications form the unit of parallelism and load distribution. We have experimented with two different task distribution strategies, deterministic and random, and with thread-based and quantum-based scheduling policies. Our experiments show that there is little efficiency difference for regular programs, but the quantum-based scheduler is best for programs with irregular parallelism. The compiler has been successfully built, parallelised and assessed using both idealised and realistic measurement tools: we obtained significant compilation speed-ups on a variety of simulated parallel architectures. The simulated results are supported by the best results obtained on real hardware for such a large program: we measured an absolute speedup of 2.5 on a network of 5 SUN workstations. The compiler has also been shown to have good parallelising potential, based on popular test programs. Results of assessing Naira's generated unoptimised parallel code are comparable to those produced by other successful parallel implementation projects.
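    The advisory annotations can be illustrated with GHC's par and pseq combinators (an analogy only: Naira's concrete annotation syntax is not shown in this abstract). The annotations merely suggest parallel evaluation; the scheduler is free to ignore them, so they never change a program's result.

        import Control.Parallel (par, pseq)

        -- 'par' sparks the left recursive call for possible parallel
        -- evaluation; 'pseq' forces the right call before the addition.
        parallelFib :: Int -> Integer
        parallelFib n
          | n < 2     = fromIntegral n
          | otherwise = left `par` (right `pseq` (left + right))
          where
            left  = parallelFib (n - 1)
            right = parallelFib (n - 2)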

    Abstract Machine for a Comonadic Dataflow Language

    The formal semantics of higher-order functional dataflow language programs can be represented with the concepts of arrows and comonads from category theory. Both of these methods convey the meaning of programs, but not their operational behaviour. In order to understand the operational behaviour of dataflow programs, we derive an abstract machine from an interpreter that is equivalent to a comonadic denotational semantics of a higher-order call-by-name dataflow language. The resulting abstract machine is identical to the well-known Krivine machine, except for an overloaded notion of the environment and two additional transition rules for evaluating constructs specific to the dataflow language. The main result of this thesis is that the operational behaviour of call-by-name dataflow language programs is identical to the operational behaviour of regular non-strict languages.
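    For reference, a compact Haskell rendering of the standard Krivine machine that the derived machine extends (a sketch with de Bruijn indices; the thesis's overloaded environment and its two extra dataflow rules are not reproduced here):

        -- Call-by-name lambda terms with de Bruijn indices.
        data Term = Var Int | Lam Term | App Term Term

        data Closure = Closure Term Env
        type Env     = [Closure]   -- one closure per enclosing binder
        type Stack   = [Closure]   -- pending (unevaluated) arguments

        step :: (Term, Env, Stack) -> Maybe (Term, Env, Stack)
        step (Var 0, Closure t e : _, s) = Just (t, e, s)                   -- look up variable
        step (Var n, _ : env, s)         = Just (Var (n - 1), env, s)       -- skip a binder
        step (App t u, env, s)           = Just (t, env, Closure u env : s) -- push argument
        step (Lam t, env, c : s)         = Just (t, c : env, s)             -- bind argument
        step _                           = Nothing                          -- machine halts

        run :: Term -> (Term, Env, Stack)
        run t = go (t, [], [])
          where go st = maybe st go (step st)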