
    On Decidable Growth-Rate Properties of Imperative Programs

    In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple "core" programming language - an imperative language with bounded loops, and arithmetic limited to addition and multiplication - it was possible to decide precisely whether a program had certain growth-rate properties, namely polynomial (or linear) bounds on computed values, or on the running time. This work emphasized the role of the core language in mitigating the notorious undecidability of program properties, so that one deals with decidable problems. A natural and intriguing problem was whether more elements can be added to the core language, improving its utility, while keeping the growth-rate properties decidable. In particular, the method presented could not handle a command that resets a variable to zero. This paper shows how to handle resets. The analysis is given in a logical style (proof rules), and its complexity is shown to be PSPACE-complete (in contrast, without resets, the problem was PTIME). The analysis algorithm evolved from the previous solution in an interesting way: focus was shifted from proving a bound to disproving it, and the algorithm works top-down rather than bottom-up.
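
    The decidability question the abstract raises can be made concrete with two toy bounded-loop programs. Below is an illustrative sketch only, written in Python rather than the paper's core language, with function names of our own choosing: without a reset a variable may double on every iteration and thus grow exponentially in the loop bound, while a reset lets each iteration forget the previous ones, which is exactly the feature the extended analysis has to account for.

        # Illustrative only: Python simulations of two bounded-loop programs in the
        # spirit of the core language (bounded loops, addition, multiplication).

        def without_reset(x):
            # loop x { y := y + y }  -- y doubles each iteration: exponential in x
            y = 1
            for _ in range(x):
                y = y + y
            return y

        def with_reset(x, z):
            # loop x { y := 0; y := y + z }  -- the reset forgets earlier iterations,
            # so the final y is bounded by z, i.e. polynomially (here linearly) in the inputs
            y = 1
            for _ in range(x):
                y = 0
                y = y + z
            return y

        print(without_reset(10))   # 1024, grows like 2^x
        print(with_reset(10, 7))   # 7, independent of x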

    A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis

    We present the first scalable bound analysis that achieves amortized complexity analysis. In contrast to earlier work, our bound analysis is not based on general-purpose reasoners such as abstract interpreters, software model checkers or computer algebra tools. Rather, we derive bounds directly from abstract program models, which we obtain from programs by comparatively simple invariant generation and symbolic execution techniques. As a result, we obtain an analysis that is more predictable and more scalable than earlier approaches. Our experiments demonstrate that our analysis is fast and at the same time able to compute bounds for challenging loops in a large real-world benchmark. Technically, our approach is based on lossy vector addition systems (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS, and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.
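
    The kind of loop that needs amortized reasoning can be illustrated with a standard toy example (ours, not taken from the paper): the inner loop looks quadratic, but the counter it drains is only incremented once per outer iteration, so its total work is linear. A ranking argument in the spirit of the paper would use a lexicographic pair such as (remaining outer iterations, j), which decreases on every transition, and read the bound off from how often each component can be replenished.

        # Hypothetical example of an amortized bound: the nested loop is O(n) overall,
        # not O(n^2), because j is only incremented in the outer loop.

        def run(n, choices):
            j = 0
            inner_steps = 0
            for take in choices[:n]:          # outer loop: exactly n iterations
                if take:
                    j += 1                    # save work for later
                else:
                    while j > 0:              # inner loop: pay back the saved work
                        j -= 1
                        inner_steps += 1
            return inner_steps                # always <= n, i.e. amortized O(n)

        print(run(8, [1, 1, 0, 1, 0, 0, 1, 0]))   # 4: total inner steps never exceed n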

    Optimizing Abstract Abstract Machines

    The technique of abstracting abstract machines (AAM) provides a systematic approach for deriving computable approximations of evaluators that are easily proved sound. This article contributes a complementary step-by-step process for subsequently going from a naive analyzer derived under the AAM approach, to an efficient and correct implementation. The end result of the process is an improvement of two to three orders of magnitude over the systematically derived analyzer, making it competitive with hand-optimized implementations that compute fundamentally less precise results. Comment: Proceedings of the International Conference on Functional Programming 2013 (ICFP 2013), Boston, Massachusetts, September 2013.

    Common Subexpression Elimination in a Lazy Functional Language

    Common subexpression elimination is a well-known compiler optimisation that saves time by avoiding the repetition of the same computation. To our knowledge it has not yet been applied to lazy functional programming languages, although it offers several advantages there. First, the referential transparency of these languages makes the identification of common subexpressions very simple. Second, more common subexpressions can be recognised because they can be of arbitrary type, whereas standard common subexpression elimination only shares primitive values. However, because lazy functional languages decouple program structure from data space allocation and control flow, analysing the effects of the transformation and deciding under which conditions the elimination of a common subexpression is beneficial proves to be quite difficult. We developed and implemented the transformation for the language Haskell by extending the Glasgow Haskell Compiler and measured its effectiveness on real-world programs.
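
    The first advantage mentioned above, that referential transparency makes common subexpressions easy to identify, can be sketched on a toy expression language. The Python fragment below is our own illustration, not the GHC pass described in the paper: syntactically equal pure subexpressions denote the same value, so a repeated subtree can be bound once and shared. The hard part the abstract points to, deciding when such sharing actually pays off under lazy evaluation, is deliberately left out.

        # Toy common subexpression elimination over expressions encoded as nested
        # tuples, e.g. ("add", ("mul", "x", "y"), ("mul", "x", "y")).

        def cse(expr):
            counts = {}

            def count(e):
                if isinstance(e, tuple):
                    counts[e] = counts.get(e, 0) + 1
                    for child in e[1:]:
                        count(child)

            count(expr)
            # share one repeated subexpression (a real pass is far more careful)
            shared = next((e for e, n in counts.items() if n > 1), None)
            if shared is None:
                return expr

            def replace(e):
                if e == shared:
                    return "t"                 # reference to the new binding
                if isinstance(e, tuple):
                    return (e[0],) + tuple(replace(c) for c in e[1:])
                return e

            return ("let", "t", shared, replace(expr))

        print(cse(("add", ("mul", "x", "y"), ("mul", "x", "y"))))
        # ('let', 't', ('mul', 'x', 'y'), ('add', 't', 't'))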

    On Role Logic

    We present role logic, a notation for describing properties of relational structures in shape analysis, databases, and knowledge bases. We construct role logic using the ideas of de Bruijn's notation for lambda calculus, an encoding of first-order logic in lambda calculus, and a simple rule for implicit arguments of unary and binary predicates. The unrestricted version of role logic has the expressive power of first-order logic with transitive closure. Using a syntactic restriction on role logic formulas, we identify a natural fragment RL^2 of role logic. We show that the RL^2 fragment has the same expressive power as two-variable logic with counting C^2 and is therefore decidable. We present a translation of an imperative language into the decidable fragment RL^2, which allows compositional verification of programs that manipulate relational structures. In addition, we show how RL^2 encodes boolean shape analysis constraints and an expressive description logic. Comment: 20 pages. Our later SAS 2004 result builds on this work.
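
    For a flavour of the constraints such decidable logics can state, here is a generic formula of two-variable logic with counting C^2 (the logic the abstract proves RL^2 equi-expressive with); it is our own example, not taken from the paper, and says that the binary relation next is a partial function, a typical shape-analysis invariant:

        % every node has at most one next-successor
        \[ \forall x.\; \exists^{\leq 1} y.\; \mathit{next}(x, y) \]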

    Enhancing Predicate Pairing with Abstraction for Relational Verification

    Relational verification is a technique that aims at proving properties that relate two different program fragments, or two different program runs. It has been shown that constrained Horn clauses (CHCs) can effectively be used for relational verification by applying a CHC transformation, called predicate pairing, which allows the CHC solver to infer relations among arguments of different predicates. In this paper we study how the effects of the predicate pairing transformation can be enhanced by using various abstract domains based on linear arithmetic (i.e., the domain of convex polyhedra and some of its subdomains) during the transformation. After presenting an algorithm for predicate pairing with abstraction, we report on the experiments we have performed on over a hundred relational verification problems by using various abstract domains. The experiments have been performed by using the VeriMAP transformation and verification system, together with the Parma Polyhedra Library (PPL) and the Z3 solver for CHCs. Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854).
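
    To make the setting concrete, here is a small relational verification problem of our own (not one of the paper's benchmarks), encoded as constrained Horn clauses: monotonicity of a summation loop, i.e. running the same loop on a larger input cannot yield a smaller sum. Predicate pairing introduces a new predicate whose arguments combine those of the two occurrences (its recursive clauses are then derived by unfold/fold steps), so that the CHC solver can search for a single constraint relating both runs; the paper's contribution is to guide this transformation with abstract domains such as convex polyhedra.

        % CHCs for sum(N, S): S is the sum 1 + 2 + ... + N, plus the relational query
        \[
        \begin{array}{l}
        \mathit{sum}(0, 0) \\
        \mathit{sum}(N, S) \leftarrow N \geq 1 \wedge \mathit{sum}(N-1, S') \wedge S = S' + N \\
        \mathit{false} \leftarrow \mathit{sum}(N_1, S_1) \wedge \mathit{sum}(N_2, S_2) \wedge N_1 \leq N_2 \wedge S_1 > S_2 \\[4pt]
        \mathit{sum}_2(N_1, S_1, N_2, S_2) \leftarrow \mathit{sum}(N_1, S_1) \wedge \mathit{sum}(N_2, S_2)
        \end{array}
        \]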

    Tight polynomial worst-case bounds for loop programs

    In 2008, Ben-Amram, Jones and Kristiansen showed that for a simple programming language - representing non-deterministic imperative programs with bounded loops, and arithmetic limited to addition and multiplication - it is possible to decide precisely whether a program has certain growth-rate properties, in particular whether a computed value, or the program's running time, has a polynomial growth rate. A natural and intriguing problem was to move from answering the decision problem to giving a quantitative result, namely, a tight polynomial upper bound. This paper shows how to obtain asymptotically-tight, multivariate, disjunctive polynomial bounds for this class of programs. This is a complete solution: whenever a polynomial bound exists it will be found. A pleasant surprise is that the algorithm is quite simple; but it relies on some subtle reasoning. An important ingredient in the proof is the forest factorization theorem, a strong structural result on homomorphisms into a finite monoid.
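
    What a multivariate, disjunctive bound looks like can be seen on a tiny non-deterministic loop program (our own example, not one from the paper): the worst-case final value of y is naturally described as a maximum of two multivariate polynomials, one per branch of the non-deterministic choice, which is the disjunctive form of bound the paper computes. The Python sketch below just checks that description by brute force on one input.

        # loop x { choose { y := y + z } or { y := y + w } }
        # tight bound on the final y:  max(y + x*z, y + x*w)
        from itertools import product

        def program(x, y, z, w, choices):
            for pick in choices[:x]:
                y = y + (z if pick else w)
            return y

        def worst_case(x, y, z, w):
            return max(program(x, y, z, w, c) for c in product([True, False], repeat=x))

        x, y, z, w = 5, 1, 3, 7
        assert worst_case(x, y, z, w) == max(y + x * z, y + x * w)   # 36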

    Enabling Operator Reordering in Data Flow Programs Through Static Code Analysis

    In many massively parallel data management platforms, programs are represented as small imperative pieces of code connected in a data flow. This popular abstraction makes it hard to apply algebraic reordering techniques employed by relational DBMSs and other systems that use an algebraic programming abstraction. We present a code analysis technique based on reverse data and control flow analysis that discovers a set of properties from user code, which can be used to emulate algebraic optimizations in this setting. Comment: 4 pages, accepted and presented at the First International Workshop on Cross-model Language Design and Implementation (XLDI), affiliated with ICFP 2012, Copenhagen.
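
    The essence of the approach can be sketched as follows (our simplification, not the paper's actual analysis; all operator and field names are hypothetical): once static analysis has established which record fields a user-defined operator reads and writes, two adjacent data-flow operators may be swapped whenever those sets do not conflict, emulating the algebraic reorderings a relational optimizer would perform, such as pushing a selective filter below an expensive map.

        from dataclasses import dataclass

        @dataclass
        class Op:
            name: str
            reads: set
            writes: set

        def can_swap(first, second):
            # 'second' may be moved before 'first' if it reads nothing 'first' writes
            # and neither one writes a field the other touches
            return (not (second.reads & first.writes)
                    and not (second.writes & (first.reads | first.writes)))

        expensive_map = Op("enrich", reads={"user_id"}, writes={"profile"})
        selective_filter = Op("filter_price", reads={"price"}, writes=set())

        if can_swap(expensive_map, selective_filter):
            print("push the filter ahead of the map")   # fewer records reach 'enrich'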

    Variable elimination for building interpreters

    In this paper, we build an interpreter by reusing host language functions instead of recoding mechanisms of function application that are already available in the host language (the language which is used to build the interpreter). In order to transform user-defined functions into host language functions we use combinatory logic: lambda-abstractions are transformed into a composition of combinators. We provide a mechanically checked proof that this step is correct for the call-by-value strategy with imperative features. Comment: 33 pages.
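
    The core idea, delegating function application to the host language by compiling lambda-abstractions into combinators, can be sketched with classic bracket abstraction. The Python fragment below is only our illustration of that idea: the paper targets the call-by-value strategy with imperative features and comes with a mechanized correctness proof, none of which is reflected here, and the term encoding and function names are our own.

        # S, K, I as ordinary host-language functions
        S = lambda f: lambda g: lambda x: f(x)(g(x))
        K = lambda x: lambda _: x
        I = lambda x: x

        # terms: ("var", name) | ("lam", name, body) | ("app", f, a) | ("lit", value)

        def abstract(name, term):
            # bracket abstraction: eliminate 'name' from 'term'
            kind = term[0]
            if kind == "var" and term[1] == name:
                return ("lit", I)
            if kind == "app":
                return ("app", ("app", ("lit", S), abstract(name, term[1])),
                        abstract(name, term[2]))
            if kind == "lam":
                return abstract(name, abstract(term[1], term[2]))
            return ("app", ("lit", K), term)    # 'name' does not occur here

        def compile_term(term):
            kind = term[0]
            if kind == "lam":
                return compile_term(abstract(term[1], term[2]))
            if kind == "app":
                return ("app", compile_term(term[1]), compile_term(term[2]))
            return term

        def run(term):
            # no environment, no closures: application is host-language application
            if term[0] == "app":
                return run(term[1])(run(term[2]))
            return term[1]                      # ("lit", value)

        const = ("lam", "x", ("lam", "y", ("var", "x")))
        print(run(("app", ("app", compile_term(const), ("lit", 1)), ("lit", 2))))   # 1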