Correctness of an STM Haskell implementation
A concurrent implementation of software transactional memory in Concurrent Haskell is given, modelled in a call-by-need functional language with processes and futures. The description of the small-step operational semantics is precise and explicit, and employs an early abort of conflicting transactions. A proof of correctness of the implementation is given for a contextual semantics with may- and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
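For orientation, the programming interface whose implementation is proved correct is that of STM in Concurrent Haskell. A minimal client-side sketch using the standard Control.Concurrent.STM API (our illustration of the interface, not the paper's implementation or calculus) might look as follows:

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM

    -- Atomically move funds, blocking via 'retry' until the source
    -- account holds enough; conflicting transactions are aborted and
    -- re-executed by the STM machinery.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amount = do
      balance <- readTVar from
      if balance < amount
        then retry
        else do
          writeTVar from (balance - amount)
          modifyTVar' to (+ amount)

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      _ <- forkIO (atomically (transfer a b 30))
      atomically (transfer a b 50)
      threadDelay 100000  -- crude wait for the forked transfer
      balances <- atomically ((,) <$> readTVar a <*> readTVar b)
      print balances      -- expected: (20,80)

The paper's contribution is to show that an implementation of such an interface, with early abort of conflicting transactions, is observationally correct with respect to an abstract big-step specification.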
Adequacy of compositional translations for observational semantics
We investigate methods and tools for analysing translations between programming languages with respect to observational semantics. The behaviour of programs is observed in terms of may- and must-convergence in arbitrary contexts, and adequacy of translations, i.e., the reflection of program equivalence, is taken to be the fundamental correctness condition. For compositional translations we propose a notion of convergence equivalence as a means for proving adequacy. This technique avoids explicit reasoning about contexts, and is able to deal with the subtle role of typing in implementations of language extensions.
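The adequacy criterion can be stated compactly. In standard notation (our paraphrase, not a quotation from the paper), programs are compared by the observational equivalence

\[ e_1 \sim e_2 \;:\iff\; \forall C.\; \bigl(C[e_1]\downarrow_{\mathrm{may}} \Leftrightarrow C[e_2]\downarrow_{\mathrm{may}}\bigr) \wedge \bigl(C[e_1]\downarrow_{\mathrm{must}} \Leftrightarrow C[e_2]\downarrow_{\mathrm{must}}\bigr), \]

and a translation \tau from the source language into the target language (each with its own equivalence) is adequate iff \tau(e_1) \sim \tau(e_2) implies e_1 \sim e_2, i.e., iff it reflects program equivalence; full abstraction would additionally require the converse direction.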
Reconstructing a logic for inductive proofs of properties of functional programs
A logical framework consisting of a polymorphic call-by-value functional language and a first-order logic on the values is presented, which is a reconstruction of the logic of the verification system VeriFun. The reconstruction uses contextual semantics to define the logical value of equations. It equates undefinedness and non-termination, which is a standard semantical approach. The main results of this paper are meta-theorems about the globality of several classes of theorems in the logic, and proofs of global correctness of transformations and deduction rules. The deduction rules of VeriFun are globally correct if rules depending on termination are appropriately formulated. The reconstruction also gives hints on generalizations of the VeriFun framework: reasoning about non-terminating expressions and functions, mutually recursive functions and abstractions in the data values, and formulas with arbitrary quantifier prefix could be allowed.
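As a small illustration of the identification of undefinedness with non-termination (an example added here, not taken from the paper): in such a contextual semantics an equation like

\[ \mathit{head}\;[\,] \;=\; \mathit{loop}, \qquad \text{where } \mathit{loop} \text{ is defined by } \mathit{loop} = \mathit{loop}, \]

holds, since both sides denote the undefined value; deduction rules whose soundness depends on termination must therefore be formulated so as to exclude such instances.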
A finite simulation method in a non-deterministic call-by-need calculus with letrec, constructors and case
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda calculus with constructors, case, seq, and a letrec with cyclic dependencies. It also proposes a novel method to prove its correctness. The calculus's semantics is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches for proving that simulation implies contextual equivalence, such as Howe's proof technique, inapplicable in this setting. The basic technique for defining the simulation as well as for the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds within a finite computation depth, then it is guaranteed to show contextual preorder of expressions.
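In the style of applicative simulation, the relation being checked has roughly the following shape for may-convergence (a simplified rendering; the paper's definition additionally deals with letrec environments, non-deterministic choice, and functional answers):

\[ s \preceq t \;:\iff\; \text{whenever } s \text{ may-converges to an answer } c\,s_1 \ldots s_n, \text{ then } t \text{ may-converges to some } c\,t_1 \ldots t_n \text{ with } s_i \preceq t_i \text{ for all } i. \]

Pre-evaluation supplies, for every closed expression, the set of answers quantified over in such a definition, and the paper's result is that establishing the relation within a finite depth already entails the contextual preorder.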
Enhancing Predicate Pairing with Abstraction for Relational Verification
Relational verification is a technique that aims at proving properties that
relate two different program fragments, or two different program runs. It has
been shown that constrained Horn clauses (CHCs) can effectively be used for
relational verification by applying a CHC transformation, called predicate
pairing, which allows the CHC solver to infer relations among arguments of
different predicates. In this paper we study how the effects of the predicate
pairing transformation can be enhanced by using various abstract domains based
on linear arithmetic (i.e., the domain of convex polyhedra and some of its
subdomains) during the transformation. After presenting an algorithm for
predicate pairing with abstraction, we report on the experiments we have
performed on over a hundred relational verification problems by using various
abstract domains. The experiments have been performed by using the VeriMAP
transformation and verification system, together with the Parma Polyhedra
Library (PPL) and the Z3 solver for CHCs.
Comment: Pre-proceedings paper presented at the 27th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur, Belgium, 10-12 October 2017 (arXiv:1708.07854).
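Schematically (our illustration of the idea, not an example from the paper), predicate pairing introduces, for two predicates p and q stemming from the two program fragments, a new predicate defined by

\[ \mathit{pq}(\bar{x},\bar{y}) \leftarrow p(\bar{x}) \wedge q(\bar{y}), \]

and folds conjunctions of p- and q-atoms in clause bodies into pq-atoms. A CHC solver working over convex polyhedra (or one of its subdomains) can then infer relational invariants, such as a linear equality between an argument of p and an argument of q, that cannot be expressed as separate invariants of the two predicates; the abstractions studied in the paper are applied to the clauses during this transformation.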
Lower-bound Time-Complexity Analysis of Logic Programs
The paper proposes a technique for inferring conditions on goals that, when satisfied, ensure that a goal is sufficiently coarse-grained to warrant parallel evaluation. The method is powerful enough to reason about divide-and-conquer programs and, in the case of quicksort, for instance, can infer that a quicksort goal has a time complexity that exceeds 64 resolution steps (a threshold for spawning) if the input list is of length 10 or more. This gives a simple run-time tactic for controlling spawning. The method has been proved correct, can be implemented straightforwardly, has been demonstrated to be useful on a parallel machine, and, in contrast with much of the previous work on time-complexity analysis of logic programs, does not require any complicated difference-equation solving machinery.
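To make the quicksort example concrete, a cost model of the kind used by such analyses (our illustrative reconstruction; the constants need not match the paper's) charges one resolution step per selected clause, so that a lower bound T(n) on the steps needed to solve a quicksort goal on a list of length n satisfies a recurrence of the shape

\[ T(0) \ge c_0, \qquad T(n) \ge \min_{0 \le k \le n-1}\bigl(T(k) + T(n-1-k)\bigr) + n + c_1, \]

where the summand n accounts for partitioning the input and c_0, c_1 are small constants. Since such a bound is monotone in the list length, a threshold like "more than 64 resolution steps" translates directly into a cheap run-time test on the length of the input list that decides whether to spawn.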
An Abstract Interpretation-based Model of Tracing Just-In-Time Compilation
Tracing just-in-time compilation is a popular compilation technique for the
efficient implementation of dynamic languages, which is commonly used for
JavaScript, Python and PHP. We provide a formal model of tracing JIT
compilation of programs using abstract interpretation. Hot path detection
corresponds to an abstraction of the trace semantics of the program. The
optimization phase corresponds to a transform of the original program that
preserves its trace semantics up to an observation modeled by some abstraction.
We provide a generic framework to express dynamic optimizations and prove them
correct. We instantiate it to prove the correctness of dynamic type
specialization and constant variable folding. We show that our framework is
more general than the model of tracing compilation introduced by Guo and
Palsberg [2011] based on operational bisimulations.
Comment: To appear in ACM Transactions on Programming Languages and Systems.
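To give a concrete, deliberately tiny picture of what dynamic type specialization of a trace means, independently of the paper's abstract-interpretation formalization: a recorded hot-path trace replaces a generic, type-dispatching operation by a specialized one protected by a guard, and correctness means the guarded trace agrees with the generic semantics. A Haskell toy sketch of this idea (ours, purely illustrative):

    data Value = IntV Int | StrV String deriving (Eq, Show)

    -- Generic "bytecode" addition: dispatches on runtime types;
    -- Nothing models a runtime type error.
    genericAdd :: Value -> Value -> Maybe Value
    genericAdd (IntV a) (IntV b) = Just (IntV (a + b))
    genericAdd (StrV a) (StrV b) = Just (StrV (a ++ b))
    genericAdd _        _        = Nothing

    -- A trace recorded while both operands were Ints: a type guard
    -- followed by the specialized operation; if the guard fails we
    -- take a side exit back to the generic code.
    tracedAdd :: Value -> Value -> Maybe Value
    tracedAdd (IntV a) (IntV b) = Just (IntV (a + b))  -- fast path
    tracedAdd x        y        = genericAdd x y       -- side exit

    -- The correctness statement in miniature: on every input the
    -- specialized trace and the generic semantics agree.
    agrees :: Value -> Value -> Bool
    agrees x y = tracedAdd x y == genericAdd x y

The framework in the paper expresses such optimizations generically and proves them correct with respect to an abstraction of the trace semantics, rather than per optimization as in this sketch.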
Towards Correctness of Program Transformations Through Unification and Critical Pair Computation
Correctness of program transformations in extended lambda calculi with a
contextual semantics is usually based on reasoning about the operational
semantics which is a rewrite semantics. A successful approach to proving
correctness is the combination of a context lemma with the computation of
overlaps between program transformations and the reduction rules, and then of
so-called complete sets of diagrams. The method is similar to the computation
of critical pairs for the completion of term rewriting systems. We explore
cases where the computation of these overlaps can be done in a first order way
by variants of critical pair computation that use unification algorithms. As a
case study we apply the method to a lambda calculus with recursive
let-expressions and describe an effective unification algorithm to determine
all overlaps of a set of transformations with all reduction rules. The
unification algorithm employs many-sorted terms, the equational theory of
left-commutativity modelling multi-sets, context variables of different kinds
and a mechanism for compactly representing binding chains in recursive
let-expressions.
Comment: In Proceedings UNIF 2010, arXiv:1012.455
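To indicate what computing overlaps "in a first order way" amounts to: an overlap is found by unifying the left-hand side of a transformation with a non-variable position of a reduction rule, and the unifier instantiates both to a common critical term. The syntactic core of such a unification algorithm is sketched below in Haskell (our simplification: plain first-order terms only, without the paper's many-sorted signatures, left-commutativity for multi-sets, context variables, or binding chains):

    import qualified Data.Map.Strict as M

    data Term = Var String | Fun String [Term] deriving (Eq, Show)
    type Subst = M.Map String Term

    -- Apply a substitution to a term.
    apply :: Subst -> Term -> Term
    apply s (Var x)    = M.findWithDefault (Var x) x s
    apply s (Fun f ts) = Fun f (map (apply s) ts)

    occurs :: String -> Term -> Bool
    occurs x (Var y)    = x == y
    occurs x (Fun _ ts) = any (occurs x) ts

    -- Robinson-style syntactic unification of two terms.
    unify :: Term -> Term -> Maybe Subst
    unify t0 u0 = go [(t0, u0)] M.empty
      where
        go [] s = Just s
        go ((a, b) : rest) s =
          case (apply s a, apply s b) of
            (Var x, Var y) | x == y -> go rest s
            (Var x, t)
              | occurs x t -> Nothing                  -- occurs check
              | otherwise  -> go rest (bind x t s)
            (t, Var x)
              | occurs x t -> Nothing
              | otherwise  -> go rest (bind x t s)
            (Fun f as, Fun g bs)
              | f == g && length as == length bs -> go (zip as bs ++ rest) s
              | otherwise                        -> Nothing
        bind x t s = M.insert x t (M.map (apply (M.singleton x t)) s)

For instance, unify (Fun "app" [Var "x", Var "y"]) (Fun "app" [Fun "nil" [], Var "z"]) yields the substitution mapping x to nil and y to z; in the critical-pair setting, the two input terms would be the left-hand sides of a transformation and of a reduction rule (renamed apart), and the resulting common instance is the overlap from which a diagram is computed.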