A Framework for Datatype Transformation
We study one dimension in program evolution, namely the evolution of the
datatype declarations in a program. To this end, a suite of basic
transformation operators is designed. We cover structure-preserving
refactorings, but also structure-extending and -reducing adaptations. Both the
object programs that are subject to datatype transformations and the meta-programs
that encode the transformations are functional programs. Comment: Minor revision; now accepted at LDTA 200
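As a rough illustration of what such a suite of operators might look like, here is a minimal sketch in Python, assuming a toy representation of a datatype declaration as a dict from constructor names to argument-type lists. All names here (`rename_constructor`, `add_constructor`, `remove_constructor`) are illustrative, not the paper's actual operator suite.

```python
# Toy datatype declaration: name plus constructors with argument types.
# Hypothetical operators for the three kinds of transformation mentioned
# above: structure-preserving, structure-extending, structure-reducing.

def rename_constructor(decl, old, new):
    """Structure-preserving refactoring: rename one constructor."""
    ctors = {new if c == old else c: args for c, args in decl["ctors"].items()}
    return {**decl, "ctors": ctors}

def add_constructor(decl, name, args):
    """Structure-extending adaptation: introduce a new constructor."""
    assert name not in decl["ctors"], "constructor already present"
    return {**decl, "ctors": {**decl["ctors"], name: args}}

def remove_constructor(decl, name):
    """Structure-reducing adaptation: drop a constructor."""
    ctors = {c: args for c, args in decl["ctors"].items() if c != name}
    return {**decl, "ctors": ctors}

# data Shape = Circle Double | Square Double
shape = {"name": "Shape", "ctors": {"Circle": ["Double"], "Square": ["Double"]}}
shape2 = add_constructor(shape, "Rect", ["Double", "Double"])
shape3 = rename_constructor(shape2, "Square", "Sq")
```

Note that each operator returns a fresh declaration rather than mutating its input, mirroring the functional style the abstract attributes to the meta-programs.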
Proving Correctness of Imperative Programs by Linearizing Constrained Horn Clauses
We present a method for verifying the correctness of imperative programs
which is based on the automated transformation of their specifications. Given a
program prog, we consider a partial correctness specification of the form
{φ} prog {ψ}, where the assertions φ and ψ are
predicates defined by a set Spec of possibly recursive Horn clauses with linear
arithmetic (LA) constraints in their premise (also called constrained Horn
clauses). The verification method consists in constructing a set PC of
constrained Horn clauses whose satisfiability implies that {φ} prog {ψ}
is valid. We highlight some limitations of state-of-the-art
constrained Horn clause solving methods, here called LA-solving methods, which
prove the satisfiability of the clauses by looking for linear arithmetic
interpretations of the predicates. In particular, we prove that there exist
some specifications that cannot be proved valid by any of those LA-solving
methods. These specifications require the proof of satisfiability of a set PC
of constrained Horn clauses that contain nonlinear clauses (that is, clauses
with more than one atom in their premise). Then, we present a transformation,
called linearization, that converts PC into a set of linear clauses (that is,
clauses with at most one atom in their premise). We show that several
specifications that could not be proved valid by LA-solving methods, can be
proved valid after linearization. We also present a strategy for performing
linearization in an automatic way and we report on some experimental results
obtained by using a preliminary implementation of our method. Comment: To appear
in Theory and Practice of Logic Programming (TPLP), Proceedings of ICLP 201
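The linear/nonlinear distinction can be made concrete with a toy sketch, using a hypothetical clause representation (head, constraint, body atoms) with predicate arguments omitted. The `fold_nonlinear` step below shows only how folding a multi-atom premise into a fresh predicate makes a clause linear; deriving *linear* defining clauses for that fresh predicate, which is the heart of the paper's linearization, is not shown.

```python
# Toy clause representation: (head_predicate, constraint, body_atoms).
# A clause is linear when its premise contains at most one atom.

def is_linear(clause):
    _head, _constraint, body = clause
    return len(body) <= 1

def fold_nonlinear(clause, fresh_pred):
    """Fold a multi-atom premise into a single fresh atom, returning the
    now-linear clause and the (still nonlinear) clause defining the fresh
    predicate, which a real linearization must process further."""
    head, constraint, body = clause
    assert len(body) > 1, "clause is already linear"
    folded = (head, constraint, [fresh_pred])
    definition = (fresh_pred, "true", list(body))
    return folded, definition

# p(X) :- X >= 0, q(X), r(X).   -- nonlinear: two atoms in the premise
c = ("p", "X >= 0", ["q", "r"])
folded, definition = fold_nonlinear(c, "qr")
```

The fresh predicate name `qr` is purely illustrative; the point is only the shape of the transformation, not the paper's strategy for choosing folds.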
Finite Countermodel Based Verification for Program Transformation (A Case Study)
Both automatic program verification and program transformation are based on
program analysis. In the past decade a number of approaches using various
automatic general-purpose program transformation techniques (partial deduction,
specialization, supercompilation) for verification of unreachability properties
of computing systems were introduced and demonstrated. On the other hand,
semantics-based unfold/fold program transformation methods themselves pose
diverse kinds of reachability tasks and try to solve them, aiming to improve
the semantics tree of the program being transformed. This means that some
general-purpose verification methods may be used to strengthen program
transformation techniques. This paper considers the question of how the finite
countermodel method for safety verification might be used in Turchin's
supercompilation method. We extract a number of supercompilation sub-algorithms
that try to solve reachability problems, and we demonstrate the use of an external
countermodel finder for solving some of those problems. Comment: In Proceedings VPT 2015, arXiv:1512.0221
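To convey the finite-countermodel idea in miniature: to prove a "bad" term unreachable, one searches for a finite interpretation of the program's symbols in which the reachability clauses are valid but the bad state falls outside the reachability predicate. The encoding below, showing s(0) unreachable from 0 under the rule x → s(s(x)), is an illustrative brute-force search, not the paper's tool chain.

```python
from itertools import product

# Clauses: R(0) and R(x) => R(s(s(x))). Goal: a finite model where both
# clauses hold but R(s(0)) is false, witnessing that s(0) is unreachable.

def find_countermodel(size=2):
    dom = range(size)
    for zero in dom:                                  # interpretation of 0
        for s in product(dom, repeat=size):           # s as a table D -> D
            for r_bits in product([False, True], repeat=size):
                R = {d for d in dom if r_bits[d]}
                if zero not in R:                     # clause R(0) must hold
                    continue
                if any(d in R and s[s[d]] not in R for d in dom):
                    continue                          # clause R(x) => R(s(s(x)))
                if s[zero] in R:                      # require R(s(0)) false
                    continue
                return zero, s, R
    return None

model = find_countermodel()
```

A two-element domain already suffices here (intuitively, the model tracks parity), which is the appeal of the method: small finite models can certify unreachability in infinite-state systems.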
More on Unfold/Fold Transformations of Normal Programs: Preservation of Fitting's Semantics
The unfold/fold transformation system defined by Tamaki and Sato was meant for definite programs. It transforms a program into an equivalent one in the sense of both the least Herbrand model semantics and the computed answer substitution semantics. Seki extended the method to normal programs and specialized it in order to also preserve the finite failure set. The resulting system is correct with respect to nearly all the declarative semantics for normal programs. An exception is Fitting's model semantics. In this paper we consider a slight variation of Seki's method and we study its correctness with respect to Fitting's semantics. We define an applicability condition for the fold operation and we show that it ensures the preservation of the considered semantics through the transformation.
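For readers unfamiliar with the unfold step these systems build on, here is a propositional sketch: unfolding a clause at a body atom replaces the clause by one resolvent per clause defining that atom. Predicate arguments and substitutions are omitted, so this only conveys the shape of the operation, not the full Tamaki-Sato system or its applicability conditions.

```python
# Clauses are (head, body_atoms) pairs; a program is a list of clauses.

def unfold(clause, atom, program):
    """Unfold `clause` at `atom`: one resolvent per defining clause."""
    head, body = clause
    assert atom in body, "atom to unfold must occur in the body"
    rest = [b for b in body if b != atom]
    return [(head, defn + rest) for (h, defn) in program if h == atom]

# p :- q, r.   with definitions   q :- s.   q :- t.
program = [("q", ["s"]), ("q", ["t"])]
resolvents = unfold(("p", ["q", "r"]), "q", program)
# resolvents == [("p", ["s", "r"]), ("p", ["t", "r"])]
```

Folding is the inverse step, replacing an instance of a definition's body by a call to its head; the semantics-preservation results above are precisely about when that inverse step is safe.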
On Sharing, Memoization, and Polynomial Time (Long Version)
We study how the adoption of an evaluation mechanism with sharing and
memoization impacts the class of functions which can be computed in polynomial
time. We first show that a natural cost model, in which looking up an already
computed value has no cost, is indeed invariant. As a corollary, we then prove
that the most general notion of ramified recurrence is sound for polynomial
time, thereby settling an open problem in implicit computational complexity.
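The effect of such an evaluation mechanism can be seen on a standard tree-recursive definition: with memoization, and with lookups counted as free as in the cost model just described, each argument value is evaluated only once, so the number of genuine evaluations drops from exponential to linear in n. The counter below is an illustrative device, not part of the paper's formal cost model.

```python
from functools import lru_cache

calls = 0  # counts evaluations that are not served from the memo table

@lru_cache(maxsize=None)
def fib(n):
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(30)
# calls == 31: one evaluation per argument 0..30, thanks to sharing;
# without memoization the same definition makes over a million calls.
```

This is the intuition behind the invariance result: charging zero for lookups does not smuggle in super-polynomial speedups, so polynomial time in this cost model coincides with the usual class.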
Turchin's Relation for Call-by-Name Computations: A Formal Approach
Supercompilation is a program transformation technique that was first
described by V. F. Turchin in the 1970s. In supercompilation, Turchin's
relation, a similarity relation on call-stack configurations, is used under both
call-by-value and call-by-name semantics to terminate unfolding of the program
being transformed. In this paper, we give a formal grammar model of
call-by-name stack behaviour. We classify the model in terms of the Chomsky
hierarchy and then formally prove that Turchin's relation can terminate all
computations generated by the model. Comment: In Proceedings VPT 2016, arXiv:1607.0183
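To give a feel for how a similarity relation on stacks can serve as a termination ("whistle") test, here is a deliberately simplified stand-in, assumed for illustration only and not the paper's definition of Turchin's relation: an earlier stack is deemed similar to a later one when the later stack preserves a prefix and a suffix of the earlier one with new material inserted in between. A supercompiler stops unfolding a branch when the new configuration is similar to an earlier one on that branch.

```python
# Stacks are lists of call labels, topmost first. This simplified relation
# accepts w2 as a "grown" version of w1 when w2 = prefix + inserted + suffix
# with w1 = prefix + suffix (the inserted segment may be empty).

def similar(w1, w2):
    if len(w2) < len(w1):
        return False
    return any(w1[:i] == w2[:i] and
               w1[i:] == w2[len(w2) - (len(w1) - i):]
               for i in range(len(w1) + 1))

# The stack "f g" grows to "f h h g": prefix "f" and suffix "g" preserved.
print(similar(["f", "g"], ["f", "h", "h", "g"]))  # True
print(similar(["f", "g"], ["h", "f"]))            # False
```

The paper's contribution is precisely that, for the formal grammar model of call-by-name stack behaviour it introduces, the genuine Turchin relation is strong enough to terminate every computation the model generates.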