Finite Countermodel Based Verification for Program Transformation (A Case Study)
Both automatic program verification and program transformation are based on
program analysis. In the past decade a number of approaches using various
automatic general-purpose program transformation techniques (partial deduction,
specialization, supercompilation) for verification of unreachability properties
of computing systems were introduced and demonstrated. On the other hand, the
semantics-based unfold-fold program transformation methods pose, and try to solve, various reachability tasks of their own, aiming to improve the semantics tree of the program being transformed. This means that general-purpose verification methods may be used to strengthen program transformation techniques. This paper considers how the finite-countermodel method for safety verification might be used within Turchin's supercompilation. We extract a number of supercompilation sub-algorithms that try to solve reachability problems, and demonstrate the use of an external countermodel finder for solving some of them.
Comment: In Proceedings VPT 2015, arXiv:1512.0221
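As a toy illustration of the finite-countermodel idea (not the paper's algorithm; all names below are illustrative), the Haskell sketch proves an unreachability property for a tiny string rewriting system: an interpretation into a finite domain that is preserved by every rule but separates the initial and bad configurations witnesses that the bad configuration is unreachable.

    -- A toy string rewriting system: each rule rewrites a substring.
    rules :: [(String, String)]
    rules = [("ab", "ba"), ("ba", "ab")]

    -- Interpretation into the finite domain Z/2: the number of 'a's
    -- modulo 2.  Counting is additive over concatenation, so invariance
    -- under the rules lifts to rewriting inside any context.
    interp :: String -> Int
    interp w = length (filter (== 'a') w) `mod` 2

    -- The interpretation witnesses unreachability if every rule preserves
    -- it while the initial and bad configurations get different values.
    provesUnreachable :: String -> String -> Bool
    provesUnreachable initial bad =
      all (\(l, r) -> interp l == interp r) rules
        && interp initial /= interp bad

    main :: IO ()
    main = print (provesUnreachable "aab" "abb")  -- True: "abb" is unreachable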
Verifying Temporal Regular Properties of Abstractions of Term Rewriting Systems
Tree automaton completion is an algorithm for proving safety properties of systems that can be modeled by a term rewriting system. This representation and verification technique works well for infinite-state systems such as cryptographic protocols and, more recently, Java Bytecode programs. The algorithm computes a tree automaton which represents a regular over-approximation of the set of terms reachable by rewriting from the initial terms. The approach is limited by the lack of information about the rewriting relation between terms: terms related by rewriting fall into the same equivalence class, since they are recognized by the same state of the tree automaton.
Our objective is to produce an automaton embedding an abstraction of the
rewriting relation sufficient to prove temporal properties of the term
rewriting system.
We propose to extend the algorithm to produce an automaton having more
equivalence classes to distinguish a term or a subterm from its successors
w.r.t. rewriting. While ground transitions are used to recognize equivalence
classes of terms, epsilon-transitions represent the rewriting relation between
terms. From the completed automaton, it is possible to automatically build a
Kripke structure abstracting the rewriting sequence. States of the Kripke
structure are states of the tree automaton and the transition relation is given
by the set of epsilon-transitions. States of the Kripke structure are labelled
by the set of terms recognized using ground transitions. On this Kripke
structure, we define the Regular Linear Temporal Logic (R-LTL) for expressing
properties. Such properties can then be checked using standard model checking
algorithms. The only difference between LTL and R-LTL is that predicates are replaced by regular sets of acceptable terms.
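A minimal sketch of the construction just described, with illustrative types and names (real completed automata are far richer): states of the Kripke structure are automaton states, its transitions are the epsilon-transitions, and each state is labelled with the terms it recognizes through ground transitions, here bounded in depth for brevity.

    import qualified Data.Map as Map
    import qualified Data.Set as Set

    type State = String
    data Term  = App String [Term] deriving (Eq, Ord, Show)

    -- A completed tree automaton: ground transitions f(q1,...,qn) -> q,
    -- plus epsilon-transitions q -> q' abstracting one rewriting step.
    data TA = TA
      { ground :: [((String, [State]), State)]
      , eps    :: [(State, State)]
      }

    -- Terms recognized at a state via ground transitions only, up to a
    -- depth bound (the real construction closes this recursively).
    recognized :: TA -> Int -> State -> Set.Set Term
    recognized _  0 _ = Set.empty
    recognized ta d q = Set.fromList
      [ App f args
      | ((f, qs), q') <- ground ta, q' == q
      , args <- mapM (Set.toList . recognized ta (d - 1)) qs
      ]

    -- The abstract Kripke structure: states, epsilon-transitions as the
    -- transition relation, and recognized terms as state labels.
    data Kripke = Kripke
      { kStates :: Set.Set State
      , kTrans  :: [(State, State)]
      , kLabel  :: Map.Map State (Set.Set Term)
      }

    toKripke :: TA -> Int -> Kripke
    toKripke ta d = Kripke states (eps ta) labels
      where
        states = Set.fromList
          (  [ q | ((_, qs), q') <- ground ta, q <- q' : qs ]
          ++ [ q | (a, b) <- eps ta, q <- [a, b] ] )
        labels = Map.fromSet (recognized ta d) states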
State space c-reductions for concurrent systems in rewriting logic
We present c-reductions, a state space reduction technique. The rough idea is to exploit an equivalence relation on states (possibly capturing system regularities) that preserves behavioral properties, and to explore the induced quotient system. This is done by means of a canonizer function, which maps each state to a (not necessarily unique) canonical representative of its equivalence class. The approach exploits the expressiveness of rewriting logic and its realization in Maude to enjoy several advantages over similar approaches: flexibility and simplicity in the definition of the reductions (supporting not only traditional symmetry reductions, but also name reuse and name abstraction); reasoning support for checking and proving correctness of the reductions; and automation of the reduction infrastructure via Maude's meta-programming features. The approach has been validated on a set of representative case studies, exhibiting results comparable to those of other tools.
Towards Static Analysis of Functional Programs using Tree Automata Completion
This paper presents the first step of a wider research effort to apply tree
automata completion to the static analysis of functional programs. Tree
Automata Completion is a family of techniques for computing or approximating
the set of terms reachable by a rewriting relation. The completion algorithm we
focus on is parameterized by a set E of equations controlling the precision of
the approximation and influencing its termination. For completion to be used as
a static analysis, the first step is to guarantee its termination. In this work, we thus give a sufficient condition on E and T(F) for the completion algorithm to always terminate. In the particular setting of functional programs, this condition can be relaxed into a condition on E and T(C) (terms built on the set of constructors) that is closer to what is done in the field of static analysis, where abstractions are performed on data.
Comment: Proceedings of WRLA'14. 201
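As a crude stand-in for the role the equation set E plays (all names illustrative; depth truncation is used here instead of equational abstraction), the sketch below over-approximates the terms reachable by rewriting while forcing the abstract domain to be finite, so the fixpoint loop always terminates.

    import qualified Data.Set as Set

    data Term = App String [Term] | Any   -- Any abstracts a cut-off subterm
      deriving (Eq, Ord, Show)

    -- Depth truncation: a (very coarse) substitute for the equations E
    -- that control precision and guarantee a finite set of abstract terms.
    cut :: Int -> Term -> Term
    cut 0 _          = Any
    cut _ Any        = Any
    cut d (App f ts) = App f (map (cut (d - 1)) ts)

    -- A toy root-rewriting step: s(x) -> s(s(x)).
    step :: Term -> [Term]
    step (App "s" [u]) = [App "s" [App "s" [u]]]
    step _             = []

    -- Over-approximate reachability: abstract every successor, iterate to
    -- a fixpoint; finiteness of the abstract domain ensures termination.
    reachAbs :: Int -> Term -> Set.Set Term
    reachAbs d t0 = go (Set.singleton (cut d t0))
      where
        go s | s' `Set.isSubsetOf` s = s
             | otherwise             = go (Set.union s s')
          where s' = Set.fromList (map (cut d) (concatMap step (Set.toList s)))

    main :: IO ()
    main = print (reachAbs 2 (App "s" [App "z" []]))
    -- fromList [App "s" [App "s" [Any]], App "s" [App "z" []]]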
Size-Change Termination as a Contract
Termination is an important but undecidable program property, which has led
to a large body of work on static methods for conservatively predicting or
enforcing termination. One such method is the size-change termination approach
of Lee, Jones, and Ben-Amram, which operates in two phases: (1) abstract
programs into "size-change graphs," and (2) check these graphs for the
size-change property: the existence of paths that lead to infinite decreasing
sequences.
We transpose these two phases with an operational semantics that accounts for
the run-time enforcement of the size-change property, postponing (or entirely
avoiding) program abstraction. This choice has two key consequences: (1)
size-change termination can be checked at run-time and (2) termination can be
rephrased as a safety property analyzed using existing methods for systematic
abstraction.
We formulate run-time size-change checks as contracts in the style of Findler and Felleisen. The result complements existing contracts that enforce partial correctness specifications, yielding contracts for total correctness. Our
approach combines the robustness of the size-change principle for termination
with the precise information available at run-time. It has tunable overhead and
can check for nontermination without the conservativeness necessary in static
checking. To obtain a sound and computable termination analysis, we apply existing abstract interpretation techniques directly to the operational semantics, avoiding the need for custom abstractions for termination. The resulting analyzer is competitive with existing, purpose-built analyzers.
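A minimal sketch of a run-time size-change check, assuming a single monitored Int argument that must strictly decrease across recursive calls (a special case: the paper's contracts handle general size-change graphs, and the names here are illustrative).

    -- Wrap an open-recursive body so that every recursive call is checked
    -- against the size of the caller's argument.
    sizeChecked :: (Int -> (Int -> Int) -> Int) -> Int -> Int
    sizeChecked body = go maxBound
      where
        go bound n
          | n >= bound = error "size-change contract violated: no decrease"
          | otherwise  = body n (go n)

    -- Example client: a correctly decreasing countdown.
    countdown :: Int -> Int
    countdown = sizeChecked (\n self -> if n <= 0 then 0 else self (n - 1))

    main :: IO ()
    main = print (countdown 5)  -- 0
    -- A buggy client is caught at run time instead of diverging:
    --   sizeChecked (\n self -> self n) 5  ==>  contract violation error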
CoLoR: a Coq library on well-founded rewrite relations and its application to the automated verification of termination certificates
Termination is an important property of programs; it is notably required of programs formulated in proof assistants. It is a very active subject of
research in the Turing-complete formalism of term rewriting systems, where many
methods and tools have been developed over the years to address this problem.
Ensuring reliability of those tools is therefore an important issue. In this
paper we present a library formalizing important results of the theory of
well-founded (rewrite) relations in the proof assistant Coq. We also present
its application to the automated verification of termination certificates, as
produced by termination tools.
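Certificate checking itself can be illustrated independently of Coq. The sketch below (illustrative names; CoLoR supports far richer criteria) checks a termination certificate for a unary rewrite system given as linear polynomial interpretations: every rule must strictly decrease for all natural arguments.

    -- A linear interpretation a*x + b over the naturals, with a >= 1.
    type Lin = (Int, Int)

    compose :: Lin -> Lin -> Lin          -- interpretation of f (g x)
    compose (a, b) (c, d) = (a * c, a * d + b)

    -- Interpretation of a word of unary symbols applied to a variable.
    interp :: [(String, Lin)] -> [String] -> Lin
    interp cert = foldr (\f acc -> compose (lookupLin f) acc) (1, 0)
      where lookupLin f = maybe (error ("no interpretation: " ++ f)) id
                                (lookup f cert)

    -- a*x + b > c*x + d for every natural x  iff  a >= c and b > d.
    decreases :: Lin -> Lin -> Bool
    decreases (a, b) (c, d) = a >= c && b > d

    -- Accept the certificate iff every rule strictly decreases.
    check :: [(String, Lin)] -> [([String], [String])] -> Bool
    check cert = all (\(l, r) -> decreases (interp cert l) (interp cert r))

    -- Example: rule f(f(x)) -> g(x) with [f](x) = x+2, [g](x) = x+1;
    -- interp ["f","f"] = (1,4) strictly above (1,1) = interp ["g"].
    main :: IO ()
    main = print (check [("f", (1, 2)), ("g", (1, 1))] [(["f","f"], ["g"])])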
Automatic Verification of Transactions on an Object-Oriented Database
In the context of the object-oriented data model, a compile-time approach is given that provides a significant reduction of the amount of run-time transaction overhead due to integrity constraint checking. The higher-order-logic theorem prover Isabelle is used to automatically prove which constraints might, or might not, be violated by a given transaction, in a manner analogous to the one used by Sheard and Stemple (1989) for the relational data model. A prototype transaction verification tool has been implemented, which automates the semantic mappings and generates proof goals for Isabelle. Test results are discussed to illustrate the effectiveness of our approach.
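One way to picture the compile-time reduction, as a much cruder check than the Isabelle proofs the paper uses (all names hypothetical): a constraint that reads no attribute the transaction writes cannot be violated, so its run-time check can be dropped.

    import qualified Data.Set as Set

    type Attr = String

    data Constraint = Constraint
      { cName  :: String
      , cReads :: Set.Set Attr     -- attributes the constraint inspects
      }

    data Transaction = Transaction
      { tName   :: String
      , tWrites :: Set.Set Attr    -- attributes the transaction updates
      }

    -- Keep only the constraints the transaction could actually violate.
    checksNeeded :: [Constraint] -> Transaction -> [Constraint]
    checksNeeded cs t =
      [ c | c <- cs, not (Set.disjoint (cReads c) (tWrites t)) ]

    main :: IO ()
    main = mapM_ (putStrLn . cName) (checksNeeded constraints raiseSalary)
      where
        constraints =
          [ Constraint "salary_nonnegative" (Set.fromList ["salary"])
          , Constraint "unique_email"       (Set.fromList ["email"]) ]
        raiseSalary = Transaction "raise" (Set.fromList ["salary"])
        -- prints only "salary_nonnegative"; the email check is dropped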
Synthesis of sup-interpretations: a survey
In this paper, we survey the complexity of distinct methods that allow the
programmer to synthesize a sup-interpretation, a function providing an upper-
bound on the size of the output values computed by a program. This constitutes a static space analysis tool that does not consider time consumption. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on terminating computations. First,
we study some undecidable properties of sup-interpretations from a theoretical
point of view. Next, we fix term rewriting systems as our computational model
and we show that a sup-interpretation can be obtained through the use of a
well-known termination technique, polynomial interpretations. The drawback
is that such a method only applies to total functions (strongly normalizing
programs). To overcome this problem we also study sup-interpretations through
the notion of quasi-interpretation. Quasi-interpretations also suffer from a
drawback that lies in the subterm property. This property drastically restricts
the shape of the considered functions. Again we overcome this problem by
introducing a new notion of interpretations mainly based on the dependency
pairs method. We study the decidability and complexity of the sup-interpretation synthesis problem for these three tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations.
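A minimal sketch of what a sup-interpretation asserts, with illustrative names: testing on samples can only refute a candidate bound, whereas the synthesis methods surveyed here establish it symbolically.

    -- Size measure for list outputs.
    outputSize :: [Int] -> Int
    outputSize = length

    -- Program under analysis: duplicate every element of a list.
    dup :: [Int] -> [Int]
    dup = concatMap (\x -> [x, x])

    -- Candidate sup-interpretation: output size <= 2 * input size.
    supCandidate :: Int -> Int
    supCandidate n = 2 * n

    -- A candidate is refuted if some sample's output exceeds the bound.
    refuted :: [[Int]] -> Bool
    refuted = any (\xs -> outputSize (dup xs) > supCandidate (length xs))

    main :: IO ()
    main = print (refuted [[], [1], [1, 2], [1 .. 10]])  -- False: bound holds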
Model Checking Linear Logic Specifications
The overall goal of this paper is to investigate the theoretical foundations of algorithmic verification techniques for first-order linear logic specifications. The fragment of linear logic we consider in this paper is based
on the linear logic programming language called LO enriched with universally
quantified goal formulas. Although LO was originally introduced as a
theoretical foundation for extensions of logic programming languages, it can
also be viewed as a very general language to specify a wide range of
infinite-state concurrent systems.
Our approach is based on the relation between backward reachability and
provability highlighted in our previous work on propositional LO programs.
Following this line of research, we define here a general framework for the bottom-up evaluation of first-order linear logic specifications. The evaluation procedure is based on an effective fixpoint operator working on a symbolic representation of infinite collections of first-order linear logic formulas.
The theory of well-quasi-orderings can be used to provide sufficient conditions for the termination of the evaluation of non-trivial fragments of first-order linear logic.
Comment: 53 pages, 12 figures. Under consideration for publication in Theory and Practice of Logic Programming.
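The flavor of backward evaluation with wqo-based termination can be sketched on a Petri-net-like abstraction of propositional LO (illustrative only; the paper's procedure handles first-order formulas symbolically): upward-closed sets of markings are represented by their minimal elements, and Dickson's lemma bounds the iteration.

    import Data.List (nub)

    -- Markings over a fixed list of places; a rule consumes `pre` and
    -- produces `post` (a multiset-rewriting abstraction).
    type Marking = [Int]
    data Rule = Rule { pre :: Marking, post :: Marking }

    leq :: Marking -> Marking -> Bool
    leq xs ys = and (zipWith (<=) xs ys)

    -- Least marking from which one application of the rule lands in the
    -- upward closure of m: the minimal backward step.
    preMin :: Rule -> Marking -> Marking
    preMin (Rule p q) m = zipWith3 (\mi qi pi' -> max mi qi - qi + pi') m q p

    -- An antichain of minimal elements represents an upward-closed set.
    minimize :: [Marking] -> [Marking]
    minimize ms = [ m | m <- ms', not (any (`strictlyBelow` m) ms') ]
      where
        ms' = nub ms
        strictlyBelow a b = leq a b && a /= b

    -- Backward fixpoint; Dickson's lemma (the wqo on tuples of naturals)
    -- guarantees this terminates.
    backward :: [Rule] -> [Marking] -> [Marking]
    backward rs basis
      | basis' == basis = basis
      | otherwise       = backward rs basis'
      where basis' = minimize (basis ++ [ preMin r m | r <- rs, m <- basis ])

    main :: IO ()
    main = print (backward [Rule [1, 0] [0, 1]] [[0, 2]])
    -- [[0,2],[1,1],[2,0]]: the minimal markings from which a marking with
    -- at least two tokens in the second place is reachable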