
    In and Out of SSA : a Denotational Specification

    We present non-standard denotational specifications of the SSA form and of its conversion processes from and to imperative programming languages. Thus, we provide a strong mathematical foundation for this intermediate code representation language used in modern compilers such as GCC or Intel CC. More specifically, we provide (1) a new functional approach to SSA, the Static Single Assignment form, together with its denotational semantics, (2) a collecting denotational semantics for a simple imperative language Imp, (3) a non-standard denotational semantics specifying the conversion of Imp to SSA, and (4) a non-standard denotational semantics for the reverse SSA to Imp conversion process. These translations are proven correct, ensuring that the structure of the memory states manipulated by imperative constructs is preserved in compilers' middle ends that use the SSA form as their control-flow data representation. Interestingly, as unexpected by-products of our conversion procedures, we offer (1) a new proof of the reducibility of the RAM computing model to the domain of Kleene's partial recursive functions, to which SSA is strongly related, and, on a more practical note, (2) a new algorithm to perform program slicing in imperative programming languages. All these specifications have been prototyped using GNU Common Lisp. These fundamental results prove that the widely used SSA technology is sound. Our formal denotational framework further suggests that the SSA form could become a target of choice for other optimization analysis techniques such as abstract interpretation or partial evaluation. Indeed, since the SSA form is language-independent, the resulting optimizations would be automatically enabled for any source language supported by compilers such as GCC.
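
    To make the conversion concrete, here is a minimal Python sketch of SSA renaming for straight-line code only; it is not the paper's denotational specification, and it omits the phi-functions that control-flow joins would require. The representation of assignments as (target, expression) string pairs and the helper name to_ssa are illustrative assumptions.

        # Toy SSA conversion for straight-line code: each assignment target
        # gets a fresh version number, and uses in expressions are rewritten
        # to the current version. String replacement is fragile (prefix
        # clashes), so longer variable names are substituted first; a real
        # implementation would rename over an AST instead.
        def to_ssa(assignments):
            version = {}  # current SSA version per source variable
            ssa = []
            for target, expr in assignments:
                for var, ver in sorted(version.items(), key=lambda kv: -len(kv[0])):
                    expr = expr.replace(var, f"{var}{ver}")
                version[target] = version.get(target, 0) + 1
                ssa.append((f"{target}{version[target]}", expr))
            return ssa

        # x := 1; y := x + 1; x := x + y
        # becomes x1 := 1; y1 := x1 + 1; x2 := x1 + y1
        print(to_ssa([("x", "1"), ("y", "x + 1"), ("x", "x + y")]))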

    Test Case Generation for Object-Oriented Imperative Languages in CLP

    Testing is a vital part of the software development process. Test Case Generation (TCG) is the process of automatically generating a collection of test cases which are applied to a system under test. White-box TCG is usually performed by means of symbolic execution, i.e., instead of executing the program on normal values (e.g., numbers), the program is executed on symbolic values representing arbitrary values. When dealing with an object-oriented (OO) imperative language, symbolic execution becomes challenging: among other things, it must be able to backtrack, complex heap-allocated data structures must be created during the TCG process, and features like inheritance, virtual invocations, and exceptions have to be taken into account. Because of its inherent symbolic execution mechanism, we argue in this paper that Constraint Logic Programming (CLP) has a promising, as yet unexploited, application field in TCG. We support our claim by developing a fully CLP-based framework for TCG of an OO imperative language, and by assessing a corresponding implementation on a set of challenging Java programs. A unique characteristic of our approach is that it handles all language features using only CLP, without the need to develop specific constraint operators (e.g., to model the heap).
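
    As a rough picture of what the CLP engine contributes, the following Python sketch enumerates the two paths of a one-branch program (abs) and searches for a concrete input satisfying each path's constraints. In the paper, CLP's built-in backtracking and constraint solving play the roles that the illustrative explore and solve functions crudely simulate here by exhaustive search over a small integer domain.

        # Enumerate the feasible paths of 'abs(x)' and find a test input for
        # each. Each path is a tuple of constraints, represented as Python
        # predicates on the (symbolic) input.
        def explore(path=()):
            yield path + ((lambda x: x > 0),), "then-branch: return x"
            yield path + ((lambda x: not (x > 0)),), "else-branch: return -x"

        def solve(constraints, domain=range(-10, 11)):
            # Brute-force witness search; a CLP solver does this symbolically.
            for x in domain:
                if all(c(x) for c in constraints):
                    return x
            return None  # path infeasible over the search domain

        for constraints, desc in explore():
            print(desc, "-> test input x =", solve(constraints))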

    Verification of Imperative Programs by Constraint Logic Program Transformation

    We present a method for verifying partial correctness properties of imperative programs that manipulate integers and arrays by using techniques based on the transformation of constraint logic programs (CLP). We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case prog is not correct), or by (ii) the empty set of clauses (and in this case prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation.
    Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
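
    The encoding idea can be made concrete with a toy example. The sketch below is a hedged illustration rather than the paper's method: it computes the least model of a reachability predicate by fixpoint iteration over a finite transition system, so that 'incorrect' holds exactly when an error configuration is derivable from an initial one. The paper instead transforms the CLP program symbolically, which also covers infinite-state programs where such naive enumeration would not terminate.

        # Least-model reachability for a toy transition system; all names
        # and the example transition relation are illustrative.
        initial = {0}                    # initial configurations
        error = {3}                      # error configurations
        step = {0: {1}, 1: {2}, 2: {1}}  # transitions (2 -> 1 is a loop)

        def incorrect():
            # Fixpoint iteration: saturate the set of reachable configs.
            reachable = set(initial)
            changed = True
            while changed:
                changed = False
                for c in list(reachable):
                    for d in step.get(c, ()):
                        if d not in reachable:
                            reachable.add(d)
                            changed = True
            return bool(reachable & error)

        print("incorrect" if incorrect() else "correct")  # config 3 unreachable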

    Test Data Generation of Bytecode by CLP Partial Evaluation

    We employ existing partial evaluation (PE) techniques developed for Constraint Logic Programming (CLP) in order to automatically generate test-case generators for glass-box testing of bytecode. Our approach consists of two independent CLP PE phases. (1) First, the bytecode is transformed into an equivalent (decompiled) CLP program. This is an already well-studied transformation which can be done either by using an ad-hoc decompiler or by specialising a bytecode interpreter by means of existing PE techniques. (2) A second PE is performed in order to supervise the generation of test-cases by execution of the CLP decompiled program. Interestingly, we employ control strategies previously defined in the context of CLP PE in order to capture coverage criteria for glass-box testing of bytecode. A unique feature of our approach is that this second PE phase allows generating not only test-cases but also test-case generators. To the best of our knowledge, this is the first time that (CLP) PE techniques have been applied to test-case generation as well as to the generation of test-case generators.
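
    Phase (1) is an instance of interpreter specialization. The following Python sketch shows the idea in miniature, assuming a toy accumulator bytecode in place of real bytecode and Python closures in place of residual CLP clauses: specializing the interpreter with respect to a known program resolves the instruction dispatch once, leaving a residual program. The names interp and specialize are illustrative.

        # Reference interpreter for a toy accumulator bytecode.
        def interp(code, x):
            acc = x
            for op, arg in code:
                if op == "add":
                    acc += arg
                elif op == "mul":
                    acc *= arg
            return acc

        # "Partially evaluate" interp on a known program: the dispatch on
        # opcodes is performed once, at specialization time.
        def specialize(code):
            ops = {"add": lambda a, n: a + n, "mul": lambda a, n: a * n}
            resolved = [(ops[op], arg) for op, arg in code]
            def residual(x):
                for f, arg in resolved:
                    x = f(x, arg)
                return x
            return residual

        prog = [("add", 3), ("mul", 2)]  # computes (x + 3) * 2
        f = specialize(prog)
        assert f(5) == interp(prog, 5) == 16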

    Verification of Java Bytecode using Analysis and Transformation of Logic Programs

    State-of-the-art analyzers in the Logic Programming (LP) paradigm are nowadays mature and sophisticated. They allow inferring a wide variety of global properties, including termination, bounds on resource consumption, etc. The aim of this work is to automatically transfer the power of such analysis tools for LP to the analysis and verification of Java bytecode (JVML). In order to achieve our goal, we rely on well-known techniques for meta-programming and program specialization. More precisely, we propose to partially evaluate a JVML interpreter implemented in LP together with (an LP representation of) a JVML program and then analyze the residual program. Interestingly, at least for the examples we have studied, our approach produces very simple LP representations of the original JVML programs. This can be seen as a decompilation from JVML to high-level LP source. By reasoning about such residual programs, we can automatically prove in the CiaoPP system some non-trivial properties of JVML programs, such as termination and run-time error freeness, and infer bounds on their resource consumption. We are not aware of any other system which is able to verify such advanced properties of Java bytecode.
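
    To give a flavor of why such residual programs are easy to analyze, here is a hypothetical sketch with Python standing in for the residual LP clauses: a JVML-style countdown loop decompiles to a small recursion whose argument strictly decreases, which is exactly the kind of evidence that termination analyzers exploit.

        # Plausible residual of a bytecode loop 'while (n > 0) { acc += n; n--; }'.
        def residual_loop(n, acc=0):
            if n <= 0:
                return acc
            return residual_loop(n - 1, acc + n)  # n strictly decreases

        print(residual_loop(4))  # 4 + 3 + 2 + 1 = 10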

    Quantum Programming Made Easy

    We present IQu, a quantum programming language that extends Reynolds' Idealized Algol, the paradigmatic core of Algol-like languages. IQu combines imperative programming with higher-order features, mediated by a simple type theory. IQu gently merges its quantum features with the classical programming style familiar from Idealized Algol, the aim being to ease the transition towards the quantum programming world. The proposed extension is made along two main directions. First, IQu provides access to quantum co-processors by means of quantum stores. Second, IQu includes some support for the direct manipulation of quantum circuits, in accordance with recent trends in the development of quantum programming languages. Finally, we show that IQu is quite effective in expressing well-known quantum algorithms.
    Comment: In Proceedings Linearity-TLLA 2018, arXiv:1904.0615
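
    A minimal sketch of the quantum-store idea follows, assuming a numpy statevector simulation in place of IQu's quantum co-processor and a Python class in place of IQu's syntax: the store holds a quantum state that imperative code updates in place by applying gates.

        import numpy as np

        class QuantumStore:
            # One qubit held in a mutable store, updated by unitary gates.
            def __init__(self):
                self.state = np.array([1.0, 0.0], dtype=complex)  # |0>

            def apply(self, gate):
                self.state = gate @ self.state

            def measure_probs(self):
                return np.abs(self.state) ** 2

        H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

        q = QuantumStore()
        q.apply(H)                # put the qubit into superposition
        print(q.measure_probs())  # [0.5 0.5]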

    Applying Formal Methods to Networking: Theory, Techniques and Applications

    Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean-slate Internet design, especially the software-defined networking (SDN) paradigm, offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
    Comment: 30 pages, submitted to IEEE Communications Surveys and Tutorials

    Size-Change Termination as a Contract

    Termination is an important but undecidable program property, which has led to a large body of work on static methods for conservatively predicting or enforcing termination. One such method is the size-change termination approach of Lee, Jones, and Ben-Amram, which operates in two phases: (1) abstract programs into "size-change graphs," and (2) check these graphs for the size-change property: the existence of paths that lead to infinite decreasing sequences. We transpose these two phases with an operational semantics that accounts for the run-time enforcement of the size-change property, postponing (or entirely avoiding) program abstraction. This choice has two key consequences: (1) size-change termination can be checked at run-time, and (2) termination can be rephrased as a safety property analyzed using existing methods for systematic abstraction. We formulate run-time size-change checks as contracts in the style of Findler and Felleisen. The result complements existing contracts that enforce partial correctness specifications, yielding contracts for total correctness. Our approach combines the robustness of the size-change principle for termination with the precise information available at run-time. It has tunable overhead and can check for nontermination without the conservativeness necessary in static checking. To obtain a sound and computable termination analysis, we apply existing abstract interpretation techniques directly to the operational semantics, avoiding the need for custom abstractions for termination. The resulting analyzer is competitive with existing, purpose-built analyzers.
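
    The run-time half of this idea can be sketched with an ordinary decorator, assuming Python decorators as a stand-in for Findler/Felleisen-style contracts; the name sct_contract is illustrative. The decorator monitors nested recursive calls and raises a contract violation whenever the designated size measure fails to strictly decrease.

        import functools

        def sct_contract(size=lambda *args: args[0]):
            # Require that 'size' strictly decreases across nested calls.
            def wrap(f):
                stack = []  # sizes of the active (nested) calls
                @functools.wraps(f)
                def monitored(*args):
                    s = size(*args)
                    if stack and not (s < stack[-1]):
                        raise RuntimeError(
                            f"size-change violation: {s} !< {stack[-1]}")
                    stack.append(s)
                    try:
                        return f(*args)
                    finally:
                        stack.pop()
                return monitored
            return wrap

        @sct_contract(size=lambda n: n)
        def fact(n):
            return 1 if n <= 0 else n * fact(n - 1)

        print(fact(5))  # 120: every recursive call decreases n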