
    Automated Termination Proofs for Logic Programs by Term Rewriting

    There are two kinds of approaches for termination analysis of logic programs: "transformational" and "direct" ones. Direct approaches prove termination directly on the basis of the logic program. Transformational approaches transform a logic program into a term rewrite system (TRS) and then analyze termination of the resulting TRS instead. Thus, transformational approaches make all methods previously developed for TRSs available for logic programs as well. However, the applicability of most existing transformations is quite restricted, as they can only be used for certain subclasses of logic programs. (Most of them are restricted to well-moded programs.) In this paper we improve these transformations such that they become applicable to any definite logic program. To simulate the behavior of logic programs by TRSs, we slightly modify the notion of rewriting by permitting infinite terms. We show that our transformation results in TRSs which are indeed suitable for automated termination analysis. In contrast to most other methods for termination of logic programs, our technique is also sound for logic programming without occur check, which is typically used in practice. We implemented our approach in the termination prover AProVE and successfully evaluated it on a large collection of examples. Comment: 49 pages
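
    As a hedged illustration of the transformation idea only (a simplified shape assuming the mode append(in, in, out); the symbol names appendIn/appendOut/u1 are my own and not the exact rules produced by the paper's transformation or by AProVE), the two clauses of the classic append/3 predicate can be encoded as rewrite rules over first-order terms:

```haskell
-- Minimal first-order terms and rewrite rules; a sketch only, assuming the
-- usual "in/out" style of translating definite clauses into a TRS.
data Term = Var String | Fun String [Term] deriving (Eq, Show)

type Rule = (Term, Term)   -- (left-hand side, right-hand side)

-- append([],Ys,Ys).
-- append([X|Xs],Ys,[X|Zs]) :- append(Xs,Ys,Zs).
appendTRS :: [Rule]
appendTRS =
  [ ( Fun "appendIn" [Fun "nil" [], Var "Ys"]
    , Fun "appendOut" [Var "Ys"] )
  , ( Fun "appendIn" [Fun "cons" [Var "X", Var "Xs"], Var "Ys"]
    , Fun "u1" [Fun "appendIn" [Var "Xs", Var "Ys"], Var "X"] )
  , ( Fun "u1" [Fun "appendOut" [Var "Zs"], Var "X"]
    , Fun "appendOut" [Fun "cons" [Var "X", Var "Zs"]] )
  ]
```

    Proving termination of such a resulting TRS then serves as the termination proof for the logic program it was derived from, which is the point of the transformational approach described above.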

    Termination of Rewriting with and Automated Synthesis of Forbidden Patterns

    We introduce a modified version of the well-known dependency pair framework that is suitable for the termination analysis of rewriting under forbidden pattern restrictions. By attaching contexts to dependency pairs that represent the calling contexts of the corresponding recursive function calls, it is possible to incorporate the forbidden pattern restrictions in the (adapted) notion of dependency pair chains, thus yielding a sound and complete approach to termination analysis. Building upon this contextual dependency pair framework we introduce a dependency pair processor that simplifies problems by analyzing the contextual information of the dependency pairs. Moreover, we show how this processor can be used to synthesize forbidden patterns suitable for a given term rewriting system on-the-fly during the termination analysis. Comment: In Proceedings IWS 2010, arXiv:1012.533
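
    A minimal sketch of the data such a contextual dependency pair might carry (the types and field names are my own illustration, not the paper's formalization): a marked left-hand side, the marked recursive call, and the context in which that call occurs.

```haskell
-- Illustrative types only: a dependency pair together with the calling
-- context of the recursive call it represents.
data Term    = Var String | Fun String [Term] deriving (Eq, Show)
data Context = Hole | CFun String [Term] Context [Term] deriving (Eq, Show)

data ContextualDP = ContextualDP
  { dpLhs     :: Term      -- marked left-hand side
  , dpCall    :: Term      -- marked recursive call in the right-hand side
  , dpContext :: Context   -- context surrounding that call
  } deriving (Eq, Show)

-- Plugging a term into the hole of a context recovers the surrounding right-hand side.
plug :: Context -> Term -> Term
plug Hole t             = t
plug (CFun f ls c rs) t = Fun f (ls ++ [plug c t] ++ rs)
```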

    Stream Productivity by Outermost Termination

    Streams are infinite sequences over a given data type. A stream specification is a set of equations intended to define a stream. A core property is productivity: unfolding the equations produces the intended stream in the limit. In this paper we show that productivity is equivalent to termination with respect to the balanced outermost strategy of a TRS obtained by adding an additional rule. For specifications not involving branching symbols, balancedness is obtained for free, so that tools for proving outermost termination can be used to prove productivity fully automatically.
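
    A hedged illustration of the property itself (not of the paper's reduction to outermost termination): in a lazy language, a productive specification keeps producing stream constructors, while a non-productive one never yields even its first element.

```haskell
-- Productive: unfolding the equation produces one constructor per step,
-- so any finite prefix can be computed (take 5 ones == [1,1,1,1,1]).
ones :: [Int]
ones = 1 : ones

-- Not productive: the equation never produces a constructor,
-- so even `take 1 bad` diverges.
bad :: [Int]
bad = tail bad
```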

    12th International Workshop on Termination (WST 2012), February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to be a venue for the presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The 12th International Workshop on Termination welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted. Each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosted three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Loops under Strategies ... Continued

    While there are many approaches for automatically proving termination of term rewrite systems, up to now there exist only a few techniques to disprove their termination automatically. Almost all of these techniques try to find loops, where the existence of a loop implies non-termination of the rewrite system. However, most programming languages use specific evaluation strategies, whereas loop detection techniques usually do not take strategies into account. So even if a rewrite system has a loop, it may still be terminating under certain strategies. Therefore, our goal is to develop decision procedures which can determine whether a given loop is also a loop under the respective evaluation strategy. In earlier work, such procedures were presented for the strategies of innermost, outermost, and context-sensitive evaluation. In the current paper, we build upon this work and develop such decision procedures for important strategies like leftmost-innermost, leftmost-outermost, (max-)parallel-innermost, (max-)parallel-outermost, and forbidden patterns (which generalize innermost, outermost, and context-sensitive strategies). In this way, we obtain the first approach to disprove termination under these strategies automatically. Comment: In Proceedings IWS 2010, arXiv:1012.533
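
    A hedged toy example of the phenomenon addressed here (my own example, not taken from the paper): a TRS can admit a loop, and hence be non-terminating under unrestricted rewriting, while every innermost derivation terminates.

```haskell
-- Two rewrite rules, written as (lhs, rhs) pairs over first-order terms.
data Term = Var String | Fun String [Term] deriving (Eq, Show)
type Rule = (Term, Term)

toyTRS :: [Rule]
toyTRS =
  [ ( Fun "f" [Fun "g" [Var "x"]], Fun "f" [Fun "g" [Var "x"]] )  -- f(g(x)) -> f(g(x))
  , ( Fun "g" [Var "x"],           Var "x" )                      -- g(x)    -> x
  ]
-- Unrestricted rewriting loops: f(g(x)) -> f(g(x)) -> ...
-- Innermost rewriting must contract the inner redex g(x) first, reaching the
-- normal form f(x), so the loop above is not an innermost loop.
```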

    Extending Context-Sensitivity in Term Rewriting

    We propose a generalized version of context-sensitivity in term rewriting based on the notion of "forbidden patterns". The basic idea is that a rewrite step should be forbidden if the redex to be contracted has a certain shape and appears in a certain context. This shape and context are expressed through forbidden patterns. In particular, we analyze the relationship between this novel approach and the commonly used notion of context-sensitivity in term rewriting, as well as the feasibility of rewriting with forbidden patterns from a computational point of view. The latter feasibility is characterized by demanding that restricting a rewrite relation yields an improved termination behaviour while still being powerful enough to compute meaningful results. Sufficient criteria for both kinds of properties in certain classes of rewrite systems with forbidden patterns are presented.
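
    A hedged sketch of the data a forbidden pattern might carry (the representation is my own, not the paper's definition): a term pattern, a position inside it, and a flag indicating whether reduction is forbidden at, strictly below, or strictly above that position. Ordinary context-sensitive rewriting would then correspond to patterns forbidding reduction below selected argument positions.

```haskell
-- Illustrative representation of a forbidden pattern.
data Term     = Var String | Fun String [Term] deriving (Eq, Show)
type Position = [Int]                  -- path of argument indices from the root
data Where    = Here | Below | Above deriving (Eq, Show)

data ForbiddenPattern = ForbiddenPattern
  { fpTerm  :: Term       -- the shape the surrounding term has to match
  , fpPos   :: Position   -- the distinguished position within that shape
  , fpWhere :: Where      -- where, relative to that position, rewriting is forbidden
  } deriving (Eq, Show)
```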

    Extensional and Intensional Strategies

    This paper is a contribution to the theoretical foundations of strategies. We first present a general definition of abstract strategies which is extensional in the sense that a strategy is defined explicitly as a set of derivations of an abstract reduction system. We then move to a more intensional definition supporting the abstract view but more operational in the sense that it describes a means for determining such a set. We characterize the class of extensional strategies that can be defined intensionally. We also give some hints towards a logical characterization of intensional strategies and propose a few challenging perspectives.
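
    As a hedged illustration of the two views (the types below are my own choice, not the paper's notation), an extensional strategy can be modelled as a predicate on derivations, whereas an intensional strategy maps the derivation computed so far to the objects that may come next; an intensional strategy then induces an extensional one.

```haskell
import Data.List (inits)

-- Illustrative types only, over objects of an abstract reduction system.
type Derivation a  = [a]                    -- a (finite prefix of a) reduction sequence
type Extensional a = Derivation a -> Bool   -- which derivations belong to the strategy
type Intensional a = Derivation a -> [a]    -- given the history, the admissible next objects

-- The induced extensional strategy: a derivation is admitted iff every element
-- after the first is among the choices offered for the prefix preceding it.
induced :: Eq a => Intensional a -> Extensional a
induced next d =
  and [ x `elem` next pre | (pre, x) <- zip (drop 1 (inits d)) (drop 1 d) ]
```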

    Multiset Path Orderings and Their Application to Termination of Term Rewriting Systems

    In this expository paper, a comprehensive study of multiset orderings, nested multiset orderings, and multiset path orderings is presented. In particular, it is illustrated how multiset path orderings admit the use of relatively simple and intuitive termination functions that lead to termination of a class of term rewriting systems.
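
    A hedged sketch of the multiset extension of a strict order in the usual Dershowitz-Manna style (my own implementation, not the paper's presentation): after cancelling elements common to both multisets, M dominates N iff something is left of M and every leftover element of N is smaller than some leftover element of M.

```haskell
import Data.List (delete)

-- Multiset extension of a strict order `gt`, with multisets given as lists.
multisetGreater :: Eq a => (a -> a -> Bool) -> [a] -> [a] -> Bool
multisetGreater gt m n = not (null m') && all (\y -> any (`gt` y) m') n'
  where
    (m', n') = cancel m n
    -- remove elements occurring in both multisets (respecting multiplicities)
    cancel xs []     = (xs, [])
    cancel xs (y:ys)
      | y `elem` xs  = cancel (delete y xs) ys
      | otherwise    = let (xs', ys') = cancel xs ys in (xs', y : ys')

-- Example: {5,3,1,1} >mul {4,3,3,1}, since after cancelling 3 and 1 the
-- leftover 4 and 3 are both smaller than the leftover 5.
example :: Bool
example = multisetGreater (>) [5, 3, 1, 1 :: Int] [4, 3, 3, 1]
```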

    Cell libraries and verification

    Digital electronic devices are often implemented using cell libraries to provide the basic logic elements, such as Boolean functions and on-chip memories. To be usable both during the development of chips, which is usually done in a hardware definition language, and for the final layout, which consists of lithographic masks, cells are described in multiple ways. Among these, there are multiple descriptions of the behavior of cells, for example one at the level of hardware definition languages and another one in terms of the transistors that are ultimately produced. Thus, correct functioning of the device also depends on the correctness of the cell library, requiring all views of a cell to correspond with each other. In this thesis, techniques are presented to verify some of these correspondences in cell libraries.

    First, a technique is presented to check that the functional description in a hardware definition language and the transistor netlist description implement the same behavior. For this purpose, a semantics is defined for the commonly used subset of the hardware definition language Verilog. This semantics is encoded into Boolean equations, which can also be extracted from a transistor netlist. A model checker is then used to prove equivalence of these two descriptions, or to provide a counterexample showing that they are different.

    Even in basic elements such as cells, there exists non-determinism reflecting internal behavior that cannot be controlled from the outside. It is desired, however, that such internal behavior does not lead to different externally observable behavior, i.e., to different computation results. This thesis presents a technique to efficiently check, both for hardware definition language descriptions and for transistor netlist descriptions, whether non-determinism has an effect on the observable computation or not.

    Power consumption of chips has become a very important topic, especially since devices have become mobile and are therefore battery powered. Thus, in order to predict and to maximize battery life, the power consumption of cells should be measured and reduced in an efficient way. To achieve these goals, this thesis also takes power consumption into account when analyzing non-deterministic behavior. On the one hand, behaviors consuming the same amount of power then have to be measured only once. On the other hand, functionally equivalent computations can be forced to consume the least amount of power without affecting the externally observable behavior of the cell, for example by introducing appropriate delays.

    A way to prevent externally observable non-deterministic behavior in practical hardware designs is to add timing checks. These checks rule out certain input patterns which must not be generated by the environment of a cell. If an input pattern can be found that is not forbidden by any of the timing checks, yet allows non-deterministic behavior, then the cell's environment is not sufficiently restricted, which usually indicates a forgotten timing check. Therefore, the check for non-determinism is extended to also respect these timing checks and to consider only counterexamples that are not ruled out. If such a counterexample can be found, it gives an indication of which timing checks need to be added.

    Because current hardware designs run at very high speeds, timing analysis of cells has become a very important issue. For this purpose, cell libraries include a description of the delay arcs present in a cell, giving the amount of time it takes for an input change to propagate to the outputs of a cell. For these descriptions, too, it is desired that they reflect the actual behavior of the cell. On the one hand, a delay arc that never manifests itself may result in a clock frequency that is lower than necessary. On the other hand, a forgotten delay arc can cause the clock frequency to be too high, impairing the functioning of the final chip. To relate the functional description of a cell with its timing specification, this thesis presents techniques to check whether delay arcs are consistent with the functionality, and to list all possible delay arcs.

    Computing new output values of a cell given some new input values requires all connections among the transistors in a cell to reach stable values. Hitherto it was assumed that such a stable situation will always be reached eventually. To actually check this, a wire is abstracted into a sequence of stable values. Using this abstraction, checking whether stable situations are always reached is reduced to analyzing whether an infinite sequence of such stable values exists. This is known in the term rewriting literature as productivity, the infinitary equivalent of termination. The final contributions of this thesis are techniques to automatically prove productivity. For this purpose, existing termination proving tools for term rewriting are re-used to benefit from their tremendous strength and their continuous improvements.
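
    As a hedged, heavily simplified illustration of the first check described above (the real flow encodes Verilog semantics and transistor netlists into Boolean equations and uses a model checker; the cell, the names, and the brute-force check below are my own toy stand-ins): two views of a 2-input NAND cell are compared on all input valuations.

```haskell
-- A combinational 2-input cell viewed as a Boolean function.
type Cell = Bool -> Bool -> Bool

rtlNand :: Cell                 -- behaviour as it might be written in the HDL view
rtlNand a b = not (a && b)

netlistNand :: Cell             -- behaviour as it might be derived from the netlist
netlistNand a b = not a || not b

-- Exhaustive equivalence check over the four input valuations; a real cell
-- library flow would use SAT/model checking instead of enumeration.
equivalent :: Cell -> Cell -> Bool
equivalent f g = and [ f a b == g a b | a <- [False, True], b <- [False, True] ]

main :: IO ()
main = print (equivalent rtlNand netlistNand)   -- True: the two views agree
```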

    Verification of Imperative Programs by Constraint Logic Program Transformation

    We present a method for verifying partial correctness properties of imperative programs that manipulate integers and arrays by using techniques based on the transformation of constraint logic programs (CLP). We use CLP as a metalanguage for representing imperative programs, their executions, and their properties. First, we encode the correctness of an imperative program, say prog, as the negation of a predicate 'incorrect' defined by a CLP program T. By construction, 'incorrect' holds in the least model of T if and only if the execution of prog from an initial configuration eventually halts in an error configuration. Then, we apply to program T a sequence of transformations that preserve its least model semantics. These transformations are based on well-known transformation rules, such as unfolding and folding, guided by suitable transformation strategies, such as specialization and generalization. The objective of the transformations is to derive a new CLP program TransfT where the predicate 'incorrect' is defined either by (i) the fact 'incorrect.' (and in this case prog is not correct), or by (ii) the empty set of clauses (and in this case prog is correct). In the case where we derive a CLP program such that neither (i) nor (ii) holds, we iterate the transformation. Since the problem is undecidable, this process may not terminate. We show through examples that our method can be applied in a rather systematic way, and is amenable to automation by transferring to the field of program verification many techniques developed in the field of program transformation. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
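
    A hedged illustration of the underlying correctness encoding only (not the CLP transformation machinery itself; the toy transition relation and error predicate below are my own assumptions): 'incorrect' holds exactly when an error configuration is reachable from an initial configuration, here decided by exhaustive search over a finite toy state space.

```haskell
import qualified Data.Set as Set

type Config = Int

-- Toy transition relation and error configurations (assumptions for illustration).
step :: Config -> [Config]
step c = [ c + 1 | c < 10 ]

isError :: Config -> Bool
isError c = c == 42

-- incorrect c0 == True  iff some error configuration is reachable from c0.
incorrect :: Config -> Bool
incorrect c0 = go Set.empty [c0]
  where
    go _    []              = False
    go seen (c:cs)
      | isError c           = True
      | c `Set.member` seen = go seen cs
      | otherwise           = go (Set.insert c seen) (step c ++ cs)
```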