
    Lazy rewriting on eager machinery

    We define Lazy Term Rewriting Systems and show that they can be realized by local adaptations of an eager implementation of conventional term rewriting systems. The overhead of lazy evaluation is incurred only when lazy evaluation is actually performed. Our method is modelled by a transformation of term rewriting systems, which concisely expresses the intricate interaction between pattern matching and lazy evaluation. The method extends easily to term graph rewriting.
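    The basic idea can be pictured with explicit, memoizing thunks on an otherwise eager evaluator. The Haskell sketch below is illustrative only (the Term type, force, and isZero are invented here, not the paper's transformation): a deferred subterm is wrapped in a thunk, and pattern matching forces it only when that argument is actually inspected, so unevaluated subterms cost nothing until demanded.

        import Data.IORef

        -- Hypothetical term type: an extra Thunk node defers reduction work.
        data Term = Zero | Succ Term | Thunk (IORef (Either (IO Term) Term))

        -- Force a thunk at most once, caching its result; the overhead of
        -- laziness is paid only when a deferred subterm is really demanded.
        force :: Term -> IO Term
        force (Thunk ref) = do
          cell <- readIORef ref
          case cell of
            Right t  -> return t
            Left act -> do
              t <- act >>= force
              writeIORef ref (Right t)
              return t
        force t = return t

        -- Pattern matching drives evaluation: only the argument positions a
        -- rule inspects are forced, mirroring lazy pattern matching.
        isZero :: Term -> IO Bool
        isZero t = do
          t' <- force t
          case t' of
            Zero -> return True
            _    -> return False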

    Type classes for efficient exact real arithmetic in Coq

    Floating point operations are fast, but require continuous effort on the part of the user to ensure that the results are correct. This burden can be shifted away from the user by providing a library of exact analysis in which the computer handles the error estimates. Previously, we [Krebbers/Spitters 2011] provided a fast implementation of the exact real numbers in the Coq proof assistant. Our implementation improved on an earlier implementation by O'Connor by using type classes to describe an abstract specification of the underlying dense set from which the real numbers are built. In particular, we used dyadic rationals built from Coq's machine integers, which already gave a 100-fold speed-up of the basic operations. This article is a substantially expanded version of [Krebbers/Spitters 2011] in which the implementation is extended in several ways. First, we implement and verify the sine and cosine functions. Secondly, we create an additional implementation of the dense set based on Coq's fast rational numbers. Thirdly, we extend the hierarchy to capture order on undecidable structures, whereas it was previously limited to decidable structures. This hierarchy, based on type classes, allows us to share theory on the naturals, integers, rationals, dyadics, and reals in a convenient way. Finally, we obtain another dramatic speed-up by avoiding evaluation of termination proofs at runtime.
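    The type-class approach can be sketched outside Coq as well. The Haskell fragment below is a loose illustration under assumed names (DenseSet, Dyadic, Real', and addR are invented here, not the library's interface): a real number is a precision-indexed approximation drawn from an abstract dense set, so the same arithmetic works over rationals, dyadics, or any other instance of the class.

        -- Hypothetical abstraction of the dense set the reals are built from.
        class DenseSet q where
          fromInt :: Integer -> q
          addQ    :: q -> q -> q
          mulQ    :: q -> q -> q

        -- A real is a function from a requested precision 2^(-n) to an
        -- approximation accurate within that precision.
        newtype Real' q = Real' (Int -> q)

        -- Adding reals: ask both arguments for one extra bit of precision
        -- so the sum of the two errors stays below 2^(-n).
        addR :: DenseSet q => Real' q -> Real' q -> Real' q
        addR (Real' f) (Real' g) = Real' (\n -> addQ (f (n + 1)) (g (n + 1)))

        -- One possible instance: dyadic rationals m * 2^e.
        data Dyadic = Dyadic Integer Int   -- mantissa, exponent

        instance DenseSet Dyadic where
          fromInt n = Dyadic n 0
          addQ (Dyadic m1 e1) (Dyadic m2 e2)
            | e1 <= e2  = Dyadic (m1 + m2 * 2 ^ (e2 - e1)) e1
            | otherwise = addQ (Dyadic m2 e2) (Dyadic m1 e1)
          mulQ (Dyadic m1 e1) (Dyadic m2 e2) = Dyadic (m1 * m2) (e1 + e2)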

    Maximal Sharing in the Lambda Calculus with letrec

    Increasing sharing in programs is desirable to compactify the code, and to avoid duplication of reduction work at run-time, thereby speeding up execution. We show how a maximal degree of sharing can be obtained for programs expressed as terms in the lambda calculus with letrec. We introduce a notion of `maximal compactness' for lambda-letrec-terms among all terms with the same infinite unfolding. Instead of being defined purely syntactically, this notion is based on a graph semantics. lambda-letrec-terms are interpreted as first-order term graphs so that unfolding equivalence between terms is preserved and reflected through bisimilarity of the term graph interpretations. Compactness of the term graphs can then be compared via functional bisimulation. We describe practical and efficient methods for the following two problems: transforming a lambda-letrec-term into a maximally compact form; and deciding whether two lambda-letrec-terms are unfolding-equivalent. The transformation of a lambda-letrec-term L into maximally compact form L_0 proceeds in three steps: (i) translate L into its term graph G = [[L]]; (ii) compute the maximally shared form of G as its bisimulation collapse G_0; (iii) read back a lambda-letrec-term L_0 from the term graph G_0 with the property [[L_0]] = G_0. This guarantees that L_0 and L have the same unfolding, and that L_0 exhibits maximal sharing. The procedure for deciding whether two given lambda-letrec-terms L_1 and L_2 are unfolding-equivalent computes their term graph interpretations [[L_1]] and [[L_2]], and checks whether these term graphs are bisimilar. For illustration, we also provide a readily usable implementation.
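    Step (ii), the bisimulation collapse, can be sketched as partition refinement. The Haskell fragment below is a simplified illustration under an assumed graph representation (the Graph type and collapse are invented here, not the paper's implementation): all nodes start in one class, and classes are split by their (label, child classes) signature until the partition is stable; nodes left in the same class are bisimilar and can be merged to obtain the maximally shared form.

        import Data.List (nub)
        import qualified Data.Map as Map

        -- Hypothetical first-order term graph: node id -> (label, children).
        type Node  = Int
        type Graph = Map.Map Node (String, [Node])

        -- Naive partition refinement: after k rounds two nodes share a class
        -- iff they are bisimilar up to depth k; |nodes| rounds suffice.
        collapse :: Graph -> Map.Map Node Int
        collapse g = iterate refine start !! Map.size g
          where
            start = fmap (const 0) g
            refine cls =
              let sig n = let (lbl, cs) = g Map.! n
                          in (lbl, map (cls Map.!) cs)
                  numbering = Map.fromList (zip (nub (map sig (Map.keys g))) [0 ..])
              in Map.fromList [ (n, numbering Map.! sig n) | n <- Map.keys g ]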

    Within ARM's reach: compilation of left-linear rewrite systems via minimal rewrite systems

    A new compilation technique for left-linear term rewriting systems is presented, in which rewrite rules are transformed into so-called minimal rewrite rules. These minimal rules have such a simple form that they can be viewed as instructions for an abstract rewriting machine (ARM).
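    The flavour of such a transformation can be illustrated as follows; the decomposition and the auxiliary symbol add' below are hypothetical and only gesture at the idea, not the paper's exact rule format. A nested left-linear pattern is split into rules that each inspect at most one constructor, so every rule corresponds to a single machine-level step.

        -- Hypothetical illustration: the nested rule
        --     add(S(x), y) -> S(add(x, y))
        -- is split so each rule matches one constructor or builds one symbol,
        -- using a fresh auxiliary symbol add'.
        data Term = Var String | App String [Term]

        step :: Term -> Maybe Term
        step (App "add" [App "S" [x], y]) = Just (App "add'" [x, y])           -- inspect S
        step (App "add'" [x, y])          = Just (App "S" [App "add" [x, y]])  -- build S
        step _                            = Nothing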

    Extending Context-Sensitivity in Term Rewriting

    We propose a generalized version of context-sensitivity in term rewriting based on the notion of "forbidden patterns". The basic idea is that a rewrite step should be forbidden if the redex to be contracted has a certain shape and appears in a certain context; this shape and context are expressed through forbidden patterns. In particular, we analyze the relationships between this novel approach and the commonly used notion of context-sensitivity in term rewriting, as well as the feasibility of rewriting with forbidden patterns from a computational point of view. The latter feasibility is characterized by demanding that restricting a rewrite relation yields improved termination behaviour while still being powerful enough to compute meaningful results. Sufficient criteria for both kinds of properties in certain classes of rewrite systems with forbidden patterns are presented.
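    Operationally, rewriting with forbidden patterns amounts to an admissibility check before each contraction. The Haskell sketch below is one possible way to phrase it (the Forbidden type, matches, and allowed are invented here, not the paper's formalization): a forbidden pattern pairs a context shape with the position at which contraction is disallowed, and a step is permitted only if no forbidden pattern applies.

        -- Hypothetical term and position types.
        data Term = Var String | App String [Term]

        type Pos = [Int]  -- path of argument indices from the root

        -- A forbidden pattern: a context shape plus the position within it
        -- at which rewriting is disallowed.
        data Forbidden = Forbidden { fpat :: Term, fpos :: Pos }

        -- First-order matching; variables in the pattern match anything.
        matches :: Term -> Term -> Bool
        matches (Var _)    _          = True
        matches (App f ps) (App g ts) =
          f == g && length ps == length ts && and (zipWith matches ps ts)
        matches _          _          = False

        -- A step at position p in term t is allowed unless some forbidden
        -- pattern matches t with its designated position equal to p.
        allowed :: [Forbidden] -> Term -> Pos -> Bool
        allowed fps t p =
          not (any (\fp -> fpos fp == p && matches (fpat fp) t) fps)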