    Loops under Strategies ... Continued

    While there are many approaches for automatically proving termination of term rewrite systems, up to now only a few techniques exist to disprove their termination automatically. Almost all of these techniques try to find loops, where the existence of a loop implies non-termination of the rewrite system. However, most programming languages use specific evaluation strategies, whereas loop detection techniques usually do not take strategies into account. So even if a rewrite system has a loop, it may still be terminating under certain strategies. Therefore, our goal is to develop decision procedures which can determine whether a given loop is also a loop under the respective evaluation strategy. In earlier work, such procedures were presented for the strategies of innermost, outermost, and context-sensitive evaluation. In the current paper, we build upon this work and develop such decision procedures for important strategies like leftmost-innermost, leftmost-outermost, (max-)parallel-innermost, (max-)parallel-outermost, and forbidden patterns (which generalize innermost, outermost, and context-sensitive strategies). In this way, we obtain the first approach to disprove termination under these strategies automatically.
    Comment: In Proceedings IWS 2010, arXiv:1012.533
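    To see why a loop need not contradict termination under a strategy, consider the classical three-rule system (a textbook example, not taken from the paper): f(a, b, x) -> f(x, x, x), g -> a, g -> b. It admits the loop f(a, b, g) -> f(g, g, g) -> f(a, g, g) -> f(a, b, g), yet leftmost-innermost evaluation must contract g first and always terminates. The Haskell sketch below (all names are ours) implements leftmost-innermost rewriting for this system:

        data Term = F Term Term Term | G | A | B deriving (Eq, Show)

        -- Rewriting at the root; for g we fix the rule g -> a here
        -- (the loop above additionally needs g -> b).
        root :: Term -> Maybe Term
        root (F A B x) = Just (F x x x)
        root G         = Just A
        root _         = Nothing

        -- One leftmost-innermost step: try the subterms, left to right,
        -- before attempting a root step.
        liStep :: Term -> Maybe Term
        liStep t@(F x y z) =
          case (liStep x, liStep y, liStep z) of
            (Just x', _, _) -> Just (F x' y z)
            (_, Just y', _) -> Just (F x y' z)
            (_, _, Just z') -> Just (F x y z')
            _               -> root t
        liStep t = root t

        -- Full leftmost-innermost reduction sequence from a term.
        normalise :: Term -> [Term]
        normalise t = t : maybe [] normalise (liStep t)

        -- ghci> normalise (F A B G)
        -- [F A B G,F A B A,F A A A]   -- leftmost-innermost terminates,
        --                             -- although the system has a loop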

    From nominal to higher-order rewriting and back again

    We present a translation function from nominal rewriting systems (NRSs) to combinatory reduction systems (CRSs), transforming closed nominal rules and ground nominal terms to CRS rules and terms, respectively, while preserving the rewriting relation. We also provide a reduction-preserving translation in the other direction, from CRSs to NRSs, improving over a previously defined translation. These tools, together with existing translations between CRSs and other higher-order rewriting formalisms, open the way for a transfer of results between higher-order and nominal rewriting. In particular, techniques and properties of the rewriting relation, such as termination, can be exported from one formalism to the other.
    Comment: 41 pages, journal version
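    As a flavour of what such a translation must handle, consider the beta-rule (our own illustrative rendering, not the paper's exact definitions). A closed nominal rule uses an explicit substitution symbol and an abstraction [a]X over the atom a, whereas the CRS version uses a metavariable applied to the bound variable:

        NRS:  app(lam([a]X), Y)    -> sub([a]X, Y)
        CRS:  app(lam([a]Z(a)), Y) -> Z(Y)

    The translation has to mediate between these two treatments of binding and substitution while preserving every rewrite step.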

    Reinterpreting Compression in Infinitary Rewriting

    Departing from a computational interpretation of compression in infinitary rewriting, we view compression as a degenerate case of standardisation. The change in perspective comes about via two observations: (a) no compression property can be recovered for non-left-linear systems and (b) some standardisation procedures, as a ‘side-effect’, yield compressed reductions.
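    A standard counterexample behind observation (a) (a folklore one; the paper may use a different example) is the non-left-linear system

        a -> g(a)     b -> g(b)     f(x, x) -> c

    Starting from f(a, b), ω steps yield f(t, t) where t is the infinite term g(g(g(...))), and one further step gives c, so the reduction has length ω + 1. It cannot be compressed to length at most ω: after any finite number of steps the two arguments have the shapes g^n(a) and g^m(b), which are never equal, so the rule f(x, x) -> c is not applicable at any finite stage.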

    Higher-dimensional normalisation strategies for acyclicity

    We introduce acyclic polygraphs, a notion of complete categorical cellular model for (small) categories, containing generators, relations and higher-dimensional globular syzygies. We give a rewriting method to construct explicit acyclic polygraphs from convergent presentations. For that, we introduce higher-dimensional normalisation strategies, defined as homotopically coherent ways to relate each cell of a polygraph to its normal form, and we then prove that acyclicity is equivalent to the existence of a normalisation strategy. Using acyclic polygraphs, we define a higher-dimensional homotopical finiteness condition for higher categories which extends Squier's finite derivation type for monoids. We relate this homotopical property to a new homological finiteness condition that we introduce here.
    Comment: Final version
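    At the bottom dimension, a normalisation strategy simply assigns to every element its normal form under a convergent presentation. As a drastically simplified illustration (ours, far below the paper's higher-dimensional setting), take the free commutative monoid on {a, b}, presented by the single convergent rule ba -> ab; the induced strategy sends every word to its sorted normal form:

        -- One rewrite step with the rule "ba" -> "ab", applied at the
        -- leftmost possible position; Nothing means the word is a normal form.
        step :: String -> Maybe String
        step ('b':'a':w) = Just ('a':'b':w)
        step (c:w)       = (c:) <$> step w
        step []          = Nothing

        -- The (0-dimensional) normalisation strategy: rewrite to normal form.
        nf :: String -> String
        nf w = maybe w nf (step w)

        -- ghci> nf "babba"
        -- "aabbb"

    The paper's strategies additionally record, coherently in all higher dimensions, how each cell is related to its normal form.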

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomially bounded overhead in time. Is lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
    Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
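    One simple family exhibiting size explosion (a folklore construction; the paper uses a related one) takes a free variable c acting as a pairing constructor:

        δ := λx. c x x        t_n := δ (δ (... (δ y) ...))    (n occurrences of δ)

    The term t_n has size O(n), and contracting the innermost β-redex at each step yields, after exactly n steps, a complete binary tree of c's of size Ω(2^n). Any cost model charging one unit per β-step therefore cannot even afford to write the output down explicitly, which is why sharing enters the picture.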

    On the confluence of lambda-calculus with conditional rewriting

    The confluence of untyped λ-calculus with unconditional rewriting is now well understood. In this paper, we investigate the confluence of λ-calculus with conditional rewriting and provide general results in two directions. First, when conditional rules are algebraic. This extends results of Müller and Dougherty for unconditional rewriting. Two cases are considered, depending on whether β-reduction is allowed in the evaluation of conditions. Moreover, Dougherty's result is improved from the assumption of strongly normalizing β-reduction to weakly normalizing β-reduction. We also provide examples showing that outside these conditions, modularity of confluence is difficult to achieve. Second, we go beyond the algebraic framework and obtain new confluence results using a restricted notion of orthogonality that takes advantage of the conditional part of rewrite rules.
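    A generic example of an algebraic conditional rewrite system (ours, purely for illustration) is

        min(x, y) -> x  <=  leq(x, y) ->* true
        min(x, y) -> y  <=  leq(y, x) ->* true

    The two modes of condition evaluation differ on instances such as x := (λz. z) 0 and y := s(0): closing the condition leq((λz. z) 0, s(0)) ->* true requires a β-step before the algebraic leq-rules (assumed to pattern-match on 0 and s) can fire, so the rule applies only in the mode where β-reduction is allowed in conditions.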

    The Sigma-Semantics: A Comprehensive Semantics for Functional Programs

    A comprehensive semantics for functional programs is presented, which generalizes the well-known call-by-value and call-by-name semantics. By permitting a separate choice between call-by-value and call-by-name for every argument position of every function, and by parameterizing the semantics by this choice, we abstract from the parameter-passing mechanism. Thus common and distinguishing features of all instances of the sigma-semantics, especially the call-by-value and call-by-name semantics, are highlighted. Furthermore, a property can be validated for all instances of the sigma-semantics by a single proof. This is employed for proving the equivalence of the given denotational (fixed-point based) and two operational (reduction based) definitions of the sigma-semantics. We present and apply means for very simple proofs of equivalence with the denotational sigma-semantics for a large class of reduction-based sigma-semantics. Our basis consists of simple first-order constructor-based functional programs with patterns.
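    In Haskell terms (a loose analogy of ours, not the paper's formalism), the per-position choice can be mimicked by forcing exactly those argument positions declared call-by-value:

        -- Call-by-name in both argument positions: x may never be demanded.
        f :: Integer -> Integer -> Integer
        f x y = if y > 0 then y else x

        -- The sigma-choice "call-by-value in position 1, call-by-name in
        -- position 2" is mimicked by forcing the first argument up front.
        fCbvCbn :: Integer -> Integer -> Integer
        fCbvCbn x y = x `seq` f x y

        -- ghci> f undefined 1        -- 1 (x is never demanded)
        -- ghci> fCbvCbn undefined 1  -- *** Exception: Prelude.undefined

    The two instances already disagree on whether this call is defined, which is exactly the kind of difference the sigma-semantics uniformly accounts for.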