    Decreasing Diagrams for Confluence and Commutation

    Like termination, confluence is a central property of rewrite systems. Unlike for termination, however, there exists no known complexity hierarchy for confluence. In this paper we investigate whether the decreasing diagrams technique can be used to obtain such a hierarchy. The decreasing diagrams technique is one of the strongest and most versatile methods for proving confluence of abstract rewrite systems. It is complete for countable systems, and it has many well-known confluence criteria as corollaries. So what makes decreasing diagrams so powerful? In contrast to other confluence techniques, decreasing diagrams employ a labelling of the steps with labels from a well-founded order in order to conclude confluence of the underlying unlabelled relation. Hence it is natural to ask how the size of the label set influences the strength of the technique. In particular, what class of abstract rewrite systems can be proven confluent using decreasing diagrams restricted to 1 label, 2 labels, 3 labels, and so on? Surprisingly, we find that two labels suffice for proving confluence for every abstract rewrite system having the cofinality property, thus in particular for every confluent, countable system. Secondly, we show that this result stands in sharp contrast to the situation for commutation of rewrite relations, where the hierarchy does not collapse. Thirdly, investigating the possibility of a confluence hierarchy, we determine the first-order (non-)definability of the notion of confluence and related properties, using techniques from finite model theory. We find that in particular Hanf's theorem is fruitful for elegant proofs of undefinability of properties of abstract rewrite systems.
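    The condition can be made concrete for finite systems. The following minimal sketch (ours, not the paper's; it assumes labels are natural numbers under their usual well-founded order, and all names are our own) checks van Oostrom's local decreasingness condition for a small labelled abstract rewrite system; by the decreasing diagrams theorem, success implies confluence of the underlying unlabelled relation:

```python
# A labelled abstract rewrite system: steps are (source, label, target),
# with labels drawn from the natural numbers (a well-founded order).
steps = {("a", 1, "b"), ("a", 2, "c"), ("b", 2, "d"), ("c", 1, "d")}

def reach(nodes, allowed):
    """Reflexive-transitive closure: everything reachable from `nodes`
    using only steps whose label satisfies the predicate `allowed`."""
    seen, frontier = set(nodes), set(nodes)
    while frontier:
        frontier = {t for (s, l, t) in steps
                    if s in frontier and allowed(l)} - seen
        seen |= frontier
    return seen

def side(b, alpha, beta):
    """One side of a decreasing diagram for a peak b <-alpha- a -beta-> c:
    the targets of  b ->*_{<alpha} . ->=_beta . ->*_{<alpha or <beta}."""
    first = reach({b}, lambda l: l < alpha)
    opt = first | {t for (s, l, t) in steps if s in first and l == beta}
    return reach(opt, lambda l: l < alpha or l < beta)

def locally_decreasing():
    """Every local peak must close in a decreasing diagram; by van
    Oostrom's theorem this implies confluence of the unlabelled system."""
    peaks = [(b, al, c, be)
             for (a1, al, b) in steps for (a2, be, c) in steps if a1 == a2]
    return all(side(b, al, be) & side(c, be, al) for (b, al, c, be) in peaks)

print(locally_decreasing())  # True: the peak b <- a -> c closes at d
```

    Note that the example uses only two labels, in line with the collapse result stated in the abstract.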

    Proving Looping and Non-Looping Non-Termination by Finite Automata

    A new technique is presented to prove non-termination of term rewriting. The basic idea is to find a non-empty regular language of terms that is closed under rewriting and does not contain normal forms. It is automated by representing the language by a tree automaton with a fixed number of states, and expressing the mentioned requirements as a SAT formula. Satisfiability of this formula implies non-termination. Our approach succeeds for many examples where all earlier techniques fail, for instance for the S-rule from combinatory logic.
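    To illustrate the shape of such a non-termination certificate, here is a much-simplified sketch (ours; it uses string rewriting in place of the paper's tree automata over terms, and an exhaustive bounded check in place of a SAT encoding). A language closed under rewriting and free of normal forms witnesses non-termination, since every one of its words starts an infinite reduction:

```python
from itertools import product

# String rewriting system: the single (non-terminating) rule  a -> aa.
rules = [("a", "aa")]
alphabet = "ab"

def rewrites(s):
    """All one-step rewrites of the string s."""
    out = []
    for lhs, rhs in rules:
        i = s.find(lhs)
        while i != -1:
            out.append(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def in_lang(s):
    """Candidate regular language L = a+ (nonempty strings of a's)."""
    return len(s) > 0 and set(s) == {"a"}

def check(max_len=8):
    """Evidence (a bounded check, not a proof) that L is closed under
    rewriting and contains no normal forms; together these two
    properties imply non-termination of the rewrite system."""
    for n in range(1, max_len + 1):
        for s in map("".join, product(alphabet, repeat=n)):
            if in_lang(s):
                succs = rewrites(s)
                assert succs, f"normal form in L: {s}"
                assert all(in_lang(t) for t in succs), f"L not closed at {s}"
    return True

print(check())  # True
```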

    Polynomial Path Orders: A Maximal Model

    This paper is concerned with the automated complexity analysis of term rewrite systems (TRSs for short) and their ramifications in implicit computational complexity theory (ICC for short). We introduce a novel path order with multiset status, the polynomial path order POP*. Essentially relying on the principle of predicative recursion as proposed by Bellantoni and Cook, its distinct feature is the tight control of resources on compatible TRSs: the (innermost) runtime complexity of compatible TRSs is polynomially bounded. We have implemented the technique; as underpinned by our experimental evidence, our approach to automated runtime complexity analysis is not only feasible but, compared to existing methods, remarkably fast. As an application in the context of ICC, we provide an order-theoretic characterisation of the polytime computable functions. To be precise, the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP*.
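    POP* itself is an intricate syntactic order; the sketch below (ours, with hypothetical names) only illustrates the quantity it bounds, namely innermost runtime complexity: the number of innermost rewrite steps starting from a basic term f(data, ...). The unary-addition TRS used here is a textbook predicative-recursive system and needs only n+1 steps on the numeral s^n(0):

```python
# Terms are tuples (symbol, arg1, ..., argn); num(n) builds s^n(0).
def num(n):
    t = ("0",)
    for _ in range(n):
        t = ("s", t)
    return t

def step(t):
    """One innermost rewrite step, or None if t is a normal form.
    Rules: add(0, y) -> y,  add(s(x), y) -> s(add(x, y))."""
    if t[0] == "add":
        x, y = t[1], t[2]
        for i, sub in ((1, x), (2, y)):      # innermost: arguments first
            r = step(sub)
            if r is not None:
                return t[:i] + (r,) + t[i + 1:]
        if x[0] == "0":
            return y
        if x[0] == "s":
            return ("s", ("add", x[1], y))
    elif t[0] == "s":
        r = step(t[1])
        return None if r is None else ("s", r)
    return None

def runtime(t):
    """Length of the innermost rewrite sequence from t to normal form."""
    n = 0
    while (t := step(t)) is not None:
        n += 1
    return n

print(runtime(("add", num(3), num(4))))  # 4 steps: linear in n
```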

    Complexity Hierarchies and Higher-Order Cons-Free Rewriting

    Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of constructor terms in the left-hand side; the computational intuition is that rules cannot build new data structures. It is well-known that cons-free programming languages can be used to characterize computational complexity classes, and that cons-free first-order term rewriting can be used to characterize the set of polynomial-time decidable sets. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the order of the types used in the systems. We prove that, for every k ≥ 1, left-linear cons-free systems with type order k characterize E^kTIME if arbitrary evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with possible rule overlaps and no assumptions about reduction strategy, and (ii) results for such term rewriting systems have previously only been obtained for k = 1, and with additional syntactic restrictions on top of cons-freeness and left-linearity. Our results are apparently among the first implicit characterizations of the hierarchy E = E^1TIME ⊆ E^2TIME ⊆ ⋯. Our work confirms prior results that having full non-determinism (via overlaps of rules) does not directly allow characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes such as admitting product types or non-left-linear rules.
    Comment: Extended version (with appendices) of a paper published in FSCD 201
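    As a rough illustration of the cons-free restriction (a sketch under our own untyped, first-order simplification; the paper's definition covers typed higher-order systems), the following checker tests that no right-hand side contains a constructor term that does not already occur in the left-hand side:

```python
# Terms: a variable is an uppercase string; an application is a tuple
# (symbol, arg1, ..., argn). Cons-freeness (roughly, per the abstract):
# every constructor term in a right-hand side must already occur as a
# subterm of the left-hand side, so rules cannot build new data.

CONSTRUCTORS = {"0", "s", "nil", "cons"}

def subterms(t):
    yield t
    if isinstance(t, tuple):
        for arg in t[1:]:
            yield from subterms(arg)

def cons_free(lhs, rhs):
    lhs_subs = set(subterms(lhs))
    return all(t in lhs_subs
               for t in subterms(rhs)
               if isinstance(t, tuple) and t[0] in CONSTRUCTORS)

# head(cons(X, Y)) -> X only reuses data from the left-hand side:
print(cons_free(("head", ("cons", "X", "Y")), "X"))   # True
# dup(X) -> cons(X, X) builds a fresh cons cell:
print(cons_free(("dup", "X"), ("cons", "X", "X")))    # False
```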

    A rewriting approach to binary decision diagrams

    Binary decision diagrams (BDDs) provide an established technique for propositional formula manipulation. In this paper, we present the basic BDD theory by means of standard rewriting techniques. Since a BDD is a DAG rather than a tree, we need a notion of shared rewriting, for which we develop the appropriate theory. A rewriting system is presented by which canonical reduced ordered BDDs (ROBDDs) can be obtained, and for which uniqueness of the ROBDD representation is proved. Next, an alternative rewriting system is presented, suitable for actually computing ROBDDs from formulas. For this rewriting system a layerwise strategy is defined, and it is proved that when the classical apply-algorithm is replaced by layerwise rewriting, roughly the same complexity bound is reached as in the classical algorithm. Moreover, a layerwise innermost strategy is defined, and it is proved that the full classical algorithm for computing ROBDDs can be replaced by layerwise innermost rewriting without essentially affecting the complexity. Finally, a lazy strategy is proposed that sometimes performs much better than the traditional algorithm.
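    The two classical reduction rules behind ROBDD canonicity read naturally as rewrite rules: node(v, t, t) -> t (eliminate a redundant test) and identification of structurally equal nodes (sharing). The sketch below (ours; it builds ROBDDs by Shannon expansion rather than with the paper's layerwise strategies) applies them bottom-up, yielding a canonical DAG for a fixed variable order:

```python
unique = {}                              # (var, lo, hi) -> shared node

def mk(var, lo, hi):
    """Smart constructor applying both reduction rules."""
    if lo == hi:                         # (elim): redundant test vanishes
        return lo
    return unique.setdefault((var, lo, hi), (var, lo, hi))   # (share)

def build(formula, order, env=None):
    """ROBDD of a boolean function (a Python callable) via Shannon
    expansion along a fixed variable order."""
    env = env or {}
    if not order:
        return formula(**env)            # leaves are True / False
    v, rest = order[0], order[1:]
    return mk(v,
              build(formula, rest, {**env, v: False}),
              build(formula, rest, {**env, v: True}))

print(build(lambda x, y: x != y, ["x", "y"]))
# ('x', ('y', False, True), ('y', True, False)): canonical ROBDD of XOR
print(build(lambda x, y: x or (y and not y), ["x", "y"]))
# ('x', False, True): the redundant test on y was rewritten away
```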