
    Polynomial Path Orders: A Maximal Model

    This paper is concerned with the automated complexity analysis of term rewrite systems (TRSs for short) and the ramifications thereof in implicit computational complexity theory (ICC for short). We introduce a novel path order with multiset status, the polynomial path order POP*. POP* relies essentially on the principle of predicative recursion as proposed by Bellantoni and Cook; its distinctive feature is the tight control of resources on compatible TRSs: the (innermost) runtime complexity of compatible TRSs is polynomially bounded. We have implemented the technique; as our experimental evidence shows, our approach to automated runtime complexity analysis is not only feasible but, compared to existing methods, remarkably fast. As an application in the context of ICC we provide an order-theoretic characterisation of the polytime computable functions. To be precise, the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP*.
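
    For orientation, a minimal sketch of the kind of system POP* is designed for (the standard constructor TRS for addition and multiplication on unary numerals; an illustrative example, not taken from the paper):

        add(0, y)     -> y
        add(s(x), y)  -> s(add(x, y))
        mult(0, y)    -> 0
        mult(s(x), y) -> add(y, mult(x, y))

    Each rule recurses on a structurally smaller first argument and never duplicates a recursive result, in the spirit of Bellantoni-Cook predicative recursion; the innermost runtime of this system is polynomially (here quadratically) bounded in the size of the start term.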

    Polynomial Path Orders

    This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramifications in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP*, which is applicable in two related but distinct contexts. On the one hand, POP* induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand, POP* provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP*. Comment: LMCS version. This article supersedes arXiv:1209.379
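
    By way of contrast, a hedged non-example (again illustrative, not taken from the paper): a system that feeds its own recursive result into another defined symbol escapes every polynomial runtime bound and is therefore not POP*-orientable:

        exp(0)    -> s(0)
        exp(s(x)) -> dbl(exp(x))
        dbl(0)    -> 0
        dbl(s(x)) -> s(s(dbl(x)))

    Here exp computes 2^n on the numeral s^n(0), so its innermost runtime is exponential; since compatibility with POP* certifies polynomial runtime, no instance of POP* can orient these rules.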

    Derivational Complexity and Context-Sensitive Rewriting

    Context-sensitive rewriting is a restriction of rewriting where reduction steps are allowed only on specific arguments mu(f) ⊆ {1, ..., k} of k-ary function symbols f. Terms which cannot be further rewritten in this way are called mu-normal forms. For left-linear term rewriting systems (TRSs), the so-called normalization via mu-normalization procedure provides a systematic way to obtain normal forms by the stepwise computation and combination of intermediate mu-normal forms. In this paper, we show how to obtain bounds on the derivational complexity of computations using this procedure from bounds on the derivational complexity of context-sensitive rewriting. Two main applications are envisaged: normalization via mu-normalization can be used with non-terminating TRSs for which the procedure still terminates; on the other hand, it can be used to improve bounds on the derivational complexity of terminating TRSs, as it discards many rewritings. Partially supported by the EU (FEDER) and projects RTI2018-094403-B-C32 and PROMETEO/2019/098. Lucas Alba, S. (2021). Derivational Complexity and Context-Sensitive Rewriting. Journal of Automated Reasoning 65(8):1191-1229. https://doi.org/10.1007/s10817-021-09603-1
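
    A hedged illustration of the restriction (a textbook-style example, not taken from the paper): with the replacement map mu(if) = {1}, rewriting inside the second and third arguments of if is forbidden, so a diverging branch is never entered:

        if(true, x, y)  -> x
        if(false, x, y) -> y
        loop            -> loop

    Under unrestricted rewriting the term if(true, 0, loop) admits an infinite reduction inside its third argument, but the context-sensitive reduction if(true, 0, loop) -> 0 terminates in one step, and 0 is a mu-normal form; this is exactly the situation in which normalization via mu-normalization can succeed on a non-terminating TRS.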

    Proof Theory at Work: Complexity Analysis of Term Rewrite Systems

    This thesis is concerned with investigations into the "complexity of term rewriting systems". Moreover, the majority of the presented work deals with the "automation" of such a complexity analysis. The aim of this introduction is to present the main ideas in an easily accessible fashion, making the results accessible to a general audience. Necessarily, some technical points are stated in an over-simplified way. Comment: Cumulative Habilitation Thesis, submitted to the University of Innsbruck

    The exact hardness of deciding derivational and runtime complexity

    For any class C of computable total functions satisfying some mild conditions, we prove that the following decision problems are complete for the existential part of the second level of the arithmetical hierarchy: (A) deciding whether a term rewriting system (TRS for short) has runtime complexity bounded by a function in C; (B) deciding whether a TRS has derivational complexity bounded by a function in C. In particular, the problems of deciding whether a TRS has polynomially (exponentially) bounded runtime complexity (respectively derivational complexity) are complete for this level of the arithmetical hierarchy. This places deciding polynomial derivational or runtime complexity of TRSs at the same level as deciding nontermination or nonconfluence of TRSs. We proceed to show that the related problem of deciding, for a single computable function f, whether a TRS has runtime complexity bounded from above by f is complete for the universal part of the first level of the arithmetical hierarchy. We further prove that analysing the implicit complexity of TRSs is even more difficult: the problem of deciding whether a TRS accepts a language of terms accepted by some TRS with runtime complexity bounded by a function in C is complete for the existential part of the third level of the arithmetical hierarchy. All of our results extend easily to the notion of minimal complexity (where the length of shortest reductions to normal form is considered) and remain valid under any computable reduction strategy. Finally, all results hold both for unrestricted TRSs and for the class of orthogonal TRSs.
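
    To fix notation (a standard reading of the hierarchy levels named above; the classification itself is the one stated in the abstract):

        Σ^0_2-complete: (A) runtime complexity bounded by some function in C
                        (B) derivational complexity bounded by some function in C
        Π^0_1-complete: runtime complexity bounded by one fixed computable f
        Σ^0_3-complete: accepting a language accepted by some TRS with runtime
                        complexity bounded by a function in C

    Here Σ^0_2 consists of properties expressible as "∃x ∀y. P(x, y)" with decidable P, Π^0_1 of those expressible as "∀y. P(y)", and Σ^0_3 adds one further quantifier alternation ("∃∀∃").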

    Synthesis of sup-interpretations: a survey

    In this paper, we survey the complexity of distinct methods that allow the programmer to synthesize a sup-interpretation, a function providing an upper bound on the size of the output values computed by a program. It is a static space analysis tool that does not take time consumption into account. Although clearly related, sup-interpretation is independent of termination, since it only provides an upper bound on the terminating computations. First, we study some undecidable properties of sup-interpretations from a theoretical point of view. Next, we fix term rewriting systems as our computational model and we show that a sup-interpretation can be obtained through the use of a well-known termination technique, polynomial interpretations. The drawback is that such a method only applies to total functions (strongly normalizing programs). To overcome this problem we also study sup-interpretations through the notion of quasi-interpretation. Quasi-interpretations also suffer from a drawback that lies in the subterm property, which drastically restricts the shape of the considered functions. Again we overcome this problem by introducing a new notion of interpretation based mainly on the dependency pair method. We study the decidability and complexity of the sup-interpretation synthesis problem for all three of these tools over sets of polynomials. Finally, we build on previous work on termination and runtime complexity to infer sup-interpretations. Comment: (2012)
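
    A minimal sketch of the polynomial-interpretation route (interpretation values chosen here purely for illustration):

        [0]         = 1
        [s](x)      = x + 1
        [add](x, y) = 2x + y

    This monotone interpretation orients both rules add(0, y) -> y and add(s(x), y) -> s(add(x, y)) strictly (2 + y > y, and 2x + y + 2 > 2x + y + 1), so it proves termination; at the same time, since the interpretation of a term never increases along a reduction and [s^n(0)] = n + 1, the polynomial 2x + y also bounds the size of the computed output, which is precisely the sup-interpretation reading.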

    Analysing Parallel Complexity of Term Rewriting

    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. Comment: Extended authors' accepted manuscript for a paper accepted for publication in the Proceedings of the 32nd International Symposium on Logic-based Program Synthesis and Transformation (LOPSTR 2022). 27 pages
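
    A hedged sketch of the model (an example in the style of the literature benchmarks, not necessarily one used in the paper): in parallel-innermost rewriting all innermost redexes are contracted simultaneously, so for

        size(nil)        -> 0
        size(tree(x, y)) -> s(plus(size(x), size(y)))

    the two recursive calls size(x) and size(y) fire in the same parallel step. The recursive descent then takes a number of parallel steps proportional to the height of the input tree, whereas a sequential innermost analysis charges one step per node; the subsequent plus computations add further steps in both readings.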

    On Complexity Bounds and Confluence of Parallel Term Rewriting

    We revisit parallel-innermost term rewriting as a model of parallel computation on inductive data structures and provide a corresponding notion of runtime complexity parametric in the size of the start term. We propose automatic techniques to derive both upper and lower bounds on the parallel complexity of rewriting that enable a direct reuse of existing techniques for sequential complexity. Our approach to finding lower bounds requires confluence of the parallel-innermost rewrite relation, so we also provide effective sufficient criteria for proving confluence. The applicability and the precision of the method are demonstrated by the relatively light effort in extending the program analysis tool AProVE and by experiments on numerous benchmarks from the literature. Comment: Under submission to Fundamenta Informaticae. arXiv admin note: substantial text overlap with arXiv:2208.0100
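
    One simple sufficient criterion in this spirit (stated here as a plausible instance; the paper's exact criteria may be more general): if the TRS is non-overlapping, then

        each innermost redex matches at most one rule
          => the parallel-innermost step relation is deterministic
          => the parallel-innermost rewrite relation is confluent

    since a deterministic rewrite relation trivially has the diamond property.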