
    12th International Workshop on Termination (WST 2012) : WST 2012, February 19–23, 2012, Obergurgl, Austria / ed. by Georg Moser

    This volume contains the proceedings of the 12th International Workshop on Termination (WST 2012), to be held February 19–23, 2012 in Obergurgl, Austria. The goal of the Workshop on Termination is to provide a venue for the presentation and discussion of all topics in and around termination. In this way, the workshop tries to bridge the gaps between the different communities interested and active in research in and around termination. The 12th International Workshop on Termination in Obergurgl continues the successful workshops held in St. Andrews (1993), La Bresse (1995), Ede (1997), Dagstuhl (1999), Utrecht (2001), Valencia (2003), Aachen (2004), Seattle (2006), Paris (2007), Leipzig (2009), and Edinburgh (2010). The workshop welcomed contributions on all aspects of termination and complexity analysis. Contributions from the imperative, constraint, functional, and logic programming communities, and papers investigating applications of complexity or termination analysis (for example in program transformation or theorem proving), were particularly welcome. We received 18 submissions, all of which were accepted; each paper was assigned two reviewers. In addition to these 18 contributed talks, WST 2012 hosts three invited talks by Alexander Krauss, Martin Hofmann, and Fausto Spoto.

    Polynomial Path Orders

    This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP*, that is applicable in two related, but distinct contexts. On the one hand POP* induces polynomial innermost runtime complexity and hence may serve as a syntactic, and fully automatable, method to analyse the innermost runtime complexity of term rewrite systems. On the other hand POP* provides an order-theoretic characterisation of the polytime computable functions: the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP*. Comment: LMCS version. This article supersedes arXiv:1209.379
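
    For orientation, a standard textbook constructor TRS of the kind such orders target (this example is not taken from the paper itself) is addition on unary numerals:

        add(0, y)    -> y
        add(s(x), y) -> s(add(x, y))

    Starting from a basic term add(s^n(0), s^m(0)), innermost rewriting reaches a normal form after n+1 steps, so the innermost runtime complexity of this system is linear and in particular polynomially bounded; systems of this shape are the intended targets of syntactic criteria such as POP*.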

    A Combination Framework for Complexity

    In this paper we present a combination framework for polynomial complexity analysis of term rewrite systems. The framework covers both derivational and runtime complexity analysis. We present generalisations of powerful complexity techniques, notably a generalisation of complexity pairs and (weak) dependency pairs. Finally, we also present a novel technique, called dependency graph decomposition, that in the dependency pair setting greatly increases modularity. We employ the framework in the automated complexity tool TCT. TCT implements a majority of the techniques found in the literature, witnessing that our framework is general enough to capture a very broad setting.
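
    As a rough illustration of the classical dependency-pair construction that underlies the (weak) dependency pairs mentioned above (a textbook example, not the paper's definition): for the addition rules add(0, y) -> y and add(s(x), y) -> s(add(x, y)), the defined symbol add is marked and the only dependency pair is

        add#(s(x), y) -> add#(x, y)

    since the right-hand side of the first rule contains no defined symbol. The dependency graph decomposition described in the abstract operates, in this setting, on graphs built from such pairs rather than on full rewrite sequences.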

    Tyrolean Complexity Tool: Features and Usage

    The Tyrolean Complexity Tool, TCT for short, is an open source complexity analyser for term rewrite systems. Our tool TCT features a majority of the known techniques for the automated characterisation of polynomial complexity of rewrite systems and can investigate derivational and runtime complexity, for full and innermost rewriting. This system description outlines features and provides a short introduction to the usage of TCT.
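
    As a hedged usage sketch (the exact input dialects and command-line options accepted by a given TCT release should be checked against its documentation), rewrite systems in this area are commonly written in the legacy TPDB text format, for instance the addition system shown earlier:

        (VAR x y)
        (STRATEGY INNERMOST)
        (STARTTERM CONSTRUCTOR-BASED)
        (RULES
          add(0, y)    -> y
          add(s(x), y) -> s(add(x, y))
        )

    Here the STRATEGY and STARTTERM annotations request innermost rewriting and runtime complexity (constructor-based start terms), respectively; run on such a file, a complexity analyser of this kind is expected to report an asymptotic upper bound, e.g. linear.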

    Polynomial Path Orders: A Maximal Model

    This paper is concerned with the automated complexity analysis of term rewrite systems (TRSs for short) and their ramifications in implicit computational complexity theory (ICC for short). We introduce a novel path order with multiset status, the polynomial path order POP*. Essentially relying on the principle of predicative recursion as proposed by Bellantoni and Cook, its distinctive feature is the tight control of resources on compatible TRSs: the (innermost) runtime complexity of compatible TRSs is polynomially bounded. We have implemented the technique; as underpinned by our experimental evidence, our approach to automated runtime complexity analysis is not only feasible but, compared to existing methods, remarkably fast. As an application in the context of ICC we provide an order-theoretic characterisation of the polytime computable functions. To be precise, the polytime computable functions are exactly the functions computable by an orthogonal constructor TRS compatible with POP*.
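
    To make the predicative-recursion principle concrete, here is a small Haskell sketch of Bellantoni–Cook style safe recursion. This is an illustration of the general scheme, not code from the paper; the names safeRec, add and the Nat type are ours, and Haskell's type system does not actually enforce the normal/safe separation.

        -- Unary natural numbers.
        data Nat = Z | S Nat

        -- Safe (predicative) recursion: the first Nat argument is "normal" and
        -- drives the recursion; the second is "safe". The recursive result is
        -- only ever passed into a safe position of the step function h.
        safeRec :: (Nat -> r) -> (Nat -> Nat -> r -> r) -> Nat -> Nat -> r
        safeRec g _ Z     a = g a
        safeRec g h (S y) a = h y a (safeRec g h y a)

        -- Addition defined by safe recursion: recursion on the normal argument,
        -- with the recursive call's value used only under a constructor.
        add :: Nat -> Nat -> Nat
        add = safeRec id (\_ _ r -> S r)

    Restricting all recursions to this discipline keeps the definable functions within polynomial time by Bellantoni and Cook's result; POP* imposes an analogous discipline on rewrite rules order-theoretically.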

    On Sharing, Memoization, and Polynomial Time (Long Version)

    We study how the adoption of an evaluation mechanism with sharing and memoization impacts the class of functions which can be computed in polynomial time. We first show that a natural cost model, in which looking up an already computed value has no cost, is indeed invariant. As a corollary, we then prove that the most general notion of ramified recurrence is sound for polynomial time, thereby settling an open problem in implicit computational complexity.
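
    As a familiar, deliberately simple illustration of the kind of evaluation mechanism in question (the function names are ours and this is not the paper's construction), the following Haskell sketch evaluates with memoization: each argument value is computed once, and later occurrences are answered from a table. Under a cost model that charges little or nothing for such lookups, the exponential call tree of the naive definition collapses to linearly many computed entries.

        import qualified Data.Map as M
        import Control.Monad.State

        -- Memoized Fibonacci: each value is computed at most once; repeated
        -- calls with the same argument are answered from the table.
        fibMemo :: Integer -> State (M.Map Integer Integer) Integer
        fibMemo n
          | n < 2     = return n
          | otherwise = do
              table <- get
              case M.lookup n table of
                Just v  -> return v              -- cheap lookup of a shared result
                Nothing -> do
                  a <- fibMemo (n - 1)
                  b <- fibMemo (n - 2)
                  let v = a + b
                  modify (M.insert n v)
                  return v

        fib :: Integer -> Integer
        fib n = evalState (fibMemo n) M.empty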

    Analogical transfer of verification proofs for state-based specifications

    The amount of user interaction is the prime cause of costs in interactive program verification. This paper describes an internal analogy technique that reuses subproofs in the verification of state-based specifications. It identifies common patterns of subproofs and their justifications in order to reuse these subproofs; thus, significant savings in the number of user interactions in a verification proof are achievable.

    On Constructor Rewrite Systems and the Lambda Calculus

    We prove that orthogonal constructor term rewrite systems and lambda-calculus with weak (i.e., no reduction is allowed under the scope of a lambda-abstraction) call-by-value reduction can simulate each other with a linear overhead. In particular, weak call-by-value beta-reduction can be simulated by an orthogonal constructor term rewrite system in the same number of reduction steps. Conversely, each reduction in a term rewrite system can be simulated by a constant number of beta-reduction steps. This is relevant to implicit computational complexity, because the number of beta steps to normal form is polynomially related to the actual cost (that is, as performed on a Turing machine) of normalization, under weak call-by-value reduction. Orthogonal constructor term rewrite systems and lambda-calculus are thus both polynomially related to Turing machines, taking as notion of cost their natural parameters. Comment: 27 pages. arXiv admin note: substantial text overlap with arXiv:0904.412
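
    For readers unfamiliar with the terminology, the following Haskell sketch pins down what "weak call-by-value reduction" means operationally. It is a standard definition written out for illustration; the datatype and function names are ours, and capture-avoiding substitution is elided by assuming closed programs.

        data Term = Var String | Lam String Term | App Term Term
          deriving Show

        -- Naive substitution; adequate here because we only substitute closed
        -- values, so variable capture cannot occur.
        subst :: String -> Term -> Term -> Term
        subst x v (Var y)   | x == y    = v
                            | otherwise = Var y
        subst x v (Lam y b) | x == y    = Lam y b
                            | otherwise = Lam y (subst x v b)
        subst x v (App t u) = App (subst x v t) (subst x v u)

        -- Weak call-by-value evaluation: the argument is reduced to a value
        -- before substitution, and no reduction happens under a lambda.
        eval :: Term -> Term
        eval (App t u) =
          case eval t of
            Lam x body -> eval (subst x (eval u) body)
            t'         -> App t' (eval u)   -- stuck (head is not an abstraction)
        eval t = t                          -- variables and abstractions are values

    Counting the beta-steps performed by such an evaluator gives the kind of cost measure that the paper relates, up to a linear overhead, to reduction steps in an orthogonal constructor TRS.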