
    Complexity Hierarchies and Higher-order Cons-free Term Rewriting

    Constructor rewriting systems are said to be cons-free if, roughly, constructor terms in the right-hand sides of rules are subterms of the left-hand sides; the computational intuition is that rules cannot build new data structures. In programming language research, cons-free languages have been used to characterize hierarchies of computational complexity classes; in term rewriting, cons-free first-order TRSs have been used to characterize the class PTIME. We investigate cons-free higher-order term rewriting systems, the complexity classes they characterize, and how these depend on the type order of the systems. We prove that, for every K ≥ 1, left-linear cons-free systems with type order K characterize E^KTIME if unrestricted evaluation is used (i.e., the system does not have a fixed reduction strategy). The main difference with prior work in implicit complexity is that (i) our results hold for non-orthogonal term rewriting systems with no assumptions on reduction strategy, (ii) we consequently obtain much larger classes for each type order (E^KTIME versus EXP^{K-1}TIME), and (iii) results for cons-free term rewriting systems have previously only been obtained for K = 1, and with additional syntactic restrictions besides cons-freeness and left-linearity. Our results are among the first implicit characterizations of the hierarchy E = E^1TIME ⊊ E^2TIME ⊊ ... Our work confirms prior results that having full non-determinism (via overlapping rules) does not directly allow for characterization of non-deterministic complexity classes like NE. We also show that non-determinism makes the classes characterized highly sensitive to minor syntactic changes like admitting product types or non-left-linear rules.
    Comment: extended version of a paper submitted to FSCD 2016. arXiv admin note: substantial text overlap with arXiv:1604.0893
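    As a minimal first-order illustration of cons-freeness (not an example taken from the paper), the following rules are cons-free: every constructor term on a right-hand side, such as cons(y, ys), already occurs as a subterm of the corresponding left-hand side.

        last(cons(x, nil))          → x
        last(cons(x, cons(y, ys)))  → last(cons(y, ys))

    By contrast, a rule like append(cons(x, xs), ys) → cons(x, append(xs, ys)) is not cons-free, because its right-hand side builds a new cons cell that does not occur in its left-hand side.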

    Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code

    This paper introduces Tiramisu, a polyhedral framework designed to generate high-performance code for multiple platforms including multicores, GPUs, and distributed machines. Tiramisu introduces a scheduling language with novel extensions to explicitly manage the complexities that arise when targeting these systems. The framework is designed for the areas of image processing, stencils, linear algebra, and deep learning. Tiramisu has two main features: it relies on a flexible representation based on the polyhedral model, and it has a rich scheduling language allowing fine-grained control of optimizations. Tiramisu uses a four-level intermediate representation that allows full separation between the algorithms, loop transformations, data layouts, and communication. This separation simplifies targeting multiple hardware architectures with the same algorithm. We evaluate Tiramisu by writing a set of image processing, deep learning, and linear algebra benchmarks and comparing them with state-of-the-art compilers and hand-tuned libraries. We show that Tiramisu matches or outperforms existing compilers and libraries on different hardware architectures, including multicore CPUs, GPUs, and distributed machines.
    Comment: arXiv admin note: substantial text overlap with arXiv:1803.0041
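    The central design point described above, writing the algorithm once and layering scheduling decisions on top of it, can be sketched as follows. This is a toy Python illustration of the idea only; the names (Computation, tile, parallelize, vectorize) are hypothetical and are not Tiramisu's actual C++ API.

        # Toy sketch of the algorithm/schedule separation described above.
        # The names here are hypothetical, NOT Tiramisu's actual C++ API.

        class Computation:
            def __init__(self, name, iterators, expr):
                self.name = name
                self.iterators = iterators      # loop variables of the algorithm
                self.expr = expr                # what to compute
                self.schedule = []              # how to compute it, kept separate

            def tile(self, i, j, bi, bj):
                self.schedule.append(("tile", i, j, bi, bj)); return self

            def parallelize(self, i):
                self.schedule.append(("parallelize", i)); return self

            def vectorize(self, i, width):
                self.schedule.append(("vectorize", i, width)); return self

        # Algorithm: a 2x2 box blur, written once.
        blur = Computation("blur", ("i", "j"),
                           "0.25*(in[i,j] + in[i+1,j] + in[i,j+1] + in[i+1,j+1])")

        # Schedule for a multicore CPU; a GPU or distributed target would reuse
        # the same algorithm with a different list of directives.
        blur.tile("i", "j", 32, 32).parallelize("i").vectorize("j", 8)

        print(blur.expr)       # the algorithm is unchanged...
        print(blur.schedule)   # ...only the schedule differs per target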

    Complexity Hierarchies Beyond Elementary

    We introduce a hierarchy of fast-growing complexity classes and show its suitability for completeness statements of many non-elementary problems. This hierarchy allows the classification of many decision problems with a non-elementary complexity, which occur naturally in logic, combinatorics, formal languages, verification, etc., with complexities ranging from simple towers of exponentials to Ackermannian and beyond.
    Comment: Version 3 is the published version in TOCT 8(1:3), 2016. I will keep updating the catalogue of problems from Section 6 in future revisions
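    For orientation, one standard presentation of the fast-growing functions underlying such hierarchies (the Löb-Wainer hierarchy; the paper's own definitions may differ in detail) is:

        F_0(x)     = x + 1
        F_{α+1}(x) = F_α^{x+1}(x)    (the (x+1)-fold iterate of F_α)
        F_λ(x)     = F_{λ(x)}(x)     (λ a limit ordinal with fundamental sequence λ(0), λ(1), ...)

    Here F_2 already grows exponentially, F_3 like a tower of exponentials, and F_ω at an Ackermannian rate, which matches the range of complexities mentioned above.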

    Primordial Evolution in the Finitary Process Soup

    A general and basic model of primordial evolution--a soup of reacting finitary and discrete processes--is employed to identify and analyze fundamental mechanisms that generate and maintain complex structures in prebiotic systems. The processes--ε-machines as defined in computational mechanics--and their interaction networks both provide well-defined notions of structure. This enables us to quantitatively demonstrate hierarchical self-organization in the soup in terms of complexity. We found that replicating processes evolve the strategy of successively building higher levels of organization by autocatalysis. Moreover, this is facilitated by local components that have low structural complexity, but high generality. In effect, the finitary process soup spontaneously evolves a selection pressure that favors such components. In light of the finitary process soup's generality, these results suggest a fundamental law of hierarchical systems: global complexity requires local simplicity.
    Comment: 7 pages, 10 figures; http://cse.ucdavis.edu/~cmg/compmech/pubs/pefps.ht
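    The notion of structural complexity used above can be made concrete: in computational mechanics, the structural (statistical) complexity of an ε-machine is the Shannon entropy of its distribution over causal states. The snippet below is a generic illustration of that measure, not code or data from the paper.

        # Statistical complexity C_mu: Shannon entropy of the causal-state
        # distribution of an epsilon-machine (generic illustration only).
        from math import log2

        def statistical_complexity(state_probs):
            """C_mu = -sum_s p(s) * log2 p(s) over causal states s."""
            return -sum(p * log2(p) for p in state_probs if p > 0)

        # A memoryless (single-state) process has zero structural complexity:
        print(statistical_complexity([1.0]))          # 0.0 bits
        # A two-state machine visited unevenly stores less than one bit:
        print(statistical_complexity([0.75, 0.25]))   # ~0.811 bits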

    An in-between "implicit" and "explicit" complexity: Automata

    Implicit Computational Complexity makes two aspects implicit: it manipulates programming languages rather than models of computation, and it internalizes the resource bounds rather than using external measures. We survey how automata theory has contributed to complexity with a machine-dependent model whose bounds are implicit.