
    Relating Church-Style and Curry-Style Subtyping

    Type theories with higher-order subtyping or singleton types are examples of systems where computation rules for variables are affected by type information in the context. A complication for these systems is that bounds declared in the context do not interact well with the logical relation proof of completeness or termination. This paper proposes a natural modification to the type syntax for F-Omega-Sub, adding the variable's bound to the variable type constructor, thereby separating the computational behavior of the variable from the context. The subtyping algorithm for F-Omega-Sub can then be given on types alone, without context or kind information. As a consequence, the metatheory follows the general approach for type systems without computational information in the context, including a simple logical relation definition without Kripke-style indexing by context. This new presentation of the system is shown to be equivalent to the traditional presentation without bounds on the variable type constructor.
    In Proceedings ITRS 2010, arXiv:1101.410
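    The idea can be sketched in miniature for plain F-Sub, the fragment of F-Omega-Sub without the kind level. The Haskell below is our own illustrative sketch, not the paper's formal system: because every Var carries its declared bound, the algorithmic subtype check needs no context at all.

        -- Minimal sketch (assumed representation, not the paper's syntax):
        -- a type variable carries its declared upper bound.
        data Ty
          = Top
          | Var String Ty   -- variable together with its bound
          | Arrow Ty Ty
          deriving Show

        -- Context-free algorithmic subtyping in the style of kernel F-Sub.
        subTy :: Ty -> Ty -> Bool
        subTy _ Top = True
        subTy (Var a _) (Var b _) | a == b = True   -- reflexivity on variables
        subTy (Var _ bound) t = subTy bound t       -- promote through the bound
        subTy (Arrow s1 s2) (Arrow t1 t2) =
          subTy t1 s1 && subTy s2 t2                -- contravariant domain
        subTy _ _ = False

    For example, if x is declared with bound Top -> Top, then subTy (Var "x" (Arrow Top Top)) (Arrow Top Top) succeeds by promotion, with no environment threaded through the check.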

    An estimation for the lengths of reduction sequences of the λμρθ-calculus

    Since it was realized that the Curry-Howard isomorphism can be extended to the case of classical logic as well, several calculi have appeared as candidates for the encodings of proofs in classical logic. One of the most extensively studied among them is the λμ-calculus of Parigot. In this paper, based on the result of Xi presented for the λ-calculus, we give an upper bound for the lengths of the reduction sequences in the λμ-calculus extended with the ρ- and θ-rules. Surprisingly, our results show that the new terms and the new rules do not add to the computational complexity of the calculus, despite the fact that μ-abstraction is able to consume an unbounded number of arguments by virtue of the μ-rule.
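    For readers unfamiliar with it, the μ-rule referred to above is the structural rule of Parigot's λμ-calculus; in one common notation (conventions vary between presentations) it reads

        (\mu\alpha.\,M)\,N \;\to_{\mu}\; \mu\alpha.\,M[\,[\alpha]P := [\alpha](P\,N)\,]

    where every subterm of M named by α receives the extra argument N. This is why a single μ-abstraction can consume an unbounded number of arguments, and why the upper bound is perhaps surprising.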

    Decreasing Diagrams for Confluence and Commutation

    Like termination, confluence is a central property of rewrite systems. Unlike for termination, however, there exists no known complexity hierarchy for confluence. In this paper we investigate whether the decreasing diagrams technique can be used to obtain such a hierarchy. The decreasing diagrams technique is one of the strongest and most versatile methods for proving confluence of abstract rewrite systems. It is complete for countable systems, and it has many well-known confluence criteria as corollaries. So what makes decreasing diagrams so powerful? In contrast to other confluence techniques, decreasing diagrams employ a labelling of the steps with labels from a well-founded order in order to conclude confluence of the underlying unlabelled relation. Hence it is natural to ask how the size of the label set influences the strength of the technique. In particular, what class of abstract rewrite systems can be proven confluent using decreasing diagrams restricted to 1 label, 2 labels, 3 labels, and so on? Surprisingly, we find that two labels suffice for proving confluence for every abstract rewrite system having the cofinality property, thus in particular for every confluent, countable system. Secondly, we show that this result stands in sharp contrast to the situation for commutation of rewrite relations, where the hierarchy does not collapse. Thirdly, investigating the possibility of a confluence hierarchy, we determine the first-order (non-)definability of the notion of confluence and related properties, using techniques from finite model theory. We find that in particular Hanf's theorem is fruitful for elegant proofs of undefinability of properties of abstract rewrite systems.
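    As a small concrete counterpart, confluence of a finite abstract rewrite system can be checked directly by testing every peak for joinability. The Haskell below is our own minimal sketch; note that it decides confluence exhaustively rather than implementing the decreasing diagrams technique itself, which applies to infinite systems as well.

        import Data.List (nub)

        -- A finite abstract rewrite system: (a, b) means a -> b.
        type ARS a = [(a, a)]

        -- All reducts of x, i.e. the objects reachable in zero or more steps.
        reducts :: Eq a => ARS a -> a -> [a]
        reducts r x = go [x]
          where
            go seen =
              let new = [b | (a, b) <- r, a `elem` seen, b `notElem` seen]
              in if null new then seen else go (nub (seen ++ new))

        -- Two objects are joinable if they share a common reduct.
        joinable :: Eq a => ARS a -> a -> a -> Bool
        joinable r x y = any (`elem` reducts r y) (reducts r x)

        -- Confluence: every peak y <-* x ->* z can be joined.
        confluent :: Eq a => ARS a -> Bool
        confluent r =
          and [joinable r y z | x <- objs, y <- reducts r x, z <- reducts r x]
          where objs = nub (concat [[a, b] | (a, b) <- r])

    For instance, confluent [(1,2),(1,3),(2,4),(3,4)] evaluates to True, while confluent [(1,2),(1,3)] does not.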

    Tarski's influence on computer science

    The influence of Alfred Tarski on computer science was indirect but significant in a number of directions, and was in certain respects fundamental. Surveyed here is Tarski's work on the decision procedure for algebra and geometry, the method of elimination of quantifiers, the semantics of formal languages, model-theoretic preservation theorems, and algebraic logic; various connections of each with computer science are taken up.
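    A one-line example conveys the flavor of quantifier elimination over the real field, the engine behind Tarski's decision procedure: an existential quantifier over a quadratic can be traded for a quantifier-free condition on the coefficients,

        \exists x\,(ax^2 + bx + c = 0) \;\Longleftrightarrow\; (a \neq 0 \wedge b^2 - 4ac \geq 0) \vee (a = 0 \wedge b \neq 0) \vee (a = 0 \wedge b = 0 \wedge c = 0)

    Iterating such eliminations reduces any first-order sentence about the reals to a decidable Boolean combination of polynomial sign conditions.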

    Algebraic and Combinatorial Methods in Computational Complexity

    At its core, much of Computational Complexity is concerned with combinatorial objects and structures. But it has often proven true that the best way to prove things about these combinatorial objects is by establishing a connection (perhaps approximate) to a better-behaved algebraic setting. Indeed, many of the deepest and most powerful results in Computational Complexity rely on algebraic proof techniques. The PCP characterization of NP and the Agrawal-Kayal-Saxena polynomial-time primality test are two prominent examples. Recently, there have been some works going in the opposite direction, giving alternative combinatorial proofs for results that were originally proved algebraically. These alternative proofs can yield important improvements because they are closer to the underlying problems and avoid the losses incurred in passing to the algebraic setting. A prominent example is Dinur's proof of the PCP Theorem via gap amplification, which yielded short PCPs with only a polylogarithmic length blowup (which had been the focus of significant research effort up to that point). We see here (and in a number of recent works) an exciting interplay between algebraic and combinatorial techniques. This seminar aims to capitalize on recent progress and bring together researchers who are using a diverse array of algebraic and combinatorial methods in a variety of settings.

    Multi-Head Finite Automata: Characterizations, Concepts and Open Problems

    Multi-head finite automata were introduced in (Rabin, 1964) and (Rosenberg, 1966). Since then, a vast literature on computational and descriptional complexity issues on multi-head finite automata has been developed, documenting the importance of these devices. Although multi-head finite automata are a simple concept, their computational behavior can already be very complex, leading to undecidable or even non-semi-decidable problems on these devices such as, for example, emptiness, finiteness, universality, and equivalence. These strong negative results trigger the study of subclasses and alternative characterizations of multi-head finite automata for a better understanding of the nature of non-recursive trade-offs and, thus, of the borderline between decidable and undecidable problems. In the present paper, we tour a fragment of this literature.
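    To make the model concrete, here is our own minimal sketch (not drawn from the survey) of a deterministic two-head one-way automaton recognizing { a^n b^n | n >= 0 }, a language beyond the reach of any one-head finite automaton: head 2 first skips the a-block, then both heads advance in lockstep, head 1 over the a's and head 2 over the b's.

        -- Sketch of a two-head one-way DFA for { a^n b^n | n >= 0 }.
        twoHeadAnBn :: String -> Bool
        twoHeadAnBn w = go 0 aBlock
          where
            n      = length w
            aBlock = length (takeWhile (== 'a') w)  -- head 2 skips the a's first
            go h1 h2
              | h1 < n && w !! h1 == 'a' =          -- head 1 still on an 'a':
                  h2 < n && w !! h2 == 'b' && go (h1 + 1) (h2 + 1)
              | otherwise =                         -- accept iff both blocks
                  h1 == aBlock && h2 == n           -- are used up exactly

    Thus twoHeadAnBn "aabb" is True while twoHeadAnBn "aab" and twoHeadAnBn "abab" are False: a second head already buys recognition of a non-regular language.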

    Higher-order subtyping and its decidability

    We define the typed lambda calculus Fω∧ (F-omega-meet), a natural generalization of Girard's system Fω (F-omega) with intersection types and bounded polymorphism. A novel aspect of our presentation is the use of term rewriting techniques to present intersection types, which clearly splits the computational semantics (reduction rules) from the syntax (inference rules) of the system. We establish properties such as Church-Rosser for the reduction relation on types and terms, and strong normalization for the reduction on types. We prove that types are preserved by computation (subject reduction), and that the system satisfies the minimal types property. We define algorithms for type checking and subtype checking. The development culminates in the proof of decidability of typing in Fω∧, which contains the first proof of decidability of subtyping for a higher-order lambda calculus with subtyping.
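    The flavor of algorithmic subtyping with intersections can be conveyed by a far smaller system. The Haskell below is our own sketch for first-order types only (no bounded quantification, no type operators), implementing the basic rules S <= T1 /\ T2 iff S <= T1 and S <= T2, and T1 /\ T2 <= S if T1 <= S or T2 <= S; full intersection type systems add distributivity laws that are omitted here.

        -- Sketch: algorithmic subtyping for simple types with intersections.
        data Ty = Base String | Arrow Ty Ty | Inter Ty Ty
          deriving (Eq, Show)

        sub :: Ty -> Ty -> Bool
        sub s (Inter t1 t2) = sub s t1 && sub s t2   -- split on the right first
        sub (Inter s1 s2) t = sub s1 t || sub s2 t   -- then project on the left
        sub (Base a) (Base b) = a == b
        sub (Arrow s1 s2) (Arrow t1 t2) =
          sub t1 s1 && sub s2 t2                     -- contravariant domain
        sub _ _ = False

    Trying the right-intersection rule before the left one matters: it makes sub (Inter a b) (Inter b a) succeed by splitting the goal before projecting.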

    A category-like structure of computational paths for parallel reduction (Proof theory and related topics)

    We introduce a formal system of reduction paths as a category-like structure induced from a digraph. Our motivation behind this work comes from a quantitative analysis of reduction systems based on the perspective of computational cost and computational orbit. From this perspective, we define a formal system of reduction paths for parallel reduction, wherein reduction paths are generated from a quiver by means of three path operators. Next, we introduce an equational theory and reduction rules for the reduction paths, and show that the rules on paths are terminating and confluent, so that normal paths are obtained. Following the notion of normal paths, a graphical representation of reduction paths is provided. Then we prove that the reduction graph is a plane graph, and the unique-path and universal common-reduct properties are established. Based on this, a set of transformation rules from a conversion sequence to a reduction path leading to the universal common-reduct is given under a certain strategy. Finally, path matrices are defined as block matrices of adjacency matrices in order to count reduction orbits.
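    The path-matrix idea rests on a classical fact that is easy to state in code: the (i, j) entry of the k-th power of a digraph's adjacency matrix counts the paths of length k from i to j. The Haskell below is our own illustration of that basic fact, not the paper's block-matrix construction.

        import Data.List (transpose)

        type Matrix = [[Int]]

        -- Matrix product over the integers.
        mmul :: Matrix -> Matrix -> Matrix
        mmul a b = [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]

        -- k-th power of a square matrix, k >= 1.
        mpow :: Matrix -> Int -> Matrix
        mpow a 1 = a
        mpow a k = mmul a (mpow a (k - 1))

    On the diamond quiver 1 -> 2, 1 -> 3, 2 -> 4, 3 -> 4, with adjacency matrix adj = [[0,1,1,0],[0,0,0,1],[0,0,0,1],[0,0,0,0]], the entry of mpow adj 2 at position (1, 4) is 2: the two parallel reduction paths from 1 to the common reduct 4.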