96 research outputs found

    Syntactically and semantically regular languages of lambda-terms coincide through logical relations

    A fundamental theme in automata theory is regular languages of words and trees, and their many equivalent definitions. Salvati has proposed a generalization to regular languages of simply typed λ-terms, defined using denotational semantics in finite sets. We provide here some evidence for its robustness. First, we give an equivalent syntactic characterization that naturally extends the seminal work of Hillebrand and Kanellakis connecting regular languages of words and syntactic λ-definability. Second, we show that any finitary extensional model of the simply typed λ-calculus, when used in Salvati's definition, recognizes exactly the same class of languages of λ-terms as the category of finite sets does. The proofs of these two results rely on logical relations and can be seen as instances of a more general construction of a categorical nature, inspired by previous categorical accounts of logical relations using the gluing construction. Comment: The proofs on "finitely pointable" CCCs in versions 1 and 2 were wrong; we now make slightly weaker claims on well-pointed locally finite CCCs. New in this version: added reference [3] and official DOI (proceedings of CSL 2024).
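
    As a rough illustration of recognition by finite denotational semantics, in the spirit of the Hillebrand–Kanellakis connection rather than taken from the paper (the names Sym, ChurchWord, churchWord, State and recognises are invented here): a word Church-encoded as its fold can be "run" in a finite set of states, which is exactly a DFA computation.

        -- Minimal sketch: a word over {A,B}, Church-encoded as its fold, is
        -- recognised by interpreting that fold in a finite set of DFA states.
        data Sym = A | B deriving (Eq, Show)

        -- Church encoding of a word: interpret each letter as a state
        -- transformer and fold over the word from an initial state.
        type ChurchWord s = (s -> s) -> (s -> s) -> s -> s

        churchWord :: [Sym] -> ChurchWord s
        churchWord w fa fb q0 = foldl step q0 w
          where step q A = fa q
                step q B = fb q

        -- A finite semantics: the two states of a DFA for "even number of A's".
        data State = Even | Odd deriving (Eq, Show)

        recognises :: ChurchWord State -> Bool
        recognises t = t flipA id Even == Even
          where flipA Even = Odd
                flipA Odd  = Even

        -- recognises (churchWord [A, B, A]) == True   (two A's, an even number)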

    Heterogeneous substitution systems revisited

    Matthes and Uustalu (TCS 327(1-2):155-174, 2004) presented a categorical description of substitution systems capable of capturing syntax involving binding which is independent of whether the syntax is made up from least or greatest fixed points. We extend this work in two directions: we continue the analysis by creating more categorical structure, in particular by organizing substitution systems into a category and studying its properties, and we develop the proofs of the results of the cited paper and our new ones in UniMath, a recent library of univalent mathematics formalized in the Coq theorem prover. Comment: 24 pages.
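
    As a very rough, non-categorical illustration of the kind of structure being axiomatized (invented for this listing, not the paper's formulation): untyped λ-syntax as a nested datatype whose monadic bind is simultaneous substitution.

        {-# LANGUAGE DeriveFunctor #-}
        -- Sketch only: syntax with binding whose substitution is a monad structure.
        data Lam v = Var v
                   | App (Lam v) (Lam v)
                   | Abs (Lam (Maybe v))        -- Maybe v = one extra bound variable
                   deriving Functor

        instance Applicative Lam where
          pure    = Var
          f <*> a = f >>= \g -> fmap g a

        -- Bind substitutes a term for every free variable.
        instance Monad Lam where
          Var v   >>= s = s v
          App t u >>= s = App (t >>= s) (u >>= s)
          Abs b   >>= s = Abs (b >>= s')
            where s' Nothing  = Var Nothing          -- the bound variable stays put
                  s' (Just v) = fmap Just (s v)      -- weaken the substituted term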

    On the Semantics of Intensionality and Intensional Recursion

    Intensionality is a phenomenon that occurs in logic and computation. In the most general sense, a function is intensional if it operates at a level finer than (extensional) equality. This is a familiar setting for computer scientists, who often study different programs or processes that are interchangeable, i.e. extensionally equal, even though they are not implemented in the same way and are thus intensionally distinct. Concomitant with intensionality is the phenomenon of intensional recursion, which refers to the ability of a program to have access to its own code. In computability theory, intensional recursion is enabled by Kleene's Second Recursion Theorem. This thesis is concerned with the crafting of a logical toolkit through which these phenomena can be studied. Our main contribution is a framework in which mathematical and computational constructions can be considered either extensionally, i.e. as abstract values, or intensionally, i.e. as fine-grained descriptions of their construction. Once this is achieved, it may be used to analyse intensional recursion. Comment: DPhil thesis, Department of Computer Science & St John's College, University of Oxford.
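
    A one-line illustration of the extensional/intensional distinction the thesis starts from (the functions sumTo and sumTo' are invented for this listing): two programs can denote the same function while being different as descriptions.

        -- Both compute the same function on non-negative Int (extensionally equal),
        -- but as programs they are distinct: different code, different cost.
        sumTo, sumTo' :: Int -> Int
        sumTo  n = sum [1 .. n]          -- linear-time loop
        sumTo' n = n * (n + 1) `div` 2   -- constant-time closed formula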

    A synthetic axiomatization of Map Theory

    This paper presents a substantially simplified axiomatization of Map Theory and proves the consistency of this axiomatization in ZFC under the assumption that there exists an inaccessible ordinal. Map Theory axiomatizes lambda calculus plus Hilbert's epsilon operator. All theorems of ZFC set theory, including the axiom of foundation, are provable in Map Theory, and if one omits Hilbert's epsilon operator from Map Theory then one is left with a computer programming language. Map Theory fulfills Church's original aim of introducing lambda calculus. Map Theory is suited for reasoning about classical mathematics as well as computer programs. Furthermore, Map Theory is suited for eliminating the barrier between classical mathematics and computer science rather than just supporting the two fields side by side. Map Theory axiomatizes a universe of "maps", some of which are "wellfounded". The class of wellfounded maps in Map Theory corresponds to the universe of sets in ZFC. The first version MT0 of Map Theory had axioms which populated the class of wellfounded maps, much like the power set axiom et al. populates the universe of ZFC. The new axiomatization MT of Map Theory is "synthetic" in the sense that the class of wellfounded maps is defined inside Map Theory rather than being introduced through axioms. In the paper we define the notion of κ- and κσ-expansions and prove that if σ is the smallest strongly inaccessible cardinal then canonical κσ-expansions are models of MT (which proves the consistency). Furthermore, in the appendix, we prove that canonical ω-expansions are fully abstract models of the computational part of Map Theory.
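
    For readers unfamiliar with Hilbert's epsilon operator mentioned above, its characteristic axiom (the standard textbook formulation, stated here for orientation rather than as one of Map Theory's own axioms) says that ε picks a witness whenever one exists:

        \exists x.\, \varphi(x) \;\Rightarrow\; \varphi(\varepsilon x.\, \varphi(x))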

    Polarised Intermediate Representation of Lambda Calculus with Sums

    Note (Dec. 2015): see the added footnote on page 7. The theory of the λ-calculus with extensional sums is more complex than with only pairs and functions. We propose an untyped representation, an intermediate calculus, for the λ-calculus with sums, based on the following principles: 1) computation is described as the reduction of pairs of an expression and a context, where the context must be represented inside-out; 2) operations are represented abstractly by their transition rule; 3) positive and negative expressions are respectively eager and lazy, and this polarity is an approximation of the type. We offer an introduction from the ground up to our approach, and we review the benefits. A structure of alternating phases naturally emerges through the study of normal forms, offering a reconstruction of focusing. Considering a further purity assumption, we obtain maximal multi-focusing. As an application, we can deduce a syntax-directed algorithm to decide the equivalence of normal forms in the simply-typed λ-calculus with sums, and justify it with our intermediate calculus.
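
    A minimal sketch of principle 1 above, computation as reduction of pairs of an expression and an inside-out context, in the style of a small-step machine. This is an illustration invented for this listing (call-by-name, no sums, naive substitution), not the paper's intermediate calculus; Term, Frame, step and subst are names chosen here.

        -- Plain λ-terms and evaluation contexts stored inside-out:
        -- the head of the list is the innermost frame.
        data Term  = Var String | Lam String Term | App Term Term
        data Frame = AppArg Term     -- evaluating the function; this argument waits
        type Context = [Frame]

        -- One reduction step on a pair of an expression and its context.
        step :: (Term, Context) -> Maybe (Term, Context)
        step (App f a, k)            = Just (f, AppArg a : k)   -- focus on the function
        step (Lam x b, AppArg a : k) = Just (subst x a b, k)    -- β-reduction
        step _                       = Nothing                  -- value or stuck term

        -- Naive, capture-unsafe substitution, enough for the illustration.
        subst :: String -> Term -> Term -> Term
        subst x s t = case t of
          Var y   -> if y == x then s else Var y
          App f a -> App (subst x s f) (subst x s a)
          Lam y b -> if y == x then Lam y b else Lam y (subst x s b)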

    Formalising Mathematics in Simple Type Theory

    Despite the considerable interest in new dependent type theories, simple type theory (which dates from 1940) is sufficient to formalise serious topics in mathematics. This point is demonstrated by examining formal proofs of a theorem about stereographic projections. A formalisation using the HOL Light proof assistant is contrasted with one using Isabelle/HOL. Harrison's technique for formalising Euclidean spaces is contrasted with an approach using Isabelle/HOL's axiomatic type classes. However, every formal system can be outgrown, and mathematics should be formalised with a view that it will eventually migrate to a new formalism.

    Optimizing and Incrementalizing Higher-order Collection Queries by AST Transformation

    In modern programming languages, queries on in-memory collections are often more expensive than needed. While database queries can be readily optimized, it is often not trivial to use them to express collection queries which employ nested data and first-class functions, as enabled by functional programming languages. Collection queries can be optimized and incrementalized by hand, but this reduces modularity, and is often too error-prone to be feasible or to enable maintenance of resulting programs.
    To free programmers from such burdens, in this thesis we study how to optimize and incrementalize such collection queries. Resulting programs are expressed in the same core language, so that they can be subjected to other standard optimizations. To enable optimizing collection queries which occur inside programs, we develop a staged variant of the Scala collection API that reifies queries as ASTs. On top of this interface, we adapt domain-specific optimizations from the fields of programming languages and databases; among others, we rewrite queries to use indexes chosen by programmers. Thanks to the use of indexes we show significant speedups in our experimental evaluation, with an average of 12x and a maximum of 12800x. To incrementalize higher-order programs by program transformation, we extend finite differencing [Paige and Koenig, 1982; Blakeley et al., 1986; Gupta and Mumick, 1999] and develop the first approach to incrementalization by program transformation for higher-order programs. Base programs are transformed to derivatives, programs that transform input changes to output changes. We prove that our incrementalization approach is correct: we develop the theory underlying incrementalization for simply-typed and untyped λ-calculus, and discuss extensions to System F. Derivatives often need to reuse results produced by base programs; to enable such reuse, we extend work by Liu and Teitelbaum [1995] to higher-order programs, and develop and prove correct a program transformation converting higher-order programs to cache-transfer style. For efficient incrementalization, it is necessary to choose appropriate primitive operations and to incrementalize them by hand. We incrementalize a significant subset of collection operations and perform case studies, showing order-of-magnitude speedups both in practice and in asymptotic complexity.
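
    A toy example, invented here with made-up names total and dTotal, of what a "derivative" means in this setting: a second program that maps a change to the input to a change to the output, so the result can be updated without recomputing it from scratch.

        -- Base program: the total of a collection of numbers.
        total :: [Int] -> Int
        total = sum

        -- Derivative of `total` for the simple change "these elements were added":
        -- it returns the change to the output, ignoring the (unchanged) old input.
        dTotal :: [Int] -> [Int] -> Int
        dTotal _old added = sum added

        -- Incremental update law this derivative satisfies:
        --   total (xs ++ added) == total xs + dTotal xs added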

    On polymorphic sessions and functions: A tale of two (fully abstract) encodings

    This work exploits the logical foundation of session types to determine what kind of type discipline for the π-calculus can exactly capture, and is captured by, λ-calculus behaviours. Leveraging the proof theoretic content of the soundness and completeness of sequent calculus and natural deduction presentations of linear logic, we develop the first mutually inverse and fully abstract processes-as-functions and functions-as-processes encodings between a polymorphic session π-calculus and a linear formulation of System F. We are then able to derive results of the session calculus from the theory of the λ-calculus: (1) we obtain a characterisation of inductive and coinductive session types via their algebraic representations in System F; and (2) we extend our results to account for value and process passing, entailing strong normalisation.
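
    The "algebraic representations in System F" mentioned in (1) are, in their standard textbook form (stated here for orientation, not as the paper's session-typed versions), the familiar encodings of least and greatest fixed points:

        \mu X.\, F(X) \;\cong\; \forall X.\, (F(X) \to X) \to X
        \nu X.\, F(X) \;\cong\; \exists X.\, (X \to F(X)) \times X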