
    The involutions-as-principal-types/application-as-unification analogy

    In 2005, S. Abramsky introduced various universal models of computation based on Affine Combinatory Logic, consisting of partial involutions over a suitable formal language of moves, in order to discuss reversible computation in a game-theoretic setting. We investigate Abramsky's models from the point of view of the model theory of λ-calculus, focusing on the purely linear and affine fragments of Abramsky's combinatory algebras. Our approach stems from a structural analogy, not previously pointed out in the literature, between the partial involution interpreting a combinator and the principal type of that term with respect to a simple-types discipline for λ-calculus. This analogy lets us explain the somewhat awkward linear application of involutions arising from the Geometry of Interaction (GoI) as unification between principal types. Our approach immediately yields an answer to the open problem, raised by Abramsky, of characterising those finitely describable partial involutions which are denotations of combinators in the purely affine fragment. We also prove that the (purely) linear combinatory algebra of partial involutions is a (purely) linear λ-algebra, albeit not a combinatory model, while the (purely) affine combinatory algebra is not. In order to check the complex equations involved in the definition of an affine λ-algebra, we implement in Erlang the compilation of λ-terms as involutions, and their execution.
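    As an illustrative aside (not taken from the paper, whose implementation is in Erlang), the sketch below shows plain first-order unification over simple types in Haskell; in the analogy above, this is the operation that plays the role of linear application of partial involutions. All names are hypothetical.

        -- Simple types: type variables and arrows.
        import qualified Data.Map as M

        data Ty = TVar String | Arrow Ty Ty deriving (Eq, Show)

        type Subst = M.Map String Ty

        -- Apply a substitution to a type.
        apply :: Subst -> Ty -> Ty
        apply s t@(TVar a)  = M.findWithDefault t a s
        apply s (Arrow l r) = Arrow (apply s l) (apply s r)

        -- Occurs check: does variable a occur in t?
        occurs :: String -> Ty -> Bool
        occurs a (TVar b)    = a == b
        occurs a (Arrow l r) = occurs a l || occurs a r

        -- Most general unifier of two simple types, if one exists.
        unify :: Ty -> Ty -> Maybe Subst
        unify (TVar a) t = bindVar a t
        unify t (TVar a) = bindVar a t
        unify (Arrow l1 r1) (Arrow l2 r2) = do
          s1 <- unify l1 l2
          s2 <- unify (apply s1 r1) (apply s1 r2)
          pure (M.map (apply s2) s1 `M.union` s2)

        bindVar :: String -> Ty -> Maybe Subst
        bindVar a t
          | t == TVar a = Just M.empty
          | occurs a t  = Nothing      -- fails the occurs check
          | otherwise   = Just (M.singleton a t)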

    ML with PTIME complexity guarantees

    Implicit Computational Complexity is a line of research in which the possibility of inferring a valid property for a program implies that the program runs within a particular complexity class. Soft type systems are one of the research threads within this field. We present here a soft type system with ML-like polymorphism that enjoys decidable typechecking, type inference and typability problems, and gives polynomial-time guarantees on the running time of typed programs.

    A Structural Approach to Reversible Computation

    Reversibility is a key issue in the interface between computation and physics, and of growing importance as miniaturization progresses towards its physical limits. Most foundational work on reversible computing to date has focussed on simulations of low-level machine models. By contrast, we develop a more structural approach. We show how high-level functional programs can be mapped compositionally (i.e. in a syntax-directed fashion) into a simple kind of automata which are immediately seen to be reversible. The size of the automaton is linear in the size of the functional term. In mathematical terms, we are building a concrete model of functional computation. This construction stems directly from ideas arising in Geometry of Interaction and Linear Logic, but it can be understood without any knowledge of these topics. In fact, it serves as an excellent introduction to them. At the same time, an interesting logical delineation between reversible and irreversible forms of computation emerges from our analysis. Comment: 30 pages, appeared in Theoretical Computer Science.
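    To make the idea of compositionality concrete, here is a deliberately naive Haskell toy (not the paper's automata construction): a computation is packaged together with its inverse, so composition and inversion preserve reversibility by construction. All names are hypothetical.

        -- A toy reversible computation: a function paired with its inverse.
        data Iso a b = Iso { fwd :: a -> b, bwd :: b -> a }

        -- Composition of reversible computations is again reversible.
        compose :: Iso a b -> Iso b c -> Iso a c
        compose f g = Iso (fwd g . fwd f) (bwd f . bwd g)

        -- Every reversible computation can be run backwards.
        invert :: Iso a b -> Iso b a
        invert (Iso f b) = Iso b f

        -- Example: adding a constant is reversible.
        addK :: Int -> Iso Int Int
        addK k = Iso (+ k) (subtract k)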

    Lincx: A Linear Logical Framework with First-class Contexts

    Linear logic provides an elegant framework for modelling stateful, imperative and concurrent systems by viewing a context of assumptions as a set of resources. However, mechanizing the meta-theory of such systems remains a challenge, as we need to manage and reason about mixed contexts of linear and intuitionistic assumptions. We present Lincx, a contextual linear logical framework with first-class mixed contexts. Lincx allows us to model (linear) abstract syntax trees as syntactic structures that may depend on intuitionistic and linear assumptions. It can also serve as a foundation for reasoning about such structures. Lincx extends the linear logical framework LLF with first-class (linear) contexts and an equational theory of context joins that can otherwise be very tedious and intricate to develop. This work may also be viewed as a generalization of contextual LF that supports both intuitionistic and linear variables, functions, and assumptions. We describe a decidable type-theoretic foundation for Lincx that only characterizes canonical forms and show that our equational theory of context joins is associative and commutative. Finally, we outline how Lincx may serve as a practical foundation for mechanizing the meta-theory of stateful systems.

    Proof-theoretic investigations into integrated logical and functional programming

    This thesis is a proof-theoretic investigation of logic programming based on hereditary Harrop logic (as in lambdaProlog). After studying various proof systems for the first-order hereditary Harrop logic, we define the proof-theoretic semantics of a logic LFPL, intended as the basis of logic programming with functions, which extends higher-order hereditary Harrop logic by providing definition mechanisms for functions, in such a way that the logical specification of a function, rather than the function itself, may be used in proof search. In Chap. 3, we define, for the first-order hereditary Harrop fragment of LJ, the class of uniform linear focused (ULF) proofs (suitable for goal-directed search with backchaining and unification) and show that the ULF-proofs are in 1-1 correspondence with the expanded normal deductions, in Prawitz's sense. We give a system of proof-term annotations for LJ-proofs (where proof-terms uniquely represent proofs). We define a rewriting system on proof-terms (whose rules represent a subset of Kleene's permutations in LJ) and show that its irreducible proof-terms are those representing ULF-proofs, and that it is weakly normalising. We also show that the composition of Prawitz's mappings between LJ and NJ, restricted to ULF-proofs, is the identity. We take the view of logic programming where: a program P is a set of formulae; a goal G is a formula; and the different means of achieving G w.r.t. P correspond to the expanded normal deductions of G from the assumptions in P (rather than the traditional view, whereby the different means of goal-achievement correspond to the different answer substitutions). LFPL is defined in Chap. 4, by means of a sequent calculus. As in LeFun, it extends logic programming with functions and provides mechanisms for defining names for functions, maintaining proof search as the computation mechanism (contrary to languages such as ALF, Babel, Curry and Escher, based on equational logic, where the computation mechanism is some form of rewriting). LFPL also allows definitions for declaring logical properties of functions, called definitions of dependent type. Such definitions are of the form (f, x) =def (A, w) : ∃x:ρ.F, where f is a name for A and x is a name for w, a proof-term witnessing that the formula [A/x]F holds (i.e. A meets the specification ∃x:ρ.F). When searching for proofs, it may suffice to use the formula [A/x]F rather than A itself. We present an interpretation of LFPL into NNlambdanorm, a natural deduction system for hereditary Harrop logic with lambda-terms. The means of goal-achievement in LFPL are interpreted in NNlambdanorm essentially by cut-elimination, followed by an interpretation of cut-free sequent calculus proofs as normal deductions. We show that the use of definitions of dependent type may speed up proof search, because the equivalent proofs using no such definitions may be much longer, and because normalisation may be done lazily, since not all parts of the proof need to be exhibited. We sketch two methods for implementing LFPL, based on goal-directed proof search, differing in the mechanism for selecting definitions of dependent type on which to backchain. We discuss techniques for handling the redundancy arising from the equivalence of each proof using such a definition to one using no such definitions.

    Topics in Programming Languages, a Philosophical Analysis through the case of Prolog

    Programming languages seldom find proper anchorage in the philosophy of logic, language and science. What is more, philosophy of language seems to be restricted to natural languages and linguistics, and even philosophy of logic is rarely framed in terms of programming-language topics. The logic programming paradigm and Prolog are, thus, the most adequate paradigm and programming language to work on this subject, combining natural language processing and linguistics, logic programming and constriction methodology on both algorithms and procedures, on an overall philosophizing declarative status. Not only this, but the dimension of the Fifth Generation Computer Systems project related to strong AI, wherein Prolog took a major role, and its historical frame in the crucial dialectic between procedural and declarative paradigms and between structuralist and empiricist biases, serve, in exemplary form, to treat philosophy of logic, language and science in the contemporary age as well. In recounting Prolog's philosophical, mechanical and algorithmic harbingers, the opportunity is open to various routes, of which we exemplify some here. The mechanical-computational background explored by Pascal, Leibniz, Boole, Jacquard, Babbage and Konrad Zuse, up to the ACE (Alan Turing) and the EDVAC (von Neumann), offering the backbone of computer architecture, and, in parallel, the work of Turing, Church, Gödel, Kleene, von Neumann, Shannon and others on computability, thoroughly studied in detail, permit us to interpret the evolving realm of programming languages. The line from the lambda-calculus to the Algol family, the declarative and procedural split with the C language and Prolog, and the ensuing branching, explosion and further delimitation of programming languages, are thereupon inspected so as to relate them to the syntax, semantics and philosophical élan of logic programming and Prolog.

    Modular Inference of Linear Types for Multiplicity-Annotated Arrows

    Bernardy et al. [2018] proposed a linear type system λ^q_→ as a core type system of Linear Haskell. In this system, linearity is represented by annotated arrow types A →_m B, where m denotes the multiplicity of the argument. Thanks to this representation, existing non-linear code typechecks as-is, and newly written linear code can be used with existing non-linear code in many cases. However, little is known about type inference for λ^q_→. Although the Linear Haskell implementation is equipped with type inference, its algorithm has not been formalized, and the implementation often fails to infer principal types, especially for higher-order functions. In this paper, based on OutsideIn(X) [Vytiniotis et al., 2011], we propose an inference system for a rank-1 qualified-typed variant of λ^q_→ which infers principal types. A technical challenge in this new setting is dealing with the ambiguous types inferred by naive qualified typing. We address this ambiguity issue through quantifier elimination and demonstrate the effectiveness of the approach with examples. Comment: The full version of our paper to appear in ESOP 2021.
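    For orientation, here is a minimal example of the corresponding surface syntax in GHC's Linear Haskell (assuming GHC 9.0 or later with the LinearTypes extension); the annotated arrow A →_m B roughly corresponds to the `a %m -> b` notation below. This illustrates the setting only, not the inference algorithm proposed in the paper.

        {-# LANGUAGE LinearTypes #-}

        -- A linear (multiplicity-1) arrow: the argument must be consumed
        -- exactly once, which 'swap' satisfies.
        swap :: (a, b) %1 -> (b, a)
        swap (x, y) = (y, x)

        -- An unrestricted arrow (multiplicity Many): ordinary non-linear
        -- code keeps its usual type and still typechecks unchanged.
        first :: (a, b) -> a
        first (x, _) = x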

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties. Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
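    As a point of reference, the following is a naive leftmost-outermost (normal-order) evaluator for the plain λ-calculus with de Bruijn indices, written in Haskell; counting its β-steps gives the cost measure the paper proves invariant, although a direct implementation like this one remains exposed to the size-explosion problem described above. This is an independent sketch, not the paper's LSC machinery.

        -- Untyped λ-terms with de Bruijn indices.
        data Term = Var Int | Lam Term | App Term Term deriving Show

        -- Shift free indices >= c by d.
        shift :: Int -> Int -> Term -> Term
        shift d c (Var k)   = Var (if k >= c then k + d else k)
        shift d c (Lam t)   = Lam (shift d (c + 1) t)
        shift d c (App t u) = App (shift d c t) (shift d c u)

        -- Substitute s for index j.
        subst :: Int -> Term -> Term -> Term
        subst j s (Var k)   = if k == j then s else Var k
        subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
        subst j s (App t u) = App (subst j s t) (subst j s u)

        -- One leftmost-outermost β-step, if any.
        step :: Term -> Maybe Term
        step (App (Lam t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
        step (App t u)       = case step t of
                                 Just t' -> Just (App t' u)
                                 Nothing -> App t <$> step u
        step (Lam t)         = Lam <$> step t
        step (Var _)         = Nothing

        -- Normalise, counting β-steps: the invariant cost model.
        normalise :: Term -> (Int, Term)
        normalise t = case step t of
                        Nothing -> (0, t)
                        Just t' -> let (n, v) = normalise t' in (n + 1, v)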