
    Typeful Normalization by Evaluation

    We present the first typeful implementation of Normalization by Evaluation for the simply typed lambda-calculus with sums and control operators: we guarantee type preservation and eta-long (modulo commuting conversions), beta-normal forms using only Generalized Algebraic Data Types (GADTs) in a general-purpose programming language, here OCaml; and we account for sums and control operators with Continuation-Passing Style (CPS). First, we implement the standard NbE algorithm for the implicational fragment in a typeful way that is correct by construction. We then derive its call-by-value continuation-passing counterpart, which maps a lambda-term with sums and call/cc into a CPS term in normal form, expressed in a typed dedicated syntax. Beyond showcasing the expressive power of GADTs, we emphasize that type inference gives a smooth way to re-derive the encodings of the syntax and typing of normal forms in Continuation-Passing Style.
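    To make the reify/reflect core of such a typeful NbE concrete, here is a minimal OCaml sketch for the implicational fragment only. It is not the paper's code: it uses the host language's functions as the semantic domain and an untyped residual syntax for normal forms, whereas the paper works with fully typed syntax; the GADT of type codes is what makes reify and reflect type-correct by construction. All names below are chosen for illustration.

    (* Untyped residual syntax for normal forms. *)
    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    (* Type codes, indexed by the OCaml type of their semantic values:
       base types are interpreted as residual terms, arrows as OCaml functions. *)
    type _ tp =
      | Base : term tp
      | Arr  : 'a tp * 'b tp -> ('a -> 'b) tp

    let gensym =
      let r = ref 0 in
      fun () -> incr r; "x" ^ string_of_int !r

    (* reify maps a semantic value back to an eta-long, beta-normal term;
       reflect maps a neutral term into the semantic domain. *)
    let rec reify : type a. a tp -> a -> term = fun t v ->
      match t with
      | Base -> v
      | Arr (a, b) ->
          let x = gensym () in
          Lam (x, reify b (v (reflect a (Var x))))
    and reflect : type a. a tp -> term -> a = fun t e ->
      match t with
      | Base -> e
      | Arr (a, b) -> fun v -> reflect b (App (e, reify a v))

    (* Example: normalizing the S combinator at a concrete type. *)
    let s f g x = f x (g x)
    let s_normal_form =
      reify
        (Arr (Arr (Base, Arr (Base, Base)),
              Arr (Arr (Base, Base), Arr (Base, Base))))
        s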

    Quantum Programming Made Easy

    We present IQu, a quantum programming language that extends Reynolds' Idealized Algol, the paradigmatic core of Algol-like languages. IQu combines imperative programming with higher-order features, mediated by a simple type theory. IQu mildly merges its quantum features with the classical programming style that we can experiment with through Idealized Algol, the aim being to ease a transition towards the quantum programming world. The proposed extension proceeds along two main directions. First, IQu provides access to quantum co-processors by means of quantum stores. Second, IQu includes some support for the direct manipulation of quantum circuits, in accordance with recent trends in the development of quantum programming languages. Finally, we show that IQu is quite effective in expressing well-known quantum algorithms.
    Comment: In Proceedings Linearity-TLLA 2018, arXiv:1904.0615
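    As a rough reading aid for the two directions mentioned above, the following OCaml-style signature sketches what a quantum store plus first-class circuits could look like. Every name here is hypothetical: IQu is an Algol-like language with its own syntax and type theory, not an OCaml library, so this is only an illustration of the idea, not IQu's interface.

    (* Hypothetical interface: a quantum store holds registers on a
       co-processor, and circuits are first-class values that can be
       built, composed, and then applied imperatively to the store. *)
    module type QUANTUM_STORE = sig
      type qreg                                     (* a register in the quantum store *)
      type circuit                                  (* a first-class circuit description *)

      val alloc    : int -> qreg                    (* n qubits, initialized to |0...0> *)
      val hadamard : int -> circuit                 (* H gate on the given wire *)
      val cnot     : int -> int -> circuit          (* controlled-NOT on two wires *)
      val seq      : circuit -> circuit -> circuit  (* sequential composition *)
      val apply    : circuit -> qreg -> unit        (* imperative update of the store *)
      val measure  : qreg -> bool list              (* read classical results back *)
    end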

    POPLMark reloaded: Mechanizing proofs by logical relations

    We propose a new collection of benchmark problems in mechanizing the metatheory of programming languages, in order to compare and push the state of the art of proof assistants. In particular, we focus on proofs using logical relations (LRs) and propose establishing strong normalization of a simply typed calculus with a proof by Kripke-style LRs as a benchmark. We give a modern view of this well-understood problem by formulating our LR on well-typed terms. Using this case study, we share some of the lessons learned tackling this problem in different dependently typed proof environments. In particular, we consider the mechanization in Beluga, a proof environment that supports higher-order abstract syntax encodings, and contrast it with the development and strategies used in general-purpose proof assistants such as Coq and Agda. The goal of this paper is to engage the community in discussions on what support in proof environments is needed to truly bring mechanized metatheory to the masses, and to engage said community in the crafting of future benchmarks.
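    For readers unfamiliar with the technique, a standard Kripke-style logical relation for this benchmark (strong normalization of the simply typed lambda-calculus, stated on well-typed terms) looks roughly as follows; this is a textbook formulation and not necessarily the exact definition used in the paper:

    \Gamma \vdash t \in \mathcal{R}_{b} \iff \Gamma \vdash t : b \ \text{and}\ t \ \text{is strongly normalizing}
    \Gamma \vdash t \in \mathcal{R}_{A \to B} \iff \Gamma \vdash t : A \to B \ \text{and, for every weakening}\ \Delta \geq \Gamma\ \text{and every}\ \Delta \vdash u \in \mathcal{R}_{A},\ \Delta \vdash t\,u \in \mathcal{R}_{B}

    (with t implicitly weakened to \Delta in the application); the fundamental lemma then yields strong normalization for all well-typed terms.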

    Gardening with the Pythia: A Model of Continuity in a Dependent Setting


    Covering Spaces in Homotopy Type Theory

    Broadly speaking, algebraic topology consists of associating to topological spaces algebraic structures that give information about their structure. An elementary, but fundamental, example is provided by the theory of covering spaces, which associates groups with covering spaces in such a way that the universal cover corresponds to the fundamental group of the space. One natural question to ask is whether these connections can be stated in homotopy type theory, a new area linking type theory to homotopy theory. In this paper, we give an affirmative answer with a surprisingly concise definition of covering spaces in type theory; we are able to prove various expected properties about the newly defined covering spaces, including the connections with fundamental groups. An additional merit is that our work has been fully mechanized in the proof assistant Agda.
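    The concise definition alluded to above can be stated in one line of informal HoTT notation (modulo universe bookkeeping): a covering space over a type A is simply a set-valued family over A,

    \mathsf{Cov}(A) \;:\equiv\; \sum_{F : A \to \mathcal{U}} \prod_{a : A} \mathsf{isSet}(F\,a)

    and, for a pointed connected A, the universal cover is the family a \mapsto \| a_0 = a \|_0 of set-truncated path spaces, whose fiber over the base point is by definition the fundamental group \pi_1(A, a_0). This is one standard way to state the connection; the paper's precise formulation may differ in details.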

    The Tactician (extended version): A Seamless, Interactive Tactic Learner and Prover for Coq

    We present Tactician, a tactic learner and prover for the Coq Proof Assistant. Tactician helps users make tactical proof decisions while they retain control over the general proof strategy. To this end, Tactician learns from previously written tactic scripts and gives users either suggestions about the next tactic to be executed or altogether takes over the burden of proof synthesis. Tactician's goal is to provide users with a seamless, interactive, and intuitive experience together with robust and adaptive proof automation. In this paper, we give an overview of Tactician from the user's point of view, regarding both day-to-day usage and issues of package dependency management while learning in the large. Finally, we give a peek into Tactician's implementation as a Coq plugin and machine learning platform.
    Comment: 19 pages, 2 figures. This is an extended version of a paper published in CICM-2020. For the project website, see https://coq-tactician.github.i

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is the lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of the lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating the lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We resolve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e. the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
    Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
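    A classic family of terms makes the size explosion problem concrete (this particular family explodes under innermost evaluation; the construction actually used in the paper, adapted to leftmost-outermost reduction, is more refined, but the phenomenon is the same). Let

    \pi := \lambda x.\,\lambda y.\, y\,x\,x \qquad t_n := \pi\,(\pi\,(\cdots(\pi\,z)\cdots)) \quad (n \text{ copies of } \pi)

    Then t_n normalizes in exactly n beta steps to u_n, where u_0 := z and u_{k+1} := \lambda y.\, y\,u_k\,u_k, so |u_n| = \Theta(2^n): linearly many steps produce an exponentially large output, and the number of beta steps alone cannot naively bound the cost of writing down the result.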

    Recursion Schemes and the WMSO+U Logic

    We study the weak MSO logic extended by the unbounding quantifier (WMSO+U), which can express the fact that there exist arbitrarily large finite sets satisfying a given property. We prove that it is decidable whether the tree generated by a given higher-order recursion scheme satisfies a given sentence of WMSO+U.
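    Concretely, the unbounding quantifier has the following standard semantics: for a formula \varphi(X) with a free set variable X,

    \mathsf{U}X.\,\varphi(X) \quad \text{holds iff for every } n \in \mathbb{N} \text{ there is a finite set } X \text{ with } |X| \geq n \text{ such that } \varphi(X)

    i.e., \varphi is satisfied by finite sets of arbitrarily large size; WMSO is the weak fragment in which set quantifiers range over finite sets only.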

    Cubical Syntax for Reflection-Free Extensional Equality

    We contribute XTT, a cubical reconstruction of Observational Type Theory which extends Martin-Löf's intensional type theory with a dependent equality type that enjoys function extensionality and a judgmental version of the unicity of identity proofs principle (UIP): any two elements of the same equality type are judgmentally equal. Moreover, we conjecture that the typing relation can be decided in a practical way. In this paper, we establish an algebraic canonicity theorem using a novel cubical extension (independently proposed by Awodey) of the logical families or categorical gluing argument inspired by Coquand and Shulman: every closed element of boolean type is derivably equal to either 'true' or 'false'.
    Comment: Extended version; International Conference on Formal Structures for Computation and Deduction (FSCD), 201
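    Schematically (with notation chosen here for exposition, not necessarily the paper's), the two headline properties are a judgmental UIP rule for the dependent equality type and canonicity at the boolean type:

    \frac{\Gamma \vdash p : \mathsf{Eq}_A(a,b) \qquad \Gamma \vdash q : \mathsf{Eq}_A(a,b)}{\Gamma \vdash p \equiv q : \mathsf{Eq}_A(a,b)} \qquad\qquad \cdot \vdash b : \mathsf{bool} \;\Longrightarrow\; \cdot \vdash b \equiv \mathsf{true} : \mathsf{bool} \ \text{ or } \ \cdot \vdash b \equiv \mathsf{false} : \mathsf{bool}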