
    Degrees of extensionality in the theory of Böhm trees and Sallé's conjecture

    The main observational equivalences of the untyped lambda-calculus have been characterized in terms of extensional equalities between Böhm trees. It is well known that the lambda-theory H*, arising by taking as observables the head normal forms, equates two lambda-terms whenever their Böhm trees are equal up to countably many possibly infinite eta-expansions. Similarly, two lambda-terms are equal in Morris's original observational theory H+, generated by considering as observable the beta-normal forms, whenever their Böhm trees are equal up to countably many finite eta-expansions. The lambda-calculus also possesses a strong notion of extensionality called "the omega-rule", which has been the subject of many investigations. It is a longstanding open problem whether the equivalence B-omega, obtained by closing the theory of Böhm trees under the omega-rule, is strictly included in H+, as conjectured by Sallé in the seventies. In this paper we demonstrate that the two aforementioned theories actually coincide, thus disproving Sallé's conjecture. The proof technique we develop for proving the latter inclusion is general enough to provide as a byproduct a new characterization, based on bounded eta-expansions, of the least extensional equality between Böhm trees. Together, these results provide a taxonomy of the different degrees of extensionality in the theory of Böhm trees.
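    To make the distinction concrete: a finite eta-expansion replaces a term M by λx. M x with x fresh, and H+ allows countably many such finite expansions, whereas H* also admits infinite ones. Below is a minimal Haskell sketch of one-step eta-expansion on named lambda terms; the type and helper names (Term, freeVars, etaExpand) are ours, for illustration only, not from the paper.

```haskell
-- Untyped lambda terms with named variables.
data Term = Var String | Lam String Term | App Term Term
  deriving (Show, Eq)

-- Free variables of a term.
freeVars :: Term -> [String]
freeVars (Var x)   = [x]
freeVars (Lam x t) = filter (/= x) (freeVars t)
freeVars (App t u) = freeVars t ++ freeVars u

-- One finite eta-expansion: M becomes \x. M x for a fresh x.
etaExpand :: Term -> Term
etaExpand m = Lam x (App m (Var x))
  where
    x          = head [v | v <- candidates, v `notElem` freeVars m]
    candidates = ["x" ++ show n | n <- [0 :: Int ..]]
```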

    Categorical combinators

    Our main aim is to present the connection between λ-calculus and Cartesian closed categories, both in an untyped and purely syntactic setting. More specifically, we establish a syntactic equivalence theorem between what we call categorical combinatory logic and λ-calculus with explicit products and projections, with β and η-rules as well as with surjective pairing. “Combinatory logic” is of course inspired by Curry's combinatory logic, based on the well-known S, K, I. Our combinatory logic is “categorical” because its combinators and rules are obtained by extracting untyped information from Cartesian closed categories (looking at arrows only, thus forgetting about objects). Compiling λ-calculus into these combinators happens to be natural and provokes only n log n code expansion. Moreover, categorical combinatory logic is entirely faithful to β-reduction, where combinatory logic needs additional, rather complex and unnatural axioms to be. The connection easily extends to the corresponding typed calculi, where typed categorical combinatory logic is a free Cartesian closed category in which the notion of terminal object is replaced by the explicit manipulation of applying (a function to its argument) and coupling (arguments to build data in products). Our syntactic equivalences induce equivalences at the model level. The paper is intended as a mathematical foundation for developing implementations of functional programming languages based on a “categorical abstract machine,” as developed in a companion paper (Cousineau, Curien, and Mauny, in “Proceedings, ACM Conf. on Functional Programming Languages and Computer Architecture,” Nancy, 1985).
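    As a rough illustration of the kind of compilation described here, the following Haskell sketch translates de Bruijn lambda terms into categorical combinators in the spirit of Curien's scheme: variable n becomes Snd composed with n projections Fst, abstraction becomes currying, and application becomes App composed with a pairing of the compiled subterms. The constructor names are ours, and this toy version makes no claim about the n log n expansion bound.

```haskell
-- Lambda terms in de Bruijn notation.
data Tm = Ix Int        -- variable, as distance to its binder
        | Ab Tm         -- abstraction
        | Apl Tm Tm     -- application

-- Categorical combinators: Comp f g denotes the composite f . g.
data CC = Id | Fst | Snd | Comp CC CC | Pair CC CC | Cur CC | App
  deriving Show

-- Compilation: environments are iterated pairs, accessed by Fst/Snd.
compile :: Tm -> CC
compile (Ix 0)    = Snd                               -- top of the environment
compile (Ix n)    = Comp (compile (Ix (n - 1))) Fst   -- skip one binding
compile (Ab t)    = Cur (compile t)                   -- lambda becomes currying
compile (Apl t u) = Comp App (Pair (compile t) (compile u))
```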

    Formal verification of higher-order probabilistic programs

    Probabilistic programming provides a convenient lingua franca for writing succinct and rigorous descriptions of probabilistic models and inference tasks. Several probabilistic programming languages, including Anglican, Church, and Hakaru, derive their expressiveness from a powerful combination of continuous distributions, conditioning, and higher-order functions. Although very important for practical applications, these combined features raise fundamental challenges for program semantics and verification. Several recent works offer promising answers to these challenges, but their primary focus is on semantical issues. In this paper, we take a step further and develop a set of program logics, named PPV, for proving properties of programs written in an expressive probabilistic higher-order language with continuous distributions and operators for conditioning distributions by real-valued functions. Pleasingly, our program logics retain the comfortable reasoning style of informal proofs thanks to carefully selected axiomatizations of key results from probability theory. The versatility of our logics is illustrated through the formal verification of several intricate examples from statistics, probabilistic inference, and machine learning. We further show the expressiveness of our logics by giving sound embeddings of existing logics. In particular, we do this in a parametric way by showing how the semantic idea of (unary and relational) ⊤⊤-lifting can be internalized in our logics. The soundness of PPV follows by interpreting programs and assertions in quasi-Borel spaces (QBS), a recently proposed variant of Borel spaces with a good structure for interpreting higher-order probabilistic programs.
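    For readers unfamiliar with this programming style, the following Haskell sketch (our own toy construction, not PPV's syntax or semantics) shows the two ingredients the paper combines: higher-order functions over a probability monad, and conditioning by a real-valued weight via a score-style operator. Only discrete sub-distributions are modeled here, whereas the paper's language handles continuous ones.

```haskell
import Control.Monad (ap, liftM)

-- A weighted discrete sub-distribution monad: a crude stand-in for the
-- continuous-distribution semantics the paper interprets in quasi-Borel spaces.
newtype W a = W { runW :: [(a, Double)] }

instance Functor W where
  fmap = liftM

instance Applicative W where
  pure x = W [(x, 1)]
  (<*>)  = ap

instance Monad W where
  W xs >>= f = W [(y, p * q) | (x, p) <- xs, (y, q) <- runW (f x)]

bernoulli :: Double -> W Bool
bernoulli p = W [(True, p), (False, 1 - p)]

-- Conditioning a distribution by a real-valued weight, score-style.
score :: Double -> W ()
score w = W [((), w)]

-- Higher-order: a model parameterized by another probabilistic function.
biasedIf :: (Bool -> W Double) -> W Double
biasedIf k = do
  b <- bernoulli 0.5
  score (if b then 2 else 1)   -- up-weight the runs where b is True
  k b
```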

    Formally Verified Quantum Programming

    The field of quantum mechanics predates computer science by at least ten years, the time between the publication of the Schrödinger equation and the Church-Turing thesis. It took another fifty years for Feynman to recognize that harnessing quantum mechanics is necessary to efficiently simulate physics and for David Deutsch to propose the quantum Turing machine. After thirty more years, we are finally getting close to the first general-purpose quantum computers based upon prototypes by IBM, Intel, Google and others. While physicists and engineers have worked on building scalable quantum computers, theoretical computer scientists have made their own advances. Complexity theorists introduced quantum complexity classes like BQP and QMA; Shor and Grover developed their famous algorithms for factoring and unstructured search. Programming languages researchers pursued two main research directions: small-scale languages like QPL and the quantum lambda-calculi for reasoning about quantum computation, and large-scale languages like Quipper and Q# for industrial-scale quantum software development. This thesis aims to unify these two threads while adding a third one: formal verification. We argue that quantum programs demand machine-checkable proofs of correctness. We justify this on the basis of the complexity of programs manipulating quantum states, the expense of running quantum programs, and the inapplicability of traditional debugging techniques to programs whose states cannot be examined. We further argue that the existing mathematical models of quantum computation make this an easier task than one could reasonably expect. In light of these observations we introduce QWIRE, a tool for writing verifiable, large-scale quantum programs. QWIRE is not merely a language for writing and verifying quantum circuits: it is a verified circuit description language. This means that the semantics of QWIRE circuits are verified in the Coq proof assistant. We also implement verified abstractions, like ancilla management and reversible circuit compilation. Finally, we turn QWIRE and Coq's abilities outwards, towards verifying popular quantum algorithms like quantum teleportation. We argue that this tool provides a solid foundation for research into quantum programming languages and formal verification going forward.
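    To give a flavor of what a circuit description language is (QWIRE itself is embedded in Coq; the Haskell rendering and names below are entirely ours), a circuit can be a first-class data value built from gate applications and composition, which a separate semantics function can then interpret, and a proof assistant verify.

```haskell
-- A toy circuit description language: circuits are data, not effects.
data Gate = H | X | CNOT
  deriving Show

data Circ = Apply Gate [Int]   -- apply a gate to the listed wires
          | Seq Circ Circ      -- sequential composition
  deriving Show

-- Bell-pair preparation, the first half of quantum teleportation:
-- Hadamard on wire 0, then CNOT with control 0 and target 1.
bellPair :: Circ
bellPair = Apply H [0] `Seq` Apply CNOT [0, 1]
```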

    (Leftmost-Outermost) Beta Reduction is Invariant, Indeed

    Slot and van Emde Boas' weak invariance thesis states that reasonable machines can simulate each other within a polynomial overhead in time. Is lambda-calculus a reasonable machine? Is there a way to measure the computational complexity of a lambda-term? This paper presents the first complete positive answer to this long-standing problem. Moreover, our answer is completely machine-independent and based on a standard notion in the theory of lambda-calculus: the length of a leftmost-outermost derivation to normal form is an invariant cost model. Such a theorem cannot be proved by directly relating lambda-calculus with Turing machines or random access machines, because of the size explosion problem: there are terms that in a linear number of steps produce an exponentially long output. The first step towards the solution is to shift to a notion of evaluation for which the length and the size of the output are linearly related. This is done by adopting the linear substitution calculus (LSC), a calculus of explicit substitutions modeled after linear logic proof nets and admitting a decomposition of leftmost-outermost derivations with the desired property. Thus, the LSC is invariant with respect to, say, random access machines. The second step is to show that the LSC is invariant with respect to the lambda-calculus. The size explosion problem seems to imply that this is not possible: having the same notions of normal form, evaluation in the LSC is exponentially longer than in the lambda-calculus. We solve this impasse by introducing a new form of shared normal form and shared reduction, deemed useful. Useful evaluation avoids those steps that only unshare the output without contributing to beta-redexes, i.e., the steps that cause the blow-up in size. The main technical contribution of the paper is indeed the definition of useful reductions and the thorough analysis of their properties.
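    For concreteness, here is a minimal Haskell sketch (ours, not the paper's) of one leftmost-outermost beta step on de Bruijn terms, the notion of derivation whose length the paper proves to be an invariant cost model. Iterating such steps naively materializes the output in full, which is exactly where the size explosion bites and why the paper moves to shared, useful reduction in the LSC.

```haskell
-- Lambda terms in de Bruijn notation.
data T = V Int | L T | A T T
  deriving Show

-- Shift free indices >= c by d.
shift :: Int -> Int -> T -> T
shift d c (V k)   = V (if k < c then k else k + d)
shift d c (L t)   = L (shift d (c + 1) t)
shift d c (A t u) = A (shift d c t) (shift d c u)

-- Substitute s for index j.
subst :: Int -> T -> T -> T
subst j s (V k)   = if k == j then s else V k
subst j s (L t)   = L (subst (j + 1) (shift 1 0 s) t)
subst j s (A t u) = A (subst j s t) (subst j s u)

-- One leftmost-outermost beta step, if a redex exists: contract the head
-- redex, otherwise recurse into the left subterm first, then the right.
step :: T -> Maybe T
step (A (L t) u) = Just (shift (-1) 0 (subst 0 (shift 1 0 u) t))
step (A t u)     = case step t of
                     Just t' -> Just (A t' u)
                     Nothing -> A t <$> step u
step (L t)       = L <$> step t
step (V _)       = Nothing
```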