
    Normalization by evaluation for call-by-push-value and polarized lambda calculus

    We observe that normalization by evaluation for simply-typed lambda-calculus with weak coproducts can be carried out in a weak bi-cartesian closed category of presheaves equipped with a monad that allows us to perform case distinction on neutral terms of sum type. The placement of the monad influences the normal forms we obtain: for instance, placing the monad on coproducts gives us eta-long beta-pi normal forms, where pi refers to the permutation of case distinctions out of elimination positions. We further observe that placing the monad on every coproduct is rather wasteful, and an optimal placement of the monad can be determined by considering polarized simple types inspired by focalization. Polarization classifies types into positive and negative, and it is sufficient to place the monad at the embedding of positive types into negative ones. We consider two calculi based on polarized types: pure call-by-push-value (CBPV) and polarized lambda-calculus, the natural deduction calculus corresponding to focalized sequent calculus. For these two calculi, we present algorithms for normalization by evaluation. We further discuss different implementations of the monad and their relation to existing normalization proofs for lambda-calculus with sums. Our developments have been partially formalized in the Agda proof assistant.
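
    For readers new to the technique, the following is a minimal normalization-by-evaluation sketch in Haskell for plain simply-typed lambda terms: evaluate into a semantic domain, then read back. It shows only the untyped core of the idea (beta-normal forms; no coproducts, no monad placement, no eta); all names are illustrative and not taken from the paper.

```haskell
-- A minimal normalization-by-evaluation kernel for the lambda calculus.
-- Untyped read-back gives beta-normal forms only; the paper's typed,
-- monadic treatment is what handles eta and weak coproducts.
{-# LANGUAGE LambdaCase #-}

data Tm = Var Int | Lam Tm | App Tm Tm   -- de Bruijn indices
  deriving Show

data Val = VLam (Val -> Val)             -- functions become host functions
         | VNe Ne                        -- stuck (neutral) values

data Ne = NVar Int | NApp Ne Val         -- variables as de Bruijn levels

eval :: [Val] -> Tm -> Val
eval env = \case
  Var i   -> env !! i
  Lam b   -> VLam (\v -> eval (v : env) b)
  App f a -> case eval env f of
    VLam g -> g (eval env a)             -- beta reduction, for free
    VNe n  -> VNe (NApp n (eval env a))  -- application stays stuck

-- Read back (quote) a semantic value as a normal term.
quote :: Int -> Val -> Tm
quote lvl (VLam f) = Lam (quote (lvl + 1) (f (VNe (NVar lvl))))
quote lvl (VNe n)  = quoteNe lvl n

quoteNe :: Int -> Ne -> Tm
quoteNe lvl (NVar k)   = Var (lvl - k - 1)   -- level to index
quoteNe lvl (NApp n v) = App (quoteNe lvl n) (quote lvl v)

-- norm (App (Lam (Var 0)) (Lam (Var 0)))  ==  Lam (Var 0)
norm :: Tm -> Tm
norm = quote 0 . eval []
```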

    On Irrelevance and Algorithmic Equality in Predicative Type Theory

    Dependently typed programs contain an excessive number of static terms which are necessary to please the type checker but irrelevant for computation. To separate static and dynamic code, several static analyses and type systems have been put forward. We consider Pfenning's type theory with irrelevant quantification, which is compatible with a type-based notion of equality that respects eta-laws. We extend Pfenning's theory to universes and large eliminations and develop its meta-theory. Subject reduction, normalization and consistency are obtained by a Kripke model over the typed equality judgement. Finally, a type-directed equality algorithm is described whose completeness is proven by a second Kripke model. (Comment: 36 pages; supersedes the FoSSaCS 2011 paper of the first author, titled "Irrelevance in Type Theory with a Heterogeneous Equality Judgement".)
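
    The effect of irrelevance on equality checking can be illustrated with a toy comparison function: the single clause for an irrelevant application skips its argument entirely. This is only a sketch of the general idea with made-up names; the paper's algorithm is type-directed and also handles eta-laws.

```haskell
-- Terms with a second, irrelevant application form AppI. The equality
-- test compares the function parts of AppI but skips the arguments:
-- irrelevant positions are guaranteed not to matter for computation.
data Tm = Var Int | Lam Tm | App Tm Tm | AppI Tm Tm
  deriving Show

eq :: Tm -> Tm -> Bool
eq (Var i)    (Var j)    = i == j
eq (Lam a)    (Lam b)    = eq a b
eq (App f a)  (App g b)  = eq f g && eq a b
eq (AppI f _) (AppI g _) = eq f g   -- irrelevant arguments are ignored
eq _          _          = False

-- eq (AppI (Var 0) (Var 1)) (AppI (Var 0) (Var 2))  ==  True
```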

    Redundancy Elimination for LF

    We present a type system extending the dependent type theory LF, whose terms are more amenable to compact representation. This is achieved by carefully omitting certain subterms which are redundant in the sense that they can be recovered from the types of other subterms. This system is capable of omitting more redundant information than previous work in the same vein, because of its uniform treatment of higher-order and first-order terms. Moreover, the 'recipe' for reconstruction of omitted information is encoded directly into annotations on the types in a signature. This brings to light connections between the bidirectional (synthesis vs. checking) typing algorithms of the object language on the one hand, and the bidirectional flow of information in the ambient encoding language on the other. The resulting system is a compromise seeking to retain both the effectiveness of full unification-based term reconstruction such as is found in implementation practice, and the logical simplicity of pure LF.
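
    The synthesis-versus-checking distinction the abstract alludes to can be made concrete with a textbook bidirectional checker for simple types (a generic sketch, not the LF system of the paper; all names are ours). Note how the checking direction lets a lambda omit its domain annotation: a small-scale instance of redundancy that is recoverable from surrounding types.

```haskell
-- Synthesis (synth) outputs a type; checking (check) receives one.
data Ty = Base | Arr Ty Ty deriving (Eq, Show)

data Tm = Var Int | Lam Tm | App Tm Tm | Ann Tm Ty

synth :: [Ty] -> Tm -> Maybe Ty
synth ctx (Var i)
  | i < length ctx = Just (ctx !! i)
  | otherwise      = Nothing
synth ctx (App f a) = do
  Arr dom cod <- synth ctx f     -- the function's type is synthesized
  check ctx a dom                -- the argument is merely checked
  pure cod
synth ctx (Ann t ty) = check ctx t ty >> pure ty
synth _   (Lam _)    = Nothing   -- lambdas check; they never synthesize

check :: [Ty] -> Tm -> Ty -> Maybe ()
check ctx (Lam b) (Arr dom cod) = check (dom : ctx) b cod  -- domain recovered
check ctx t ty = do
  ty' <- synth ctx t
  if ty == ty' then pure () else Nothing
```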

    The (In)Efficiency of Interaction

    Evaluating higher-order functional programs through abstract machines inspired by the Geometry of Interaction is known to induce space efficiencies, the price being time performances often poorer than those obtainable with traditional, environment-based abstract machines. Although families of lambda-terms for which the former is exponentially less efficient than the latter do exist, it is currently unknown how general this phenomenon is, and how far the inefficiencies can go in the worst case. We answer these questions by formulating four different well-known abstract machines inside a common definitional framework, which allows us to give sharp results about their relative time efficiencies. We also prove that non-idempotent intersection type theories are able to precisely reflect the time performance of the interactive abstract machine, thereby showing that its time inefficiency ultimately descends from the presence of higher-order types.
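
    As one concrete instance of the "traditional, environment-based" machines the paper compares against, here is a sketch of a Krivine machine for call-by-name evaluation. It is a standard textbook construction under our own names, not necessarily one of the four machines formalized in the paper, and it assumes closed terms.

```haskell
-- A Krivine machine: a state is a closure (term plus environment)
-- together with a stack of pending argument closures.
data Tm = Var Int | Lam Tm | App Tm Tm deriving Show

data Clo = Clo Tm Env deriving Show
type Env = [Clo]
type Stack = [Clo]

step :: (Clo, Stack) -> Maybe (Clo, Stack)
step (Clo (Var i)   env, s)     = Just (env !! i, s)              -- look up
step (Clo (App f a) env, s)     = Just (Clo f env, Clo a env : s) -- push arg
step (Clo (Lam b)   env, c : s) = Just (Clo b (c : env), s)       -- pop arg
step (Clo (Lam _)   _,   [])    = Nothing                        -- done: whnf

-- Iterate to a final state (weak head normal form for closed terms).
run :: Tm -> (Clo, Stack)
run t = go (Clo t [], [])
  where go st = maybe st go (step st)
```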

    Polynomial Time and Dependent Types

    We combine dependent types with linear type systems that soundly and completely capture polynomial time computation. We explore two systems for capturing polynomial time: one system that disallows construction of iterable data, and one, based on the LFPL system of Martin Hofmann, that controls construction via a payment method. Both of these are extended to full dependent types via Quantitative Type Theory, allowing for arbitrary computation in types alongside guaranteed polynomial time computation in terms. We prove the soundness of the systems using a realisability technique due to Dal Lago and Hofmann. Our long-term goal is to combine the extensional reasoning of type theory with intensional reasoning about the resources intrinsically consumed by programs. This paper is a step along this path, which we hope will lead both to practical systems for reasoning about programs' resource usage, and to theoretical use as a form of synthetic computational complexity theory.
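
    One standard ingredient of Quantitative Type Theory is a usage semiring with elements 0, 1 and omega. The toy Haskell encoding below shows just that resource accounting, under our own names; it is not a rendering of either of the paper's two systems.

```haskell
-- The 0/1/omega usage semiring often used to present QTT: Zero marks
-- erased (compile-time only) use, One linear use, Many unrestricted use.
data Use = Zero | One | Many deriving (Eq, Show)

-- Addition combines the uses of a variable across subterms.
addU :: Use -> Use -> Use
addU Zero u    = u
addU u    Zero = u
addU _    _    = Many     -- 1 + 1 already overflows to omega

-- Multiplication scales usage through an application.
mulU :: Use -> Use -> Use
mulU Zero _    = Zero
mulU _    Zero = Zero
mulU One  u    = u
mulU u    One  = u
mulU _    _    = Many
```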

    Normalisation by Evaluation in the Compilation of Typed Functional Programming Languages

    This thesis presents a critical analysis of normalisation by evaluation as a technique for speeding up compilation of typed functional programming languages. Our investigation focuses on the SML.NET compiler and its typed intermediate language MIL. We implement and measure the performance of normalisation by evaluation for MIL across a range of benchmarks. Taking a different approach, we also implement and measure the performance of a graph-based shrinking reductions algorithm for SML.NET.

    MIL is based on Moggi's computational metalanguage. As a stepping stone to normalisation by evaluation, we investigate strong normalisation of the computational metalanguage by introducing an extension of Girard-Tait reducibility. Inspired by previous work on local state and parametric polymorphism, we define reducibility for continuations and, more generally, reducibility for frame stacks. First we prove strong normalisation for the computational metalanguage. Then we extend that proof to include features of MIL such as sums and exceptions.

    Taking an incremental approach, we construct a collection of increasingly sophisticated normalisation by evaluation algorithms, culminating in a range of normalisation algorithms for MIL. Congruence rules and alpha-rules are captured by a compositional parameterised semantics. Defunctionalisation is used to eliminate eta-rules. Normalisation by evaluation for the computational metalanguage is introduced using a monadic semantics. Variants in which the monadic effects are made explicit, using either state or control operators, are also considered. Previous implementations of normalisation by evaluation with sums have relied on continuation-passing style or control operators. We present a new algorithm which instead uses a single reference cell and a zipper structure. This suggests a possible alternative way of implementing Filinski's monadic reflection operations.

    In order to obtain benchmark results without having to take into account all of the features of MIL, we implement two different techniques for eliding language constructs. The first is not semantics-preserving, but is effective for assessing the efficiency of normalisation by evaluation algorithms. The second is semantics-preserving, but less flexible. In common with many intermediate languages, but unlike the computational metalanguage, MIL requires all non-atomic values to be named. We use either control operators or state to ensure each non-atomic value is named.

    We assess our normalisation by evaluation algorithms by comparing them with a spectrum of progressively more optimised, rewriting-based normalisation algorithms. The SML.NET front-end is used to generate MIL code from ML programs, including the SML.NET compiler itself. Each algorithm is then applied to the generated MIL code. Normalisation by evaluation always performs faster than the most naive algorithms, often by orders of magnitude. Some of the algorithms are slightly faster than normalisation by evaluation; closer inspection reveals that these are in fact defunctionalised versions of normalisation by evaluation algorithms.

    Our normalisation by evaluation algorithms perform unrestricted inlining of functions. Unrestricted inlining can lead to a super-exponential blow-up in the size of target code with respect to the source. Furthermore, the worst-case complexity of compilation with unrestricted inlining is non-elementary in the size of the source code. SML.NET alleviates both problems by using a restricted form of normalisation based on Appel and Jim's shrinking reductions. The original algorithm is quadratic in the worst case. Using a graph-based representation for terms, we implement a compositional linear algorithm. This speeds up the time taken to perform shrinking reductions by up to a factor of fourteen, which leads to an improvement of up to forty percent in total compile time.
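
    The flavour of Appel and Jim's shrinking reductions can be conveyed by a naive tree-based pass that drops dead let-bindings and inlines bindings used exactly once. The sketch below assumes globally unique names (so naive substitution cannot capture) and pure bound terms (so a dead binding may be discarded), and it is not linear-time; the point of the thesis's graph-based representation is to perform these reductions in linear time.

```haskell
-- Named terms with non-recursive let, plus the two shrinking rules.
data Tm = Var String | Lam String Tm | App Tm Tm | Let String Tm Tm
  deriving Show

occurs :: String -> Tm -> Int
occurs x (Var y)     = if x == y then 1 else 0
occurs x (Lam y b)   = if x == y then 0 else occurs x b
occurs x (App f a)   = occurs x f + occurs x a
occurs x (Let y e b) = occurs x e + (if x == y then 0 else occurs x b)

subst :: String -> Tm -> Tm -> Tm   -- safe only under unique names
subst x v (Var y)     = if x == y then v else Var y
subst x v (Lam y b)   = if x == y then Lam y b else Lam y (subst x v b)
subst x v (App f a)   = App (subst x v f) (subst x v a)
subst x v (Let y e b) = Let y (subst x v e)
                              (if x == y then b else subst x v b)

shrink :: Tm -> Tm
shrink (Let x e b)
  | n == 0    = shrink b                -- dead binding: drop it
  | n == 1    = shrink (subst x e b)    -- used exactly once: inline it
  | otherwise = Let x (shrink e) (shrink b)
  where n = occurs x b
shrink (Lam x b) = Lam x (shrink b)
shrink (App f a) = App (shrink f) (shrink a)
shrink (Var x)   = Var x
```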

    Foundations of Information-Flow Control and Effects

    In programming language research, information-flow control (IFC) is a technique for enforcing a variety of security aspects, such as confidentiality of data, on programs. This Licentiate thesis makes novel contributions to the theory and foundations of IFC in the following ways: Chapter A presents a new proof method for showing the usual desired property of noninterference; Chapter B shows how to securely extend the concurrent IFC language MAC with asynchronous exceptions; and Chapter C presents a new and simpler language for IFC with effects, based on an explicit separation of pure and effectful computations.
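
    The separation of pure and effectful computations can be sketched with a miniature static IFC kernel in the style of libraries such as MAC: a pure labelled value, an effectful computation type indexed by a label, and an unlabel operation gated by the flow lattice. All names below are illustrative, not the thesis's calculus.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- Two security levels and the permitted flows between them.
data Low  = Low
data High = High

class Flows from to
instance Flows Low  Low
instance Flows Low  High
instance Flows High High   -- crucially, no Flows High Low instance

newtype Labeled l a = Labeled a              -- pure data at label l
newtype IFC l a = IFC { runIFC :: IO a }     -- effectful code at label l

label :: a -> Labeled l a
label = Labeled

-- Reading only succeeds upward in the lattice, so a Low computation
-- can never observe High data; any violation is a type error.
unlabel :: Flows l l' => Labeled l a -> IFC l' a
unlabel (Labeled a) = IFC (pure a)

-- ok  = unlabel (label 42 :: Labeled Low Int)  :: IFC High Int
-- bad = unlabel (label 42 :: Labeled High Int) :: IFC Low Int  -- rejected
```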

    On Induction, Coinduction and Equality in Martin-Löf and Homotopy Type Theory

    Martin-Löf Type Theory, having put computation at the center of logical reasoning, has been shown to be an effective foundation for proof assistants, with applications both in computer science and constructive mathematics. One ambition, though, is for MLTT to also double as a practical general purpose programming language. Datatypes in type theory come with an induction or coinduction principle which gives a precise and concise specification of their interface. However, such principles can interfere with how we would like to express our programs. In this thesis, we investigate more flexible alternatives to direct uses of the (co)induction principles.

    As a first contribution, we consider the n-truncation of a type in Homotopy Type Theory. We derive in HoTT an eliminator into (n+1)-truncated types instead of n-truncated ones, assuming extra conditions on the underlying function.

    As a second contribution, we improve on type-based criteria for termination and productivity. By augmenting the types with well-foundedness information, such criteria allow function definitions in a style closer to general recursion. We consider two criteria: guarded types, and sized types.

    Guarded types introduce a modality "later" to guard the availability of recursive calls provided by a general fixed-point combinator. In Guarded Cubical Type Theory we equip the fixed-point combinator with a propositional equality to its one-step unfolding, instead of a definitional equality that would break normalization. The notion of path from Cubical Type Theory allows us to do so without losing canonicity or decidability of conversion.

    Sized types, on the other hand, explicitly index datatypes with size bounds on the height or depth of their elements. The sizes, however, can get in the way of the reasoning principles we expect. Our approach is to introduce new quantifiers for "irrelevant" size quantification. We present a type theory with parametric quantifiers where irrelevance arises as a "free theorem". We also develop a conversion checking algorithm for a more specific theory where the new quantifiers are restricted to sizes.

    Finally, our third contribution is about the operational semantics of type theory. For the extensions above we would like to devise a practical conversion checking algorithm suitable for integration into a proof assistant. We formalized the correctness of such an algorithm for a small but challenging core calculus, proving that conversion is decidable. We expect this development to form a good basis to verify more complex theories.

    The ideas discussed in this thesis are already influencing the development of Agda, a proof assistant based on type theory.
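
    The "later" modality and its guarded fixed-point combinator can be approximated in Haskell, where laziness stands in for the modality; productivity is then a convention rather than a theorem, unlike in Guarded Cubical Type Theory. A hypothetical sketch, with all names ours:

```haskell
-- Later is morally the guarded "later" modality; here it is just a
-- lazy box, so nothing stops a non-productive definition.
newtype Later a = Later a

-- Guarded fixed point: the recursive occurrence is only available
-- one step later, which is what makes definitions productive.
lfix :: (Later a -> a) -> a
lfix f = f (Later (lfix f))

data Stream a = Cons a (Later (Stream a))

ones :: Stream Integer
ones = lfix (Cons 1)   -- well-defined: the tail sits under Later

takeS :: Int -> Stream a -> [a]
takeS n _ | n <= 0          = []
takeS n (Cons x (Later xs)) = x : takeS (n - 1) xs

-- takeS 3 ones == [1,1,1]
```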