
    Strong normalisation for applied lambda calculi

    We consider the untyped lambda calculus with constructors and recursively defined constants. We construct a domain-theoretic model such that any term not denoting bottom is strongly normalising provided all its `stratified approximations' are. From this we derive a general normalisation theorem for applied typed lambda-calculi: if all constants have a total value, then all typeable terms are strongly normalising. We apply this result to extensions of Gödel's system T and system F extended by various forms of bar recursion for which strong normalisation was hitherto unknown. Comment: 14 pages, paper accepted at the electronic journal LMCS
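    As an illustration of the setting (a minimal sketch in Haskell, not the paper's construction): Gödel's system T has a recursor over the naturals as a recursively defined constant, and the theorem's hypothesis that every constant has a total value corresponds to that recursor terminating on all well-typed arguments.

        -- Natural numbers and the system T recursor, rendered in Haskell.
        data Nat = Zero | Succ Nat

        -- Primitive recursion at an arbitrary result type; this is the kind of
        -- recursively defined constant the normalisation theorem quantifies over.
        rec :: a -> (Nat -> a -> a) -> Nat -> a
        rec z _ Zero     = z
        rec z s (Succ n) = s n (rec z s n)

        -- Example: addition defined from the recursor; the constant is total,
        -- i.e. every call terminates.
        plus :: Nat -> Nat -> Nat
        plus m = rec m (\_ r -> Succ r)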

    An Embedding of the BSS Model of Computation in Light Affine Lambda-Calculus

    This paper brings together two lines of research: implicit characterization of complexity classes by Linear Logic (LL) on the one hand, and computation over an arbitrary ring in the Blum-Shub-Smale (BSS) model on the other. Given a fixed ring structure K, we define an extension of Terui's light affine lambda-calculus typed in LAL (Light Affine Logic) with a basic type for K. We show that this calculus captures the polynomial time function class FP(K): every typed term can be evaluated in polynomial time, and conversely every polynomial time BSS machine over K can be simulated in this calculus. Comment: 11 pages. A preliminary version appeared as Research Report IAC CNR Roma, N.57 (11/2004), November 2004
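    A minimal sketch of the objects involved, assuming we model the ring parameter K as a Haskell type class (illustration only; the paper works inside a typed lambda-calculus, and the names below are hypothetical):

        -- The ring structure K that the calculus is parameterised by.
        class Ring k where
          rzero, rone      :: k
          radd, rmul, rsub :: k -> k -> k

        instance Ring Double where
          rzero = 0; rone = 1
          radd = (+); rmul = (*); rsub = (-)

        -- A typical member of FP(K): Horner evaluation of a polynomial given by
        -- its coefficients, using only ring operations and polynomially many steps.
        horner :: Ring k => [k] -> k -> k
        horner cs x = foldr (\c acc -> radd c (rmul x acc)) rzero cs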

    Simple Parsimonious Types and Logarithmic Space

    We present a functional characterization of deterministic logspace-computable predicates based on a variant (although not a subsystem) of propositional linear logic, which we call parsimonious logic. The resulting calculus is simply typed and contains no primitives besides those provided by the underlying logical system, which makes it one of the simplest higher-order languages capturing logspace currently known. Completeness of the calculus uses the descriptive complexity characterization of logspace (we encode first-order logic with deterministic closure), whereas soundness is established by executing terms on a token machine (using the geometry of interaction).
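    A minimal sketch of the descriptive-complexity ingredient used for completeness (an illustration in Haskell, not the paper's encoding): deterministic transitive closure is decided by following the unique successor of a node for at most n steps, storing only the current node and a counter, which is what keeps first-order logic with deterministic closure inside logspace.

        -- Reachability along a deterministic ("at most one successor") relation,
        -- bounded by the size n of the domain. Hypothetical helper for illustration.
        dtc :: Eq a => (a -> Maybe a) -> Int -> a -> a -> Bool
        dtc next n x y = go n x
          where
            go k z
              | z == y    = True
              | k <= 0    = False
              | otherwise = maybe False (go (k - 1)) (next z)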

    Step-Indexed Logical Relations for Probability (long version)

    It is well known that constructing models of higher-order probabilistic programming languages is challenging. We show how to construct step-indexed logical relations for a probabilistic extension of a higher-order programming language with impredicative polymorphism and recursive types. We show that the resulting logical relation is sound and complete with respect to the contextual preorder and, moreover, that it is convenient for reasoning about concrete program equivalences. Finally, we extend the language with dynamically allocated first-order references and show how to extend the logical relation to this language. We show that the resulting relation remains useful for reasoning about examples involving both state and probabilistic choice. Comment: Extended version, with appendix, of a FoSSaCS'15 paper
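    One ingredient such relations rest on can be shown concretely (a minimal sketch of the general technique, not the paper's definition): the probability that a program terminates within k steps, here computed for a toy language with a fair binary choice; step-indexed relations roughly compare these bounded termination probabilities against unbounded ones.

        -- A toy probabilistic language: values, a fair coin flip, and a unit of work.
        data P = Val Int | Flip P P | Step P

        -- Probability of reaching a value in at most k steps; monotone in k.
        termWithin :: Int -> P -> Double
        termWithin _ (Val _)    = 1
        termWithin 0 _          = 0
        termWithin k (Step p)   = termWithin (k - 1) p
        termWithin k (Flip l r) = 0.5 * termWithin (k - 1) l
                                + 0.5 * termWithin (k - 1) r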

    Static and dynamic semantics of NoSQL languages

    We present a calculus for processing semistructured data that spans differences of application area among several novel query languages, broadly categorized as "NoSQL". This calculus lets users define their own operators, capturing a wider range of data processing capabilities, whilst providing a typing precision so far typical only of primitive hard-coded operators. The type inference algorithm is based on semantic type checking, resulting in type information that is both precise and flexible enough to handle structured and semistructured data. We illustrate the use of this calculus by encoding a large fragment of Jaql, including operations and iterators over JSON, embedded SQL expressions, and co-grouping, and show how the encoding directly yields a typing discipline for Jaql as it is, namely without the addition of any type definition or type annotation in the code.
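    As a sketch of the kind of user-defined operator the calculus is meant to type (plain Haskell over a JSON-like type, not the calculus itself; the names are illustrative):

        import qualified Data.Map.Strict as M

        -- A JSON-like value type.
        data JValue = JNull | JBool Bool | JNum Double | JStr String
                    | JArr [JValue] | JObj [(String, JValue)]
          deriving (Eq, Show)

        -- Co-grouping in the style of Jaql's cogroup: pair up the elements of two
        -- collections that share a key.
        cogroup :: Ord k => (a -> k) -> (b -> k) -> [a] -> [b] -> [(k, ([a], [b]))]
        cogroup ka kb xs ys = M.toList (M.unionWith (<>) mx my)
          where
            mx = M.fromListWith (<>) [ (ka x, ([x], [])) | x <- xs ]
            my = M.fromListWith (<>) [ (kb y, ([], [y])) | y <- ys ]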

    Formalisation of FunLoft

    We formalise a thread-based concurrent language which makes resource control possible. Concurrency is based on a two-level model: threads are executed cooperatively when linked to a scheduler, while unlinked threads and schedulers are executed preemptively, under the control of the OS. We present a type and effect system to enforce a logical separation of the memory which ensures that (1) when running in preemptive mode, threads do not interfere with other threads; (2) threads linked to a scheduler do not interfere with threads linked to another scheduler. Thus, we get a concurrency model in which well-typed programs are free from data races. The type system also ensures that well-typed programs are bounded in memory and in their use of the CPU. Detection of termination of recursive functions and stratification of references in memory are the techniques used to obtain these properties.
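    The memory-separation idea can be mimicked with a phantom type parameter (a minimal sketch in plain Haskell, not FunLoft's type and effect system; all names are hypothetical): a reference carries the scheduler it belongs to, so a thread typed against one scheduler cannot touch references owned by another.

        import Data.IORef

        newtype Ref s a    = Ref (IORef a)                 -- s: owning scheduler (phantom)
        newtype Thread s a = Thread { runThread :: IO a }  -- a thread linked to scheduler s

        newRef :: a -> Thread s (Ref s a)
        newRef x = Thread (Ref <$> newIORef x)

        readRef :: Ref s a -> Thread s a
        readRef (Ref r) = Thread (readIORef r)

        writeRef :: Ref s a -> a -> Thread s ()
        writeRef (Ref r) x = Thread (writeIORef r x)

        -- A Ref tagged with one scheduler cannot be used in a Thread tagged with
        -- another: the two phantom parameters would fail to unify.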

    Levity Polymorphism (extended version)

    Parametric polymorphism is one of the lynchpins of modern typed programming. A function that can work seamlessly over a variety of types simplifies code, helps to avoid errors introduced through duplication, and is easy to maintain. However, polymorphism comes at a very real cost, one that each language with support for polymorphism has paid in different ways. This paper describes this cost, proposes a theoretically simple way to reason about it—that kinds, not types, are calling conventions—and details one approach to dealing with polymorphism that works in the context of a language, Haskell, that prizes both efficiency and a principled type system. This approach, levity polymorphism, allows the user to abstract over calling conventions; we detail and verify restrictions that are necessary in order to compile levity-polymorphic functions. Levity polymorphism has opened up surprising new opportunities for library design in Haskell.
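    The feature can be seen in a few lines of GHC Haskell (a minimal sketch of representation polymorphism as GHC exposes it; apply mirrors the levity-polymorphic type of ($)):

        {-# LANGUAGE MagicHash, DataKinds, PolyKinds, KindSignatures, ExplicitForAll #-}
        module LevityDemo where

        import Data.Kind (Type)
        import GHC.Exts (TYPE, RuntimeRep, Int#, Int (I#), (+#))

        -- The result kind abstracts over its runtime representation r, so b may be
        -- boxed or unboxed.
        apply :: forall (r :: RuntimeRep) (a :: Type) (b :: TYPE r). (a -> b) -> a -> b
        apply f x = f x

        -- Instantiated at a boxed result ...
        boxed :: Int
        boxed = apply (+ 1) (41 :: Int)

        -- ... and at an unboxed one. The restrictions the paper verifies show up
        -- here: applying a levity-polymorphic function is fine, but binding a
        -- variable whose representation is still unknown is rejected.
        unboxed :: Int
        unboxed = I# (apply (\(I# n) -> n +# 1#) 41)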