4,062 research outputs found
A journey through resource control lambda calculi and explicit substitution using intersection types (an account)
In this paper we invite the reader to a journey through three lambda calculi with resource control: the lambda calculus, the sequent lambda calculus, and the lambda calculus with explicit substitution. All three calculi enable explicit control of resources due to the presence of weakening and contraction operators. Along this journey, we propose intersection type assignment systems for all three resource control calculi. We recognise the need for three kinds of variables, each requiring a different kind of intersection type. Our main contribution is the characterisation of strong normalisation of reductions in all three calculi, using the techniques of reducibility, head subject expansion, and a combination of well-orders and suitable embeddings of terms.
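The three roles of variables mentioned above can be illustrated with a minimal sketch (all names and the term encoding are ours, not the paper's syntax): a binder's variable is erased (handled by weakening), used linearly, or duplicated (handled by contraction), according to how many times it occurs in the body.

```python
# Classify a lambda abstraction's variable by its number of uses.
# Terms are nested tuples: ('var', x) | ('lam', x, body) | ('app', f, a).

def occurrences(term, name):
    """Count free occurrences of `name` in `term`."""
    kind = term[0]
    if kind == 'var':
        return 1 if term[1] == name else 0
    if kind == 'lam':
        return 0 if term[1] == name else occurrences(term[2], name)
    return occurrences(term[1], name) + occurrences(term[2], name)

def role(term):
    """Report whether an abstraction's variable is erased (0 uses),
    linear (1 use), or duplicated (>1 uses)."""
    assert term[0] == 'lam'
    n = occurrences(term[2], term[1])
    return 'erased' if n == 0 else 'linear' if n == 1 else 'duplicated'

K = ('lam', 'x', ('lam', 'y', ('var', 'x')))            # λx.λy.x : y is erased
W = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))   # λx.x x  : x is duplicated
```

Each role then calls for a different kind of intersection type in the assignment systems the paper proposes.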
Resource control and intersection types: an intrinsic connection
In this paper we investigate the resource control lambda calculus, a lambda calculus enriched with resource control. Explicit control of resources is enabled by the presence of erasure and duplication operators, which correspond to thinning and contraction rules in the type assignment system. We introduce the class of resource control terms directly, and we provide a new treatment of substitution by its decomposition into atomic steps. We propose an intersection type assignment system for the resource control lambda calculus which makes a clear correspondence between three roles of variables and three kinds of intersection types. Finally, we provide the characterisation of strong normalisation in the resource control lambda calculus by means of an intersection type assignment system. This process uses typeability of normal forms, redex subject expansion, and the reducibility method.
Comment: arXiv admin note: substantial text overlap with arXiv:1306.228
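The decomposition of substitution into atomic steps can be sketched as in explicit substitution calculi (this is an illustration under our own encoding, not the paper's calculus): a closure `('sub', t, x, u)` denoting t[x := u] is propagated through the term one constructor at a time rather than substituted in one go.

```python
# One atomic propagation step on the outermost explicit substitution.
# Nested substitutions and capture-avoiding renaming are out of scope
# for this sketch; the example term is closed enough to avoid both.

def step(term):
    """Propagate the outermost substitution by one step, or return
    the term unchanged if no rule applies."""
    if term[0] != 'sub':
        return term
    _, t, x, u = term
    if t[0] == 'var':
        return u if t[1] == x else t      # hit the variable, or garbage-collect
    if t[0] == 'app':
        return ('app', ('sub', t[1], x, u), ('sub', t[2], x, u))  # split
    if t[0] == 'lam' and t[1] != x:
        return ('lam', t[1], ('sub', t[2], x, u))                 # go under binder
    return term

def normalise_subs(term):
    """Drive all explicit substitutions to completion, step by step."""
    if term[0] == 'sub':
        return normalise_subs(step(term))
    if term[0] == 'app':
        return ('app', normalise_subs(term[1]), normalise_subs(term[2]))
    if term[0] == 'lam':
        return ('lam', term[1], normalise_subs(term[2]))
    return term

# (x y)[x := z] reaches (z y) through two atomic variable steps
example = ('sub', ('app', ('var', 'x'), ('var', 'y')), 'x', ('var', 'z'))
```

Making each step explicit is what lets the type system track resource usage across substitution.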
Confluence via strong normalisation in an algebraic lambda-calculus with rewriting
The linear-algebraic lambda-calculus and the algebraic lambda-calculus are
untyped lambda-calculi extended with arbitrary linear combinations of terms.
The former presents the axioms of linear algebra in the form of a rewrite
system, while the latter uses equalities. When given by rewrites, algebraic
lambda-calculi are not confluent unless further restrictions are added. We
provide a type system for the linear-algebraic lambda-calculus enforcing strong
normalisation, which restores confluence. The type system allows an abstract
interpretation in System F.
Comment: In Proceedings LSFA 2011, arXiv:1203.542
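The "arbitrary linear combinations of terms" above can be modelled with a small sketch (our own illustration, not the paper's rewrite system): the linear-algebra axioms, oriented as rewrites, collect like terms, merge coefficients, and erase zero coefficients.

```python
# Normal form of a formal linear combination of terms, with the
# linear-algebra axioms applied as directed rewrites: like terms are
# merged and zero coefficients vanish. Terms are opaque labels here.

from collections import Counter

def combine(*scaled_terms):
    """Take (coefficient, term) pairs and return the normalised
    combination as a {term: coefficient} mapping."""
    acc = Counter()
    for coeff, term in scaled_terms:
        acc[term] += coeff
    return {t: c for t, c in acc.items() if c != 0}

# 2.x + 3.x + 1.y rewrites to 5.x + 1.y, and x - x rewrites to 0
```

In the untyped calculus such rewrites interact badly with unrestricted fixpoints, which is why confluence needs the strong normalisation enforced by the paper's type system.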
Call-by-value non-determinism in a linear logic type discipline
We consider the call-by-value lambda-calculus extended with a may-convergent
non-deterministic choice and a must-convergent parallel composition. Inspired
by recent works on the relational semantics of linear logic and non-idempotent
intersection types, we endow this calculus with a type system based on the
so-called Girard's second translation of intuitionistic logic into linear
logic. We prove that a term is typable if and only if it is converging, and
that its typing tree carries enough information to give a bound on the length
of its lazy call-by-value reduction. Moreover, when the typing tree is minimal,
such a bound becomes the exact length of the reduction.
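The quantity that the typing tree bounds, the length of the lazy call-by-value reduction, can be made concrete with a small step-counting evaluator (a sketch under our own term encoding; capture-avoiding renaming is omitted since the example terms are closed).

```python
# Weak (lazy) call-by-value evaluation with beta-step counting.
# Terms: ('var', x) | ('lam', x, body) | ('app', f, a).

def subst(term, x, value):
    kind = term[0]
    if kind == 'var':
        return value if term[1] == x else term
    if kind == 'lam':
        return term if term[1] == x else ('lam', term[1], subst(term[2], x, value))
    return ('app', subst(term[1], x, value), subst(term[2], x, value))

def eval_cbv(term, steps=0):
    """Reduce the function, then the argument, then fire beta; never
    reduce under a lambda. Returns (value, number_of_beta_steps)."""
    if term[0] != 'app':
        return term, steps
    f, steps = eval_cbv(term[1], steps)
    a, steps = eval_cbv(term[2], steps)
    assert f[0] == 'lam', "stuck term"
    return eval_cbv(subst(f[2], f[1], a), steps + 1)

I = ('lam', 'x', ('var', 'x'))
term = ('app', ('app', I, I), I)   # (I I) I takes exactly two beta steps
```

A non-idempotent intersection typing of such a term records one type occurrence per use of each argument, which is how the derivation's size bounds this step count.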
Full Abstraction for the Resource Lambda Calculus with Tests, through Taylor Expansion
We study the semantics of a resource-sensitive extension of the lambda
calculus in a canonical reflexive object of a category of sets and relations, a
relational version of Scott's original model of the pure lambda calculus. This
calculus is related to Boudol's resource calculus and is derived from Ehrhard
and Regnier's differential extension of Linear Logic and of the lambda
calculus. We extend it with new constructions, to be understood as implementing
a very simple exception mechanism, and with a "must" parallel composition.
These new operations allow us to associate a context of this calculus with any
point of the model and to prove full abstraction for the finite sub-calculus
where ordinary lambda calculus application is not allowed. The result is then
extended to the full calculus by means of a Taylor Expansion formula. As an
intermediate result we prove that the exception mechanism is not essential in
the finite sub-calculus.
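The shape of the Taylor expansion used above can be sketched as follows (an illustration of the general idea, not the paper's formula): an ordinary application t u expands into a sum, over all n ≥ 0, of resource applications in which t is applied to a bag of exactly n copies of u, each to be consumed linearly.

```python
# Enumerate the finite Taylor approximants of an application (f arg)
# up to a degree bound, encoding a bag of n copies of the argument
# as the tuple (arg,) * n. Terms are opaque labels in this sketch.

def taylor_approximants(f, arg, max_degree):
    """Return the resource-application approximants of (f arg) for
    bag sizes 0 .. max_degree."""
    return [('resapp', f, (arg,) * n) for n in range(max_degree + 1)]

approx = taylor_approximants('t', 'u', 3)
# approx[2] applies t to a bag of two copies of u; approx[0] to the empty bag
```

It is this reduction of ordinary application to finite resource applications that lets the full-abstraction result for the finite sub-calculus be lifted to the full calculus.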
A call-by-need lambda-calculus with locally bottom-avoiding choice: context lemma and correctness of transformations
We present a higher-order call-by-need lambda calculus enriched with constructors, case-expressions, recursive letrec-expressions, a seq-operator for sequential evaluation, and a non-deterministic operator amb, which is locally bottom-avoiding. We use a small-step operational semantics in the form of a normal order reduction. As equational theory we use contextual equivalence, i.e. two terms are equal if, plugged into an arbitrary program context, their termination behaviour is the same. We use a combination of may- as well as must-convergence, which is appropriate for non-deterministic computations. We develop several proof tools for proving correctness of program transformations. We provide a context lemma for may- as well as must-convergence which restricts the number of contexts that need to be examined for proving contextual equivalence. In combination with so-called complete sets of commuting and forking diagrams we show that all the deterministic reduction rules and also some additional transformations preserve contextual equivalence. In contrast to other approaches, our syntax as well as our semantics does not make use of a heap for sharing expressions. Instead we represent these expressions explicitly via letrec-bindings.
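The bottom-avoiding behaviour of amb can be modelled operationally with a small sketch (all names are ours, and generators stand in for computations; this is not the paper's semantics): both arguments are evaluated in fairly interleaved steps, and the value of whichever converges first is returned, so a diverging argument is avoided whenever the other has a value.

```python
# Model computations as generators that yield None while working and
# finally yield ('done', value). amb interleaves two of them fairly.

def amb(thunk_a, thunk_b, fuel=1000):
    """Return the value of whichever computation converges first;
    fail only if neither converges within the fuel bound."""
    runners = [thunk_a, thunk_b]
    for _ in range(fuel):
        for r in runners:
            out = next(r)
            if out is not None and out[0] == 'done':
                return out[1]
    raise RuntimeError("no argument converged within fuel")

def diverge():
    while True:
        yield None                 # models a non-terminating computation

def value(v, work=3):
    for _ in range(work):
        yield None                 # some finite amount of work
    yield ('done', v)

# amb(diverge(), value(42)) avoids bottom and yields 42
```

May- and must-convergence then distinguish, for such a non-deterministic operator, whether some run or every run of a term terminates.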
Complexity Information Flow in a Multi-threaded Imperative Language
We propose a type system to analyze the time consumed by multi-threaded
imperative programs with a shared global memory, which delineates a class of
safe multi-threaded programs. We demonstrate that a safe multi-threaded program
runs in polynomial time if (i) it is strongly terminating wrt a
non-deterministic scheduling policy or (ii) it terminates wrt a deterministic
and quiet scheduling policy. As a consequence, we also characterize the set of
polynomial time functions. The type system presented is based on the
fundamental notion of data tiering, which is central in implicit computational
complexity. It regulates the information flow in a computation. This aspect is
interesting in that the type system bears a resemblance to type-based
information flow analysis and notions of non-interference. As far as we know,
this is the first characterization by a type system of polynomial time
multi-threaded programs.
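The tiering discipline described above can be illustrated with a toy checker (our own drastic simplification, not the paper's type system): every variable carries a tier, data may never flow from tier 0 into tier 1, and loop guards must be tier 1, so tier-1 data controls iteration while tier-0 data merely accumulates.

```python
# Toy tier checker over a flat instruction list:
#   ('assign', dst, src)   -- copy src into dst
#   ('while_guard', var)   -- var controls a loop
# tiers maps each variable name to 0 or 1.

def safe(program, tiers):
    for instr in program:
        if instr[0] == 'assign':
            _, dst, src = instr
            if tiers[src] < tiers[dst]:   # tier-0 data flowing up: rejected
                return False
        elif instr[0] == 'while_guard':
            if tiers[instr[1]] != 1:      # loops must be guarded by tier 1
                return False
    return True

tiers = {'n': 1, 'acc': 0}
ok_prog = [('while_guard', 'n'), ('assign', 'acc', 'n')]  # tier 1 -> tier 0: fine
bad_prog = [('assign', 'n', 'acc')]                        # tier 0 -> tier 1: unsafe
```

Blocking the upward flow is what prevents computed (tier-0) values from feeding back into loop bounds, the intuition behind the polynomial-time guarantee.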