Lazy Evaluation and Delimited Control
The call-by-need lambda calculus provides an equational framework for
reasoning syntactically about lazy evaluation. This paper examines its
operational characteristics. By a series of reasoning steps, we systematically
unpack the standard-order reduction relation of the calculus and discover a
novel abstract machine definition which, like the calculus, goes "under
lambdas." We prove that machine evaluation is equivalent to standard-order
evaluation. Unlike traditional abstract machines, delimited control plays a
significant role in the machine's behavior. In particular, the machine replaces
the manipulation of a heap using store-based effects with disciplined
management of the evaluation stack using control-based effects. In short, state
is replaced with control. To further articulate this observation, we present a
simulation of call-by-need in a call-by-value language using delimited control
operations.
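To make the contrast concrete, here is a minimal sketch (in Haskell, with illustrative names such as Term, Heap and eval that are not the paper's) of the conventional store-based presentation of call-by-need that the abstract says the derived machine dispenses with: variables point into a heap of thunks, and forcing a variable overwrites its heap cell with the computed value. The paper's machine replaces exactly this kind of heap update with delimited-control manipulation of the evaluation stack, which is not shown here.

```haskell
import qualified Data.Map as M

type Name = String
data Term = Var Name | Lam Name Term | App Term Term
  deriving Show

data Value = Closure Name Term Env
type Env   = M.Map Name Loc          -- variables point into the heap
type Loc   = Int
data Cell  = Thunk Term Env | Evaluated Value
type Heap  = M.Map Loc Cell

-- eval returns the value together with the updated heap: forcing a variable
-- overwrites its thunk with the computed value (the store-based effect).
eval :: Heap -> Env -> Term -> (Value, Heap)
eval h env (Lam x b) = (Closure x b env, h)
eval h env (Var x) =
  case M.lookup x env of
    Nothing -> error ("unbound variable " ++ x)
    Just l  -> case M.lookup l h of
      Just (Thunk t env') ->
        let (v, h') = eval h env' t
        in  (v, M.insert l (Evaluated v) h')   -- memoise the result
      Just (Evaluated v)  -> (v, h)
      Nothing             -> error "dangling heap pointer"
eval h env (App f a) =
  let (Closure x b envF, h1) = eval h env f
      l  = M.size h1                           -- fresh location (sketch only)
      h2 = M.insert l (Thunk a env) h1
  in  eval h2 (M.insert x l envF) b
```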
Probabilistic Operational Semantics for the Lambda Calculus
Probabilistic operational semantics for a nondeterministic extension of pure
lambda calculus is studied. In this semantics, a term evaluates to a (finite or
infinite) distribution of values. Small-step and big-step semantics are both
inductively and coinductively defined. Moreover, small-step and big-step
semantics are shown to produce identical outcomes, both in call-by-value and
in call-by-name. Plotkin's CPS translation is extended to accommodate the
choice operator and shown correct with respect to the operational semantics.
Finally, the expressive power of the obtained system is studied: the calculus
is shown to be sound and complete with respect to computable probability
distributions. Comment: 35 pages.
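As a rough illustration of "a term evaluates to a distribution of values", the following sketch gives a big-step, call-by-value evaluator for lambda terms with a fair binary choice operator, returning a finite list of (value, probability) pairs. It covers only terminating terms and uses naive substitution; the datatype and function names are illustrative, and the infinite (coinductive) distributions treated in the paper are not modelled.

```haskell
type Prob = Rational

data Term  = Var String | Lam String Term | App Term Term | Choice Term Term
data Value = Clo String Term

type Dist a = [(a, Prob)]

-- naive substitution; capture-avoidance is omitted in this sketch
subst :: String -> Term -> Term -> Term
subst x s t = case t of
  Var y      -> if x == y then s else Var y
  Lam y b    -> if x == y then Lam y b else Lam y (subst x s b)
  App f a    -> App (subst x s f) (subst x s a)
  Choice l r -> Choice (subst x s l) (subst x s r)

valueToTerm :: Value -> Term
valueToTerm (Clo x b) = Lam x b

-- big-step, call-by-value evaluation into a finite distribution of closures
eval :: Term -> Dist Value
eval (Lam x b)    = [(Clo x b, 1)]
eval (Choice l r) = [ (v, p / 2) | (v, p) <- eval l ++ eval r ]  -- fair choice
eval (App f a)    =
  [ (v, pf * pa * pb)
  | (Clo x b, pf) <- eval f
  , (va, pa)      <- eval a
  , (v, pb)       <- eval (subst x (valueToTerm va) b)           -- beta step
  ]
eval (Var x)      = error ("open term: " ++ x)
```

For instance, eval (Choice (Lam "x" (Var "x")) (Lam "y" (Var "y"))) yields two closures, each with probability 1/2.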
Logical relations for coherence of effect subtyping
A coercion semantics of a programming language with subtyping is typically
defined on typing derivations rather than on typing judgments. To avoid
semantic ambiguity, such a semantics is expected to be coherent, i.e.,
independent of the typing derivation for a given typing judgment. In this
article we present heterogeneous, biorthogonal, step-indexed logical relations
for establishing the coherence of coercion semantics of programming languages
with subtyping. To illustrate the effectiveness of the proof method, we develop
a proof of coherence of a type-directed, selective CPS translation from a typed
call-by-value lambda calculus with delimited continuations and control-effect
subtyping. The article is accompanied by a Coq formalization that relies on a
novel shallow embedding of a logic for reasoning about step-indexing.
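The coherence problem itself can be stated in miniature: interpret each subtyping step as a coercion function and a derivation as the composition of its steps; coherence demands that any two derivations of the same judgment denote the same function. The Haskell fragment below is only an illustration of that statement, with Integer and Double standing in for hypothetical Nat, Int and Real types; it is unrelated to the step-indexed relations used in the paper's proof.

```haskell
-- Stand-in coercions for a chain Nat <: Int <: Real.
natToInt  :: Integer -> Integer
natToInt  = id

intToReal :: Integer -> Double
intToReal = fromIntegral

-- Derivation 1 of Nat <: Real: transitivity through Int.
coerce1 :: Integer -> Double
coerce1 = intToReal . natToInt

-- Derivation 2: a hypothetical direct rule Nat <: Real.
coerce2 :: Integer -> Double
coerce2 = fromIntegral

-- Coherence: coerce1 and coerce2 agree on every argument, so the meaning of
-- a program does not depend on which derivation the type checker picked.
```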
On Role Logic
We present role logic, a notation for describing properties of relational
structures in shape analysis, databases, and knowledge bases. We construct role
logic using the ideas of de Bruijn's notation for lambda calculus, an encoding
of first-order logic in lambda calculus, and a simple rule for implicit
arguments of unary and binary predicates. The unrestricted version of role
logic has the expressive power of first-order logic with transitive closure.
Using a syntactic restriction on role logic formulas, we identify a natural
fragment RL^2 of role logic. We show that the RL^2 fragment has the same
expressive power as two-variable logic with counting C^2 and is therefore
decidable. We present a translation of an imperative language into the
decidable fragment RL^2, which allows compositional verification of programs
that manipulate relational structures. In addition, we show how RL^2 encodes
boolean shape analysis constraints and an expressive description logic. Comment: 20 pages. Our later SAS 2004 result builds on this work.
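Since the abstract builds role logic on de Bruijn's nameless notation, a small reminder of that notation may help: variables are indices counting enclosing binders, so alpha-equivalent terms coincide syntactically. The sketch below is a textbook rendering of shifting, substitution, and beta reduction for the plain lambda calculus, not role logic itself; the names are illustrative.

```haskell
-- de Bruijn-indexed lambda terms: Ix n refers to the n-th enclosing binder.
data Term = Ix Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- shift free indices at or above the cutoff c by d
shift :: Int -> Int -> Term -> Term
shift d c (Ix k)    = Ix (if k >= c then k + d else k)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- capture-free substitution of s for index j
subst :: Int -> Term -> Term -> Term
subst j s (Ix k)    = if k == j then s else Ix k
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- one beta step: (\ . b) a  ~>  b[0 := a], with index adjustment
beta :: Term -> Maybe Term
beta (App (Lam b) a) = Just (shift (-1) 0 (subst 0 (shift 1 0 a) b))
beta _               = Nothing
```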
Semantics of a Typed Algebraic Lambda-Calculus
Algebraic lambda-calculi have been studied in various ways, but their
semantics remain mostly untouched. In this paper we propose a semantic analysis
of a general simply-typed lambda-calculus endowed with a structure of vector
space. We sketch the relation with two established vectorial lambda-calculi.
Then we study the problems arising from the addition of a fixed point
combinator and how to modify the equational theory to solve them. We sketch an
algebraic vectorial PCF and its possible denotational interpretations.
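The "structure of vector space" can be pictured as syntax with formal sums and scalar multiples of terms, quotiented by the vector-space axioms. The sketch below is a guess at such a syntax over rational scalars, with a few axioms oriented as rewrite rules; the constructor names and the choice of scalar field are illustrative and not taken from the paper.

```haskell
data Term
  = Var String
  | Lam String Term
  | App Term Term
  | Zero                        -- the null vector
  | Scale Rational Term         -- a . t
  | Add Term Term               -- t + s
  deriving (Eq, Show)

-- a few vector-space identities, oriented left-to-right as rewrite rules
normalise :: Term -> Term
normalise (Scale 0 _)           = Zero
normalise (Scale 1 t)           = normalise t
normalise (Scale a (Scale b t)) = normalise (Scale (a * b) t)
normalise (Add Zero t)          = normalise t
normalise (Add t Zero)          = normalise t
normalise (Add (Scale a t) (Scale b t'))
  | t == t'                     = normalise (Scale (a + b) t)
normalise (Add t s)             = Add (normalise t) (normalise s)
normalise (Scale a t)           = Scale a (normalise t)
normalise t                     = t
```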
Comparing and evaluating extended Lambek calculi
Lambek's Syntactic Calculus, commonly referred to as the Lambek calculus, was
innovative in many ways, notably as a precursor of linear logic. But it also
showed that we could treat our grammatical framework as a logic (as opposed to
a logical theory). However, though it was successful in giving at least a basic
treatment of many linguistic phenomena, it was also clear that a slightly more
expressive logical calculus was needed for many other cases. Therefore, many
extensions and variants of the Lambek calculus have been proposed, since the
eighties and up until the present day. As a result, there is now a large class
of calculi, each with its own empirical successes and theoretical results, but
also each with its own logical primitives. This raises the question: how do we
compare and evaluate these different logical formalisms? To answer this
question, I present two unifying frameworks for these extended Lambek calculi.
Both are proof net calculi with graph contraction criteria. The first calculus
is a very general system: you specify the structure of your sequents and it
gives you the connectives and contractions which correspond to it. The calculus
can be extended with structural rules, which translate directly into graph
rewrite rules. The second calculus is first-order (multiplicative
intuitionistic) linear logic, which turns out to have several other,
independently proposed extensions of the Lambek calculus as fragments. I will
illustrate the use of each calculus in building bridges between analyses
proposed in different frameworks, in highlighting differences and in helping to
identify problems. Comment: Empirical advances in categorial grammars, Aug 2015, Barcelona,
Spain. 201
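For readers who have not seen the base system being extended, the product-free Lambek connectives and the two application rules can be sketched as follows; the toy lexicon is illustrative and is not one of the extended calculi compared in the paper.

```haskell
-- Product-free Lambek categories: Over b a is b/a, Under a b is a\b.
data Cat = NP | S
         | Over Cat Cat    -- b/a: looks for an a on its right
         | Under Cat Cat   -- a\b: looks for an a on its left
  deriving (Eq, Show)

-- forward and backward application
apply :: Cat -> Cat -> Maybe Cat
apply (Over b a) a'  | a == a' = Just b   -- b/a , a  =>  b
apply a' (Under a b) | a == a' = Just b   -- a , a\b  =>  b
apply _ _                      = Nothing

-- "Lambek" :: NP, "wrote" :: (NP\S)/NP, "it" :: NP  ==>  S
example :: Maybe Cat
example = do
  vp <- apply (Over (Under NP S) NP) NP   -- wrote + it   =>  NP\S
  apply NP vp                             -- Lambek + vp  =>  S
```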
Initial Semantics for Reduction Rules
We give an algebraic characterization of the syntax and operational semantics
of a class of simply-typed languages, such as the language PCF: we characterize
simply-typed syntax with variable binding and equipped with reduction rules via
a universal property, namely as the initial object of some category of models.
For this purpose, we employ techniques developed in two previous works: in the
first work we model syntactic translations between languages over different
sets of types as initial morphisms in a category of models. In the second work
we characterize untyped syntax with reduction rules as initial object in a
category of models. In the present work, we combine the techniques used earlier
in order to characterize simply-typed syntax with reduction rules as initial
object in a category. The universal property yields an operator that makes it
possible to specify translations, semantically faithful by construction,
between languages over possibly different sets of types.
As an example, we upgrade a translation from PCF to the untyped lambda
calculus, given in previous work, to account for reduction in the source and
target. Specifically, we specify a reduction semantics in the source and target
language through suitable rules. By equipping the untyped lambda calculus with
the structure of a model of PCF, initiality yields a translation from PCF to
the lambda calculus, that is faithful with respect to the reduction semantics
specified by the rules.
This paper is an extended version of an article published in the proceedings
of WoLLIC 2012. Comment: Extended version of arXiv:1206.4547, proves a variant of a result of
PhD thesis arXiv:1206.455
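The kind of translation being upgraded can be sketched directly: a syntactic map from a PCF-like language into the untyped lambda calculus, sending numerals to Church numerals and PCF's fixed point to a fixed-point combinator. The paper obtains such a map from initiality and proves it faithful with respect to reduction; the sketch below (predecessor omitted for brevity, names illustrative) simply writes one down.

```haskell
data PCF = PVar String | PLam String PCF | PApp PCF PCF
         | PZero | PSucc PCF | PIfz PCF PCF PCF | PFix PCF

data Lam = Var String | Abs String Lam | App Lam Lam

churchZero, succL, ifzL, fixL :: Lam
churchZero = Abs "s" (Abs "z" (Var "z"))
-- succ n = \s z. s (n s z)
succL = Abs "n" (Abs "s" (Abs "z"
          (App (Var "s") (App (App (Var "n") (Var "s")) (Var "z")))))
-- ifz n t e: apply the Church numeral n to (\_ -> e) and t
ifzL  = Abs "n" (Abs "t" (Abs "e"
          (App (App (Var "n") (Abs "_" (Var "e"))) (Var "t"))))
-- Curry's fixed-point combinator Y
fixL  = Abs "f" (App delta delta)
  where delta = Abs "x" (App (Var "f") (App (Var "x") (Var "x")))

translate :: PCF -> Lam
translate (PVar x)     = Var x
translate (PLam x b)   = Abs x (translate b)
translate (PApp f a)   = App (translate f) (translate a)
translate PZero        = churchZero
translate (PSucc t)    = App succL (translate t)
translate (PIfz c t e) = App (App (App ifzL (translate c)) (translate t)) (translate e)
translate (PFix t)     = App fixL (translate t)
```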
Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq
This paper shows equivalence of several versions of applicative similarity
and contextual approximation, and hence also of applicative bisimilarity and
contextual equivalence, in LR, the deterministic call-by-need lambda calculus
with letrec extended by data constructors, case-expressions and Haskell's
seq-operator. LR models an untyped version of the core language of Haskell. The
use of bisimilarities simplifies equivalence proofs in calculi and opens a way
for more convenient correctness proofs for program transformations. The proof
is by a fully abstract and surjective transfer into a call-by-name calculus,
which is an extension of Abramsky's lazy lambda calculus. In the latter
calculus equivalence of our similarities and contextual approximation can be
shown by Howe's method. Similarity is transferred back to LR on the basis of an
inductively defined similarity. The translation from the call-by-need letrec
calculus into the extended call-by-name lambda calculus is the composition of
two translations. The first translation replaces the call-by-need strategy by a
call-by-name strategy and its correctness is shown by exploiting infinite trees
which emerge by unfolding the letrec expressions. The second translation
encodes letrec-expressions by using multi-fixpoint combinators and its
correctness is shown syntactically by comparing reductions of both calculi. A
further result of this paper is an isomorphism between the mentioned calculi,
which is also an identity on letrec-free expressions. Comment: 50 pages, 11 figures.
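The flavour of the second translation can be conveyed on the single-binding case: a letrec is eliminated in favour of a fixed-point combinator, turning letrec x = e in b into (\x. b) (Y (\x. e)). The sketch below shows only this case; mutual recursion, which the paper handles with multi-fixpoint combinators, and the constructor, case and seq constructs of LR are omitted, and the names are illustrative.

```haskell
data Term = Var String | Lam String Term | App Term Term
          | Letrec String Term Term            -- letrec x = e in b
  deriving Show

-- Curry's fixed-point combinator Y = \f. (\x. f (x x)) (\x. f (x x))
yComb :: Term
yComb = Lam "f" (App half half)
  where half = Lam "x" (App (Var "f") (App (Var "x") (Var "x")))

-- eliminate letrec: letrec x = e in b  ~>  (\x. b) (Y (\x. e))
encode :: Term -> Term
encode (Var x)        = Var x
encode (Lam x b)      = Lam x (encode b)
encode (App f a)      = App (encode f) (encode a)
encode (Letrec x e b) = App (Lam x (encode b))
                            (App yComb (Lam x (encode e)))
```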