A finite simulation method in a non-deterministic call-by-need calculus with letrec, constructors and case
The paper proposes a variation of simulation for checking and proving contextual equivalence in a non-deterministic call-by-need lambda-calculus with constructors, case, seq, and a letrec with cyclic dependencies, together with a novel method for proving its correctness. The semantics of the calculus is based on a small-step rewrite semantics and on may-convergence. The cyclic nature of letrec bindings, as well as non-determinism, makes known approaches to proving that simulation implies contextual equivalence, such as Howe's proof technique, inapplicable in this setting. The basic technique for both the simulation and the correctness proof is pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds within a finite computation depth, then it is guaranteed to show contextual preorder of expressions.
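The flavour of pre-evaluation can be conveyed by a toy sketch: for a tiny non-deterministic calculus, collect the set of answers reachable within a finite depth, so that may-convergence amounts to that set being non-empty. The representation and names below are illustrative, not the paper's formal calculus.

```python
# Toy expressions: ('val', v) is an answer, ('choice', e1, e2) is a
# non-deterministic branch, ('bot',) is a diverging term.

def pre_eval(expr, depth):
    """Return the set of values `expr` can reach within `depth` steps."""
    if depth < 0:
        return set()
    tag = expr[0]
    if tag == 'val':                  # already an answer
        return {expr[1]}
    if tag == 'choice':               # explore both branches
        return pre_eval(expr[1], depth - 1) | pre_eval(expr[2], depth - 1)
    if tag == 'bot':                  # divergence contributes no answers
        return set()
    raise ValueError(tag)

def may_converges(expr, depth):
    """May-convergence at finite depth: some answer is reachable."""
    return bool(pre_eval(expr, depth))
```

For example, `('choice', ('bot',), ('val', 42))` may-converges at depth 1 with answer set `{42}`, while `('bot',)` never does.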
Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq
This paper shows equivalence of several versions of applicative similarity
and contextual approximation, and hence also of applicative bisimilarity and
contextual equivalence, in LR, the deterministic call-by-need lambda calculus
with letrec extended by data constructors, case-expressions and Haskell's
seq-operator. LR models an untyped version of the core language of Haskell. The
use of bisimilarities simplifies equivalence proofs in calculi and opens a way
for more convenient correctness proofs for program transformations. The proof
is by a fully abstract and surjective transfer into a call-by-name calculus,
which is an extension of Abramsky's lazy lambda calculus. In the latter
calculus, equivalence of our similarities and contextual approximation can be
shown by Howe's method. Similarity is transferred back to LR on the basis of an
inductively defined similarity. The translation from the call-by-need letrec
calculus into the extended call-by-name lambda calculus is the composition of
two translations. The first translation replaces the call-by-need strategy by a
call-by-name strategy; its correctness is shown by exploiting infinite trees
that emerge from unfolding the letrec expressions. The second translation
encodes letrec-expressions by using multi-fixpoint combinators and its
correctness is shown syntactically by comparing reductions of both calculi. A
further result of this paper is an isomorphism between the mentioned calculi,
which is also an identity on letrec-free expressions. Comment: 50 pages, 11 figures
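The second translation's idea, encoding letrec via multi-fixpoint combinators, can be sketched in any call-by-value host: a family of mutually recursive definitions is closed over itself by threading the whole family through each right-hand side, with eta-expansion delaying the self-reference. The following Python analogue is illustrative only, not the paper's formal translation.

```python
def multi_fix(*fs):
    """Tie the knot over a tuple of open definitions.

    Each f in fs has shape f(get)(x), where get(i) yields the i-th
    recursive function; the eta-expansion `lambda x: ...` delays the
    self-reference, as required under call-by-value."""
    def get(i):
        return lambda x: fs[i](get)(x)
    return tuple((lambda j: lambda x: fs[j](get)(x))(i) for i in range(len(fs)))

# letrec even = \n. n==0 or odd(n-1);  odd = \n. n!=0 and even(n-1)
even, odd = multi_fix(
    lambda get: lambda n: n == 0 or get(1)(n - 1),
    lambda get: lambda n: n != 0 and get(0)(n - 1),
)
```

Here `even` and `odd` recurse only through the combinator, never through a native recursive binding.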
FICS 2010
Informal proceedings of the 7th workshop on Fixed Points in Computer Science (FICS 2010), held in Brno, 21-22 August 2010.
On coinduction and quantum lambda calculi
Given the ubiquitous presence of linear resources in quantum computation, program equivalence in linear contexts, where programs are used or executed exactly once, is even more important than in the classical setting. We introduce a linear contextual equivalence and two notions of bisimilarity, a state-based one and a distribution-based one, as proof techniques for reasoning about higher-order quantum programs. Both notions of bisimilarity are sound with respect to the linear contextual equivalence, but only the distribution-based one turns out to be complete. The completeness proof relies on a characterisation of the bisimilarity as a testing equivalence.
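The distribution-based viewpoint can be illustrated classically (the quantum setting is far richer): equivalence is checked on distributions over behaviours rather than on single states, here approximated by comparing the probability each process assigns to every completed action trace. The process encoding and examples are illustrative assumptions, not the paper's formal model.

```python
from fractions import Fraction
from collections import defaultdict

# A process is ('act', label, continuation), ('prob', [(p, proc), ...]),
# or ('stop',).

def trace_dist(proc, prefix=(), p=Fraction(1)):
    """Map each completed trace to the probability of producing it."""
    dist = defaultdict(Fraction)
    tag = proc[0]
    if tag == 'stop':
        dist[prefix] += p
    elif tag == 'act':
        for tr, q in trace_dist(proc[2], prefix + (proc[1],), p).items():
            dist[tr] += q
    elif tag == 'prob':
        for weight, branch in proc[1]:
            for tr, q in trace_dist(branch, prefix, p * weight).items():
                dist[tr] += q
    return dict(dist)

half = Fraction(1, 2)
# P: perform 'a', then flip a fair coin between 'b' and 'c'.
P = ('act', 'a', ('prob', [(half, ('act', 'b', ('stop',))),
                           (half, ('act', 'c', ('stop',)))]))
# Q: flip the coin first, then perform 'a' followed by 'b' or 'c'.
Q = ('prob', [(half, ('act', 'a', ('act', 'b', ('stop',)))),
              (half, ('act', 'a', ('act', 'c', ('stop',))))])
```

P and Q resolve the probabilistic choice at different moments, yet induce the same distribution over traces, which is the kind of identification a distribution-based equivalence makes.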
Executable Structural Operational Semantics in Maude
This paper describes in detail how to bridge the gap between theory and practice when implementing
in Maude structural operational semantics described in rewriting logic, where transitions
become rewrites and inference rules become conditional rewrite rules with rewrites in the conditions,
as made possible by the new features in Maude 2.0. We validate this technique using it in
several case studies: a functional language Fpl (evaluation and computation semantics, including
an abstract machine), imperative languages WhileL (evaluation and computation semantics) and
GuardL with nondeterminism (computation semantics), Kahn's functional language Mini-ML (evaluation
or natural semantics), Milner's CCS (with strong and weak transitions), and Full LOTOS
(including ACT ONE data type specifications). In addition, on top of CCS we develop an implementation
of the Hennessy-Milner modal logic for describing local capabilities of processes, and
for LOTOS we build an entire tool where Full LOTOS specifications can be entered and executed
(without user knowledge of the underlying implementation of the semantics). We also compare this
method based on transitions as rewrites with another one based on transitions as judgements.
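The "transitions as rewrites" reading can be sketched outside Maude as well: each SOS rule for a tiny imperative language becomes a rewrite on configurations `(stmt, store)`. The language fragment and rule shapes below are illustrative, not Maude syntax.

```python
def step(stmt, store):
    """One small-step transition <stmt, store> -> <stmt', store'>."""
    tag = stmt[0]
    if tag == 'assign':                       # <x := n, s> -> <skip, s[x := n]>
        _, x, n = stmt
        return ('skip',), {**store, x: n}
    if tag == 'seq':
        _, s1, s2 = stmt
        if s1 == ('skip',):                   # <skip; s2, s> -> <s2, s>
            return s2, store
        s1p, storep = step(s1, store)         # congruence rule: rewrite inside ';'
        return ('seq', s1p, s2), storep
    raise ValueError(tag)

def run(stmt, store):
    """Iterate the one-step relation to a terminal configuration."""
    while stmt != ('skip',):
        stmt, store = step(stmt, store)
    return store
```

Note how the congruence rule mirrors a conditional rewrite rule with a rewrite in its condition: the premise's transition is computed before the conclusion's configuration is rebuilt.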
Compilation of extended recursion in call-by-value functional languages
This paper formalizes and proves correct a compilation scheme for
mutually-recursive definitions in call-by-value functional languages. This
scheme supports a wider range of recursive definitions than previous methods.
We formalize our technique as a translation scheme to a lambda-calculus
featuring in-place update of memory blocks, and prove the translation to be
correct. Comment: 62 pages, uses pi
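The core compilation idea, in-place update of memory blocks, is the classic backpatching scheme: allocate dummy blocks for the recursively defined values, evaluate the right-hand sides, then patch the blocks. In the hedged Python sketch below, a one-slot list stands in for a mutable memory block; the helper and its name are illustrative, not the paper's formal translation.

```python
def letrec_backpatch(*rhss):
    """Each rhs receives the tuple of cells; cell[0] is filled in later."""
    cells = tuple([None] for _ in rhss)       # step 1: allocate dummy blocks
    for cell, rhs in zip(cells, rhss):
        cell[0] = rhs(cells)                  # step 2: in-place update
    return tuple(cell[0] for cell in cells)

# letrec fib = \n. if n < 2 then n else fib(n-1) + fib(n-2)
(fib,) = letrec_backpatch(
    lambda cells: lambda n: n if n < 2 else cells[0][0](n - 1) + cells[0][0](n - 2)
)
```

The scheme accepts any right-hand side whose evaluation does not dereference a still-dummy cell, which is what lets it support a wider range of recursive definitions than syntactic restrictions to lambda-abstractions.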
Type Safe Extensible Programming
Software products evolve over time. Sometimes they evolve by adding new
features, and sometimes by either fixing bugs or replacing outdated
implementations with new ones. When software engineers fail to anticipate such
evolution during development, they will eventually be forced to re-architect or
re-build from scratch. Therefore, it has been common practice to prepare for
changes so that software products are extensible over their lifetimes. However,
making software extensible is challenging because it is difficult to anticipate
successive changes and to provide adequate abstraction mechanisms over
potential changes. Such extensibility mechanisms, furthermore, should not
compromise any existing functionality during extension. Software engineers
would benefit from a tool that provides a way to add extensions in a reliable
way. It is natural to expect programming languages to serve this role.
Extensible programming is one effort to address these issues.
In this thesis, we present type safe extensible programming using the MLPolyR
language. MLPolyR is an ML-like functional language whose type system provides
type-safe extensibility mechanisms at several levels. After presenting the
language, we will show how these extensibility mechanisms can be put to good
use in the context of product line engineering. Product line engineering is an
emerging software engineering paradigm that aims to manage variations, which
originate from successive changes in software. Comment: PhD Thesis submitted October, 200
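The kind of extensibility MLPolyR provides (extensible cases, a solution to the expression problem) can be loosely transliterated into Python as a handler table that is extended, not edited, to add new variants. Python cannot check the extension statically the way MLPolyR's type system does, so this is only an illustrative sketch with invented names.

```python
def eval_with(handlers, expr):
    """Dispatch on the variant tag, threading the handler table through."""
    return handlers[expr[0]](handlers, expr)

# Base language: numeric literals and addition.
base = {
    'num': lambda h, e: e[1],
    'add': lambda h, e: eval_with(h, e[1]) + eval_with(h, e[2]),
}

# Extension: a new 'neg' variant, added without modifying `base`.
extended = {**base, 'neg': lambda h, e: -eval_with(h, e[1])}
```

Existing programs keep running against `base`, while `extended` handles the new variant; in MLPolyR the analogous extension is checked for exhaustiveness at compile time.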