12 research outputs found
Lifting infinite normal form definitions from term rewriting to term graph rewriting
Infinite normal forms are a way of giving semantics to non-terminating rewrite systems. The notion is a generalization of the Böhm tree in the lambda calculus. It was first introduced in [AB97] to provide semantics for a lambda calculus on terms with letrec. In that paper, infinite normal forms were defined directly on the graph rewrite system. In [Blo01] the framework was improved by defining the infinite normal form of a term graph using the infinite normal form on terms. This approach of lifting the definition makes the non-confluence problems introduced into term graph rewriting by substitution rules much easier to deal with. In this paper, we give a simplified presentation of the latter approach.
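To make the idea concrete, here is a minimal Haskell sketch of an infinite normal form as the limit of finite approximations, in the spirit of Böhm trees. The Term type and the single rule from n -> n : from (n + 1) are illustrative assumptions, not the systems of [AB97] or [Blo01]: the rule never reaches a finite normal form, but its depth-d approximations, with unstabilized positions cut off as Bot, form an increasing chain whose limit is the infinite stream 0 : 1 : 2 : ...

```haskell
data Term = Num Int
          | Cons Term Term  -- a stable constructor (head normal form)
          | From Term       -- a redex: from n -> Cons n (from (n + 1))
          | Bot             -- a position that has not stabilized yet
  deriving Show

-- Depth-d approximation of the infinite normal form: expand redexes
-- until the head stabilizes, recurse on arguments, cut off at depth 0.
approx :: Int -> Term -> Term
approx 0 _ = Bot
approx d t = case t of
  Num n        -> Num n
  Cons h tl    -> Cons (approx (d - 1) h) (approx (d - 1) tl)
  From (Num n) -> approx d (Cons (Num n) (From (Num (n + 1))))
  _            -> Bot  -- stuck terms contribute no stable information
```

For example, approx 3 (From (Num 0)) evaluates to Cons (Num 0) (Cons (Num 1) (Cons Bot Bot)), and the approximations grow monotonically with the depth bound.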
A Syntactic Model of Mutation and Aliasing
Traditionally, semantic models of imperative languages use an auxiliary
structure which mimics memory. In this way, ownership and other encapsulation
properties need to be reconstructed from the graph structure of such global
memory. We present an alternative "syntactic" model where memory is encoded as
part of the program rather than as a separate resource. This means that
execution can be modelled by just rewriting source code terms, as in semantic
models for functional programs. Formally, this is achieved by the block
construct, introducing local variable declarations, which play the role of
memory when their initializing expressions have been evaluated. In this way, we
obtain a language semantics which directly represents at the syntactic level
constraints on aliasing, allowing simpler reasoning about related properties.
To illustrate this advantage, we consider the issue, widely studied in the
literature, of characterizing an isolated portion of memory, which cannot be
reached through external references. In the syntactic model, closed block
values, called "capsules", provide a simple representation of isolated portions
of memory, and capsules can be safely moved to another location in the memory,
without introducing sharing, by means of "affine" variables. We prove that the
syntactic model can be encoded in the conventional one, and hence efficiently
implemented.
Comment: In Proceedings DCM 2018 and ITRS 2018, arXiv:1904.0956
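To give a rough, concrete picture of the "memory as syntax" idea, here is a minimal Haskell sketch, with an assumed Expr syntax of our own devising rather than the paper's calculus: a block holds local declarations that act as a store, and a capsule is recognized as a closed block, i.e., one with no free variables.

```haskell
import Data.List (nub, (\\))

type Var = String

data Expr = EVar Var
          | ENum Int
          | EBlock [(Var, Expr)] Expr  -- { x1 = e1; ...; xn = en; body }
  deriving Show

-- Free variables: a block's declarations bind their names both in all
-- right-hand sides (mutual recursion) and in the body.
freeVars :: Expr -> [Var]
freeVars (EVar x)      = [x]
freeVars (ENum _)      = []
freeVars (EBlock ds b) =
  nub (concatMap (freeVars . snd) ds ++ freeVars b) \\ map fst ds

-- A capsule is a closed block value: an isolated portion of memory,
-- reachable only through the block's own declarations.
isCapsule :: Expr -> Bool
isCapsule e@(EBlock _ _) = null (freeVars e)
isCapsule _              = False
```

For instance, isCapsule (EBlock [("x", ENum 1)] (EVar "x")) holds, whereas a block whose body mentions an undeclared variable is not isolated.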
Probabilistic Rewriting: On Normalization, Termination, and Unique Normal Forms
While a mature body of work supports the study of rewriting systems, even
infinitary ones, abstract tools for Probabilistic Rewriting are still limited.
Here, we investigate questions such as uniqueness of the result (unique limit
distribution) and we develop a set of proof techniques to analyze and compare
reduction strategies. The goal is to have tools to support the operational
analysis of probabilistic calculi (such as probabilistic lambda-calculi) whose
evaluation is also non-deterministic, in the sense that different reductions
are possible.
In particular, we investigate how the behaviors of different rewrite sequences
starting from the same term compare w.r.t. normal forms, and propose a robust
analogue of the notion of "unique normal form". Our approach is that of
Abstract Rewrite Systems, i.e., we search for general properties of
probabilistic rewriting which hold independently of the specific structure of
the objects.
Comment: Extended version of the paper in FSCD 2019, International Conference on Formal Structures for Computation and Deduction
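As a toy illustration of the abstract setting, the following Haskell sketch (our own example, not taken from the paper) models a probabilistic abstract rewrite system as a map from objects to distributions over successors, and computes the probability of having reached a normal form within n steps; this quantity grows monotonically with n, and its limit is the probability of termination.

```haskell
import qualified Data.Map.Strict as M
import Data.Maybe (isNothing)

type Dist a = M.Map a Rational

-- A probabilistic ARS: each object is either a normal form (Nothing)
-- or rewrites to a distribution over successors.
type PARS a = a -> Maybe (Dist a)

-- Push all probability mass one step forward; mass already sitting
-- on a normal form stays put.
stepDist :: Ord a => PARS a -> Dist a -> Dist a
stepDist sys d = M.unionsWith (+)
  [ maybe (M.singleton x p) (M.map (* p)) (sys x)
  | (x, p) <- M.toList d ]

-- Probability of having reached a normal form within n steps.
normalMass :: Ord a => PARS a -> Int -> a -> Rational
normalMass sys n x0 =
  sum [ p | (x, p) <- M.toList d, isNothing (sys x) ]
  where d = iterate (stepDist sys) (M.singleton x0 1) !! n

-- Example (an assumption, not from the paper): from k > 0 step to
-- k - 1 with probability 2/3 and to k + 1 with probability 1/3;
-- 0 is the unique normal form.
walk :: PARS Int
walk 0 = Nothing
walk k = Just (M.fromList [(k - 1, 2 / 3), (k + 1, 1 / 3)])
```

For the biased walk above, normalMass walk n 1 tends to 1 as n grows, so almost every reduction path reaches the unique normal form 0.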
Simulation in the Call-by-Need Lambda-Calculus with Letrec, Case, Constructors, and Seq
This paper shows equivalence of several versions of applicative similarity
and contextual approximation, and hence also of applicative bisimilarity and
contextual equivalence, in LR, the deterministic call-by-need lambda calculus
with letrec extended by data constructors, case-expressions and Haskell's
seq-operator. LR models an untyped version of the core language of Haskell. The
use of bisimilarities simplifies equivalence proofs in calculi and opens a way
for more convenient correctness proofs for program transformations. The proof
is by a fully abstract and surjective transfer into a call-by-name calculus,
which is an extension of Abramsky's lazy lambda calculus. In the latter
calculus equivalence of our similarities and contextual approximation can be
shown by Howe's method. Similarity is transferred back to LR on the basis of an
inductively defined similarity. The translation from the call-by-need letrec
calculus into the extended call-by-name lambda calculus is the composition of
two translations. The first replaces the call-by-need strategy with a
call-by-name strategy; its correctness is shown by exploiting the infinite
trees that emerge from unfolding the letrec expressions. The second translation
encodes letrec-expressions by using multi-fixpoint combinators and its
correctness is shown syntactically by comparing reductions of both calculi. A
further result of this paper is an isomorphism between the mentioned calculi,
which is also an identity on letrec-free expressions.
Comment: 50 pages, 11 figures
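The flavour of the second translation, encoding letrec by fixpoint combinators, can be conveyed by a small Haskell sketch; the single fix combinator over a tuple below is a simplification of the paper's multi-fixpoint combinators.

```haskell
fix :: (a -> a) -> a
fix f = let x = f x in x

-- letrec ev = \n -> n == 0 || od (n - 1);
--        od = \n -> n /= 0 && ev (n - 1)
-- encoded as one fixpoint over the pair of definitions. The lazy
-- pattern ~(ev, od) is essential: it lets the pair be constructed
-- before its components are demanded.
evenOdd :: (Int -> Bool, Int -> Bool)
evenOdd = fix $ \ ~(ev, od) ->
  ( \n -> n == 0 || od (n - 1)
  , \n -> n /= 0 && ev (n - 1) )

-- fst evenOdd 10 == True
```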
Compilation of extended recursion in call-by-value functional languages
This paper formalizes and proves correct a compilation scheme for
mutually-recursive definitions in call-by-value functional languages. This
scheme supports a wider range of recursive definitions than previous methods.
We formalize our technique as a translation scheme to a lambda-calculus
featuring in-place update of memory blocks, and prove the translation to be
correct.
Comment: 62 pages, uses pi
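The core of such a compilation scheme, evaluating a recursive definition against a dummy memory block that is then updated in place, can be sketched in Haskell with mutable references; the factorial example and all names here are illustrative assumptions, not the paper's translation.

```haskell
import Data.IORef

-- letrec fact = \n -> if n == 0 then 1 else n * fact (n - 1):
-- allocate a dummy cell, build the closure reading through the cell,
-- then patch the cell in place (the back-patching step).
makeFact :: IO (Int -> IO Int)
makeFact = do
  cell <- newIORef (\_ -> error "uninitialized")  -- dummy block
  let fact n = do
        f <- readIORef cell                       -- indirection via the cell
        if n == 0 then pure 1 else (n *) <$> f (n - 1)
  writeIORef cell fact                            -- in-place update
  pure fact

-- makeFact >>= \f -> f 5 >>= print  -- prints 120
```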
Modules over monads and operational semantics
This paper is a contribution to the search for efficient and high-level
mathematical tools to specify and reason about (abstract) programming languages
or calculi. Generalising the reduction monads of Ahrens et al., we introduce
transition monads, thus covering new applications such as
lambda-bar-mu-calculus, pi-calculus, Positive GSOS specifications, differential
lambda-calculus, and the big-step, simply-typed, call-by-value lambda-calculus.
Moreover, we design a suitable notion of signature for transition monads.
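For orientation, here is a minimal Haskell rendering of the underlying notion of a (right) module over a monad, which reduction and transition monads build on; the class name and the laws stated in the comments are our own presentation, not the paper's definitions.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}

-- A (right) module over a monad m: a functor t with an action
--   act :: t (m a) -> t a
-- required to satisfy  act . fmap return = id  and an associativity
-- law mirroring the monad laws.
class (Monad m, Functor t) => Module m t where
  act :: t (m a) -> t a

-- Every monad is a module over itself, with join as the action.
newtype Self m a = Self { runSelf :: m a }

instance Functor m => Functor (Self m) where
  fmap f (Self m) = Self (fmap f m)

instance Monad m => Module m (Self m) where
  act (Self mma) = Self (mma >>= id)
```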
Probabilistic Rewriting and Asymptotic Behaviour: on Termination and Unique Normal Forms
While a mature body of work supports the study of rewriting systems, abstract
tools for Probabilistic Rewriting are still limited. In this paper we study the
question of uniqueness of the result (unique limit distribution), and develop a
set of proof techniques to analyze and compare reduction strategies. The goal
is to have tools to support the operational analysis of probabilistic calculi
(such as probabilistic lambda-calculi) where evaluation allows for different
reduction choices (hence different reduction paths).
Modules over monads and operational semantics (expanded version)
This paper is a contribution to the search for efficient and high-level
mathematical tools to specify and reason about (abstract) programming languages
or calculi. Generalising the reduction monads of Ahrens et al., we introduce
transition monads, thus covering new applications such as
lambda-bar-mu-calculus, pi-calculus, Positive GSOS specifications, differential
lambda-calculus, and the big-step, simply-typed, call-by-value lambda-calculus.
Moreover, we design a suitable notion of signature for transition monads.