280 research outputs found
A Theory of Explicit Substitutions with Safe and Full Composition
Many different systems with explicit substitutions have been proposed to
implement a large class of higher-order languages. Motivations and challenges
that guided the development of such calculi in functional frameworks are
surveyed in the first part of this paper. Then, very simple technology in named
variable-style notation is used to establish a theory of explicit substitutions
for the lambda-calculus which enjoys a whole set of useful properties such as
full composition, simulation of one-step beta-reduction, preservation of
beta-strong normalisation, strong normalisation of typed terms and confluence
on metaterms. Normalisation of related calculi is also discussed.
Comment: 29 pages. Special Issue: Selected Papers of the Conference
"International Colloquium on Automata, Languages and Programming 2008",
edited by Giuseppe Castagna and Igor Walukiewicz.
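The simulation of one-step beta-reduction mentioned above can be illustrated with a toy calculus in named-variable style (a minimal sketch only, not the paper's system): beta-reduction creates an explicit substitution node, which is then propagated through the term by rewrite rules of its own.

```python
# A minimal sketch (not the paper's calculus) of a lambda-calculus with
# explicit substitutions in named-variable style. Terms are tuples:
#   ('var', x), ('lam', x, body), ('app', f, a), ('sub', body, x, n)
# where ('sub', m, x, n) is the explicit substitution m[x := n].

def step(t):
    """Perform one reduction step, or return None if t is a normal form."""
    tag = t[0]
    if tag == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                      # Beta: (\x.m) n -> m[x := n]
            return ('sub', f[2], f[1], a)
        s = step(f)
        if s is not None:
            return ('app', s, a)
        s = step(a)
        if s is not None:
            return ('app', f, s)
    elif tag == 'lam':
        s = step(t[2])
        if s is not None:
            return ('lam', t[1], s)
    elif tag == 'sub':                          # propagate the substitution
        m, x, n = t[1], t[2], t[3]
        if m[0] == 'var':
            return n if m[1] == x else m        # variable rules
        if m[0] == 'app':                       # distribute over application
            return ('app', ('sub', m[1], x, n), ('sub', m[2], x, n))
        if m[0] == 'lam':
            if m[1] == x:                       # x is shadowed: drop the substitution
                return m
            return ('lam', m[1], ('sub', m[2], x, n))  # no capture check in this sketch
    return None

def normalize(t, fuel=100):
    while fuel > 0:
        s = step(t)
        if s is None:
            return t
        t, fuel = s, fuel - 1
    raise RuntimeError('out of fuel')

# (\x. x x) y  reduces via an explicit substitution to  y y
example = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), ('var', 'y'))
print(normalize(example))   # ('app', ('var', 'y'), ('var', 'y'))
```

Full composition, in this style, amounts to substitutions being able to act on terms that themselves contain pending substitutions; the sketch above propagates them one constructor at a time.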
Strong normalization of lambda-Sym-Prop- and lambda-bar-mu-mu-tilde-star- calculi
In this paper we give an arithmetical proof of the strong normalization of
lambda-Sym-Prop of Berardi and Barbanera [1], which can be considered as a
formulae-as-types translation of classical propositional logic in natural
deduction style. Then we give a translation between the
lambda-Sym-Prop-calculus and the lambda-bar-mu-mu-tilde-star-calculus, which is
the implicational part of the lambda-bar-mu-mu-tilde-calculus invented by
Curien and Herbelin [3], extended with negation. We adapt the
method of David and Nour [4] for proving strong normalization. The novelty in
our proof is the notion of zoom-in sequences of redexes, which leads
directly to the proof of the main theorem.
Asymptotically almost all \lambda-terms are strongly normalizing
We present quantitative analysis of various (syntactic and behavioral)
properties of random \lambda-terms. Our main results are that asymptotically
all the terms are strongly normalizing and that any fixed closed term almost
never appears in a random term. Surprisingly, in combinatory logic (the
translation of the \lambda-calculus into combinators), the result is exactly
the opposite: we show that almost all terms are not strongly normalizing. This
is due to the fact that any fixed combinator almost always appears in a random
combinator.
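The translation into combinators referred to here is, in its standard presentation, bracket abstraction; the sketch below (an illustration, not the paper's exact setup) shows how every abstraction is compiled away into applications of the fixed combinators S, K and I, which is why fixed combinators pervade translated terms.

```python
# A sketch of the standard bracket-abstraction translation from
# lambda-terms to S/K/I combinators. Terms are tuples:
#   ('var', x), ('lam', x, body), ('app', f, a), ('S',), ('K',), ('I',)

def free_in(x, t):
    if t[0] == 'var':
        return t[1] == x
    if t[0] == 'app':
        return free_in(x, t[1]) or free_in(x, t[2])
    if t[0] == 'lam':
        return t[1] != x and free_in(x, t[2])
    return False                                # S, K, I are closed

def bracket(x, t):
    """Compute [x] t, a combinator term u such that u v rewrites to t[x := v]."""
    if t == ('var', x):
        return ('I',)
    if not free_in(x, t):
        return ('app', ('K',), t)
    # remaining case: t is an application with x free in it
    return ('app', ('app', ('S',), bracket(x, t[1])), bracket(x, t[2]))

def translate(t):
    """Translate a lambda-term into pure S/K/I applicative form."""
    if t[0] == 'lam':
        return bracket(t[1], translate(t[2]))
    if t[0] == 'app':
        return ('app', translate(t[1]), translate(t[2]))
    return t

# \x. \y. x  translates to  S (K K) I
print(translate(('lam', 'x', ('lam', 'y', ('var', 'x')))))
```

Even this two-binder term compiles to a tree containing three occurrences of fixed combinators, which gives some intuition for why fixed combinators almost always occur inside large random combinator terms.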
Probabilistic Rewriting: On Normalization, Termination, and Unique Normal Forms
While a mature body of work supports the study of rewriting systems, even
infinitary ones, abstract tools for Probabilistic Rewriting are still limited.
Here, we investigate questions such as uniqueness of the result (unique limit
distribution) and we develop a set of proof techniques to analyze and compare
reduction strategies. The goal is to have tools to support the operational
analysis of probabilistic calculi (such as probabilistic lambda-calculi) whose
evaluation is also non-deterministic, in the sense that different reductions
are possible.
In particular, we investigate how the behaviors of different rewrite sequences
starting from the same term compare w.r.t. normal forms, and propose a robust
analogue of the notion of "unique normal form". Our approach is that of
Abstract Rewrite Systems, i.e. we search for general properties of
probabilistic rewriting, which hold independently of the specific structure of
the objects.
Comment: Extended version of the paper in FSCD 2019, International Conference
on Formal Structures for Computation and Deduction.
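As a concrete (and deliberately tiny, hypothetical) instance of the kind of system studied here, consider an abstract probabilistic rewrite relation with a single state that reaches its normal form only with some probability at each step; its unique limit distribution concentrates all mass on the normal form, which a simulation confirms.

```python
import random

# A toy probabilistic abstract rewrite system (illustrative only, not the
# paper's formalism): from state 'a', one step reaches the normal form 'b'
# with probability 1/2 and rewrites back to 'a' with probability 1/2.
# 'b' has no redexes. The probability of reaching 'b' within n steps is
# 1 - (1/2)**n, so the limit distribution assigns probability 1 to 'b'.

def run(max_steps, rng):
    state = 'a'
    for _ in range(max_steps):
        if state == 'b':            # normal form: no redexes left
            return state
        state = 'b' if rng.random() < 0.5 else 'a'
    return state

rng = random.Random(0)              # fixed seed for reproducibility
trials = 10_000
hits = sum(run(max_steps=30, rng=rng) == 'b' for _ in range(trials))
print(hits / trials)                # ~1.0: each run reaches 'b' with prob. 1 - 2**-30
```

Note that termination here is only almost-sure: there is no finite bound on run length, which is exactly the gap between classical and probabilistic notions of normalization that abstract tools must handle.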
Probabilistic Rewriting: Normalization, Termination, and Unique Normal Forms
While a mature body of work supports the study of rewriting systems, abstract tools for Probabilistic Rewriting are still limited. We study in this setting questions such as uniqueness of the result (unique limit distribution) and normalizing strategies (is there a strategy to find a result with greatest probability?). The goal is to have tools to analyse the operational properties of probabilistic calculi (such as probabilistic lambda-calculi) whose evaluation is also non-deterministic, in the sense that different reductions are possible.
Uniform Proofs of Normalisation and Approximation for Intersection Types
We present intersection type systems in the style of sequent calculus,
modifying the systems that Valentini introduced to prove normalisation
properties without using the reducibility method. Our systems are more natural
than Valentini's ones and equivalent to the usual natural deduction style
systems. We prove the characterisation theorems of strong and weak
normalisation through the proposed systems, and, moreover, the approximation
theorem by means of direct inductive arguments. This provides in a uniform way
proofs of the normalisation and approximation theorems via type systems in
sequent calculus style.
Comment: In Proceedings ITRS 2014, arXiv:1503.0437.
Comparing Böhm-Like Trees
Extending the infinitary rewriting definition of Böhm-like trees to infinitary Combinatory Reduction Systems (iCRSs), we show that each Böhm-like tree defined by means of infinitary rewriting can also be defined by means of a direct approximant function. In addition, we show that counterexamples exist to the reverse implication.
A New Type Assignment for Strongly Normalizable Terms
We consider an operator definable in the intuitionistic theory of monadic predicates and we axiomatize some of its properties in a definitional extension of that monadic logic. The axiomatization lends itself to a natural deduction formulation to which the Curry-Howard isomorphism can be applied. The resulting Church-style type system has the property that an untyped term is typable if and only if it is strongly normalizable.
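The property being characterized, strong normalizability, can be exercised on small examples with a brute-force check (a sketch with a fuel bound, unrelated to the type system itself): a term is strongly normalizing when every reduction sequence from it terminates.

```python
def subst(t, x, n):
    """Naive substitution t[x := n]; safe here because the sample terms
    below never capture variables."""
    if t[0] == 'var':
        return n if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, n), subst(t[2], x, n))
    if t[1] == x:                               # lam rebinding x: stop
        return t
    return ('lam', t[1], subst(t[2], x, n))

def one_step(t):
    """All terms reachable from t by contracting a single beta-redex."""
    out = []
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            out.append(subst(f[2], f[1], a))
        out += [('app', s, a) for s in one_step(f)]
        out += [('app', f, s) for s in one_step(a)]
    elif t[0] == 'lam':
        out += [('lam', t[1], s) for s in one_step(t[2])]
    return out

def strongly_normalizing(t, fuel=50):
    """True iff every reduction sequence from t halts within `fuel` steps
    (the fuel bound makes this a safe under-approximation)."""
    if fuel == 0:
        return False
    return all(strongly_normalizing(s, fuel - 1) for s in one_step(t))

delta = ('lam', 'x', ('app', ('var', 'x'), ('var', 'x')))
omega = ('app', delta, delta)                   # the classic looping term
ident = ('lam', 'y', ('var', 'y'))
print(strongly_normalizing(('app', ident, delta)))  # True
print(strongly_normalizing(omega))                  # False
```

A characterization like the one in the abstract would type the first term but not omega; the check above gives the same verdict by exhausting all reduction sequences rather than by typing.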
Strong normalization from an unusual point of view
A new complete characterization of β-strong normalization is given, both in the classical and in the lazy λ-calculus, through the notion of potential valuability inside two suitable parametric calculi.