Interaction Graphs: Full Linear Logic
Interaction graphs were introduced as a general, uniform construction of dynamic models of linear logic, encompassing all Geometry of Interaction (GoI) constructions introduced so far. This line of work was inspired by Girard's hyperfinite GoI and develops a quantitative approach that should be understood as a dynamic version of weighted relational models. Until now, the interaction graphs framework has been shown to deal with exponentials only for the constrained system ELL (Elementary Linear Logic) while keeping its quantitative aspect. Adapting older constructions by Girard, one can define "full" exponentials, but at the cost of these quantitative features. We show here that allowing interpretations of proofs to use continuous (yet finite in a measure-theoretic sense) sets of states, as opposed to the discrete and finite sets of states of earlier interaction graphs constructions, yields a model of full linear logic with second-order quantification.
An Abstract Approach to Stratification in Linear Logic
We study the notion of stratification, as used in subsystems of linear logic
with low complexity bounds on the cut-elimination procedure (the so-called
light logics), from an abstract point of view, introducing a logical system in
which stratification is handled by a separate modality. This modality, which is
a generalization of the paragraph modality of Girard's light linear logic,
arises from a general categorical construction applicable to all models of
linear logic. We thus learn that stratification may be formulated independently
of exponential modalities; when it is forced to be connected to exponential
modalities, it yields interesting complexity properties. In particular, from
our analysis stem three alternative reformulations of Baillot and Mazza's
linear logic by levels: one geometric, one interactive, and one semantic.
Estimation of the length of interactions in arena game semantics
We estimate the maximal length of interactions between strategies in HO/N
game semantics, in the spirit of the work by Schwichtenberg and Beckmann for
the length of reduction in the simply typed lambda-calculus. Because of the
operational content of game semantics, the bounds presented here also apply to
head linear reduction on lambda-terms and to the execution of programs by
abstract machines (PAM/KAM), including in the presence of computational effects
such as non-determinism or ground type references. The proof proceeds by
extracting from the games model a combinatorial rewriting rule on trees of
natural numbers, which can then be analyzed independently of game semantics or
lambda-calculus.
Comment: Foundations of Software Science and Computational Structures, 14th
International Conference, FOSSACS 2011, Saarbrücken, Germany (2011).
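For illustration only, here is a minimal sketch of a Krivine-style abstract machine (KAM) that counts its transitions, to make concrete the kind of machine execution the bounds above are stated to apply to; the encoding (de Bruijn indices, closures, a step counter) is ours and is not the machine analysed in the paper.

```python
# A minimal KAM sketch (illustrative only): a closed lambda-term in de
# Bruijn notation is evaluated to weak head normal form while counting
# machine transitions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    index: int          # de Bruijn index

@dataclass(frozen=True)
class Lam:
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def kam(term):
    """Run the machine on a closed term and return the number of transitions."""
    env, stack, steps = (), [], 0
    while True:
        if isinstance(term, App):                # push: the argument becomes a closure
            stack.append((term.arg, env))
            term, steps = term.fun, steps + 1
        elif isinstance(term, Lam) and stack:    # pop: bind the top closure in the environment
            term, env, steps = term.body, (stack.pop(),) + env, steps + 1
        elif isinstance(term, Var):              # lookup: jump to the closure bound to the index
            term, env = env[term.index]
            steps += 1
        else:                                    # weak head normal form reached
            return steps

# (\x. x) (\y. y) reaches \y. y in 3 machine transitions
identity = Lam(Var(0))
print(kam(App(identity, identity)))
```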
(Leftmost-Outermost) Beta Reduction is Invariant, Indeed
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other within a polynomial overhead in time. Is
lambda-calculus a reasonable machine? Is there a way to measure the
computational complexity of a lambda-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of lambda-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating lambda-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modeled after
linear logic proof nets and admitting a decomposition of leftmost-outermost
derivations with the desired property. Thus, the LSC is invariant with respect
to, say, random access machines. The second step is to show that LSC is
invariant with respect to the lambda-calculus. The size explosion problem seems
to imply that this is not possible: having the same notions of normal form,
evaluation in the LSC is exponentially longer than in the lambda-calculus. We
solve such an impasse by introducing a new form of shared normal form and
shared reduction, deemed useful. Useful evaluation avoids those steps that only
unshare the output without contributing to beta-redexes, i.e. the steps that
cause the blow-up in size. The main technical contribution of the paper is
indeed the definition of useful reductions and the thorough analysis of their
properties.
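As a toy illustration of the size explosion phenomenon invoked above (terms that in a linear number of steps produce an exponentially long output), the following sketch uses a naive lambda-term datatype and an innermost reducer; the term family and all names are ours, not the paper's.

```python
# Toy illustration (not from the paper): the family
#   t_0 = y,  t_{n+1} = (\x. x x) t_n
# normalizes, under a rightmost-innermost strategy, in exactly n beta
# steps to a normal form of size 2^(n+1) - 1.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def size(t):
    if isinstance(t, Var):
        return 1
    if isinstance(t, Lam):
        return 1 + size(t.body)
    return 1 + size(t.fun) + size(t.arg)

def subst(t, x, s):
    # Naive substitution; variable capture cannot occur for the terms below.
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        return t if t.var == x else Lam(t.var, subst(t.body, x, s))
    return App(subst(t.fun, x, s), subst(t.arg, x, s))

def reduce_innermost(t, steps=0):
    # Reduce the argument first, then fire the redex; enough to normalize
    # the family below (this is not a general normalizer).
    if isinstance(t, App) and isinstance(t.fun, Lam):
        arg, steps = reduce_innermost(t.arg, steps)
        return reduce_innermost(subst(t.fun.body, t.fun.var, arg), steps + 1)
    return t, steps

dup = Lam("x", App(Var("x"), Var("x")))          # \x. x x

def family(n):
    t = Var("y")
    for _ in range(n):
        t = App(dup, t)
    return t

for n in range(1, 10):
    nf, steps = reduce_innermost(family(n))
    print(n, steps, size(nf))                    # n steps, output of size 2^(n+1) - 1
```

This particular family does not explode under leftmost-outermost evaluation, whereas the abstract's point is that some terms do even there; this is exactly why the paper moves to the LSC and to useful (shared) reduction instead of unshared normal forms.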
A semantic measure of the execution time in linear logic
We give a semantic account of the execution time (i.e. the number of cut elimination steps leading to the normal form) of an untyped MELL net. We first prove that: (1) a net is head-normalizable (i.e. normalizable at depth 0) if and only if its interpretation in the multiset-based relational semantics is not empty, and (2) a net is normalizable if and only if its exhaustive interpretation (a suitable restriction of its interpretation) is not empty. We then give a semantic measure of execution time: we prove that the number of cut elimination steps leading to a cut-free normal form of the net obtained by connecting two cut-free nets by means of a cut-link can be computed from the interpretations of the two cut-free nets. These results are inspired by similar ones obtained by the first author for the untyped lambda-calculus.
Quantitative Game Semantics for Linear Logic
We present a game-based semantic framework in which the time complexity of any IMELL proof can be read off from its interpretation. This gives a compositional view of the geometry of interaction framework introduced by the first author. In our model the time measure is given by means of slots, as introduced by Ghica in a recent paper. The cost associated to a strategy is polynomially related to the normalization time of the interpreted proof, in the style of a complexity-theoretic full abstraction result.