On Constructor Rewrite Systems and the Lambda Calculus
We prove that orthogonal constructor term rewrite systems and lambda-calculus
with weak (i.e., no reduction is allowed under the scope of a
lambda-abstraction) call-by-value reduction can simulate each other with a
linear overhead. In particular, weak call-by-value beta-reduction can be
simulated by an orthogonal constructor term rewrite system in the same number
of reduction steps. Conversely, each reduction in a term rewrite system can be
simulated by a constant number of beta-reduction steps. This is relevant to
implicit computational complexity, because the number of beta steps to normal
form is polynomially related to the actual cost (that is, as performed on a
Turing machine) of normalization, under weak call-by-value reduction.
Orthogonal constructor term rewrite systems and the lambda-calculus are thus
both polynomially related to Turing machines, taking their natural parameters
as the notion of cost.
Comment: 27 pages. arXiv admin note: substantial text overlap with
arXiv:0904.412
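The weak call-by-value discipline described above can be made concrete with a small evaluator. The following is my own illustrative sketch, not the paper's encoding: terms are plain tuples, all bound names are assumed distinct, and the evaluator counts beta steps while never reducing under a lambda-abstraction.

```python
# Minimal weak call-by-value evaluator for the pure lambda-calculus.
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)
# "Weak" means no reduction ever happens under a lambda-abstraction.

def subst(term, name, value):
    """Substitute `value` for `name`. On closed programs the substituted
    values are closed, so variable capture cannot occur."""
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "lam":
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def eval_cbv(term, steps=0):
    """Evaluate to weak normal form; return (value, number_of_beta_steps)."""
    if term[0] != "app":
        return term, steps          # variables and abstractions are values
    fun, steps = eval_cbv(term[1], steps)
    arg, steps = eval_cbv(term[2], steps)
    if fun[0] == "lam":             # a beta-redex: fire it, count one step
        return eval_cbv(subst(fun[2], fun[1], arg), steps + 1)
    return ("app", fun, arg), steps

# (\x. x) ((\y. y) (\z. z)) -- two beta steps under call-by-value
ident = lambda v, b: ("lam", v, b)
term = ("app", ident("x", ("var", "x")),
               ("app", ident("y", ("var", "y")), ident("z", ("var", "z"))))
value, n = eval_cbv(term)
print(n)  # 2
```

Per the abstract, a translation into an orthogonal constructor term rewrite system would preserve this step count exactly, with only linear overhead in the simulation.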
(Leftmost-Outermost) Beta Reduction is Invariant, Indeed
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other within a polynomial overhead in time. Is
lambda-calculus a reasonable machine? Is there a way to measure the
computational complexity of a lambda-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of lambda-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating lambda-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modeled after
linear logic proof nets and admitting a decomposition of leftmost-outermost
derivations with the desired property. Thus, the LSC is invariant with respect
to, say, random access machines. The second step is to show that LSC is
invariant with respect to the lambda-calculus. The size explosion problem seems
to imply that this is not possible: having the same notions of normal form,
evaluation in the LSC is exponentially longer than in the lambda-calculus. We
solve such an impasse by introducing a new form of shared normal form and
shared reduction, deemed useful. Useful evaluation avoids those steps that only
unshare the output without contributing to beta-redexes, i.e. the steps that
cause the blow-up in size. The main technical contribution of the paper is
indeed the definition of useful reductions and the thorough analysis of their
properties.
Comment: arXiv admin note: substantial text overlap with arXiv:1405.331
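The size explosion problem mentioned above can be seen with a textbook duplicator (my own example, not one from the paper): the term D = \x. \p. p x x turns a value v into \p. p v v in a single beta step, but the result contains two copies of v, so iterating D gives derivations whose length is linear in n while the normal form has size exponential in n.

```python
# Each application of the duplicator D = \x. \p. p x x costs ONE beta step
# but doubles the size of its argument: D v -> \p. p v v.
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)

def size(t):
    """Number of nodes in the syntax tree of a term."""
    if t[0] == "var":
        return 1
    if t[0] == "lam":
        return 1 + size(t[2])
    return 1 + size(t[1]) + size(t[2])

def duplicate(v):
    """The reduct of D v, i.e. \\p. p v v, reached in one beta step."""
    return ("lam", "p", ("app", ("app", ("var", "p"), v), v))

v = ("lam", "z", ("var", "z"))   # the identity, size 2
for n in range(1, 6):
    v = duplicate(v)             # n beta steps so far
    print(n, size(v))            # sizes 8, 20, 44, 92, 188: 6*2**n - 4
```

After n steps the output has size 6·2^n − 4, which is exactly why step counting alone cannot be related to machine cost without the sharing machinery (useful reduction) the paper develops.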
An Invariant Cost Model for the Lambda Calculus
We define a new cost model for the call-by-value lambda-calculus satisfying
the invariance thesis. That is, under the proposed cost model, Turing machines
and the call-by-value lambda-calculus can simulate each other within a
polynomial time overhead. The model only relies on combinatorial properties of
usual beta-reduction, without any reference to a specific machine or evaluator.
In particular, the cost of a single beta reduction is proportional to the
difference between the size of the redex and the size of the reduct. In this
way, the total cost of normalizing a lambda term will take into account the
size of all intermediate results (as well as the number of steps to normal
form).
Comment: 19 pages
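A back-of-the-envelope reading of this cost model (my own illustration, not a formula from the abstract): measure the size of a term as the number of nodes in its syntax tree, and suppose the variable x occurs k times in t. Then for a redex (\x. t) v,

```latex
\[
|(\lambda x.\, t)\, v| \;=\; |t| + |v| + 2,
\qquad
|t[v/x]| \;=\; |t| + k\,(|v| - 1),
\]
\[
|t[v/x]| - |(\lambda x.\, t)\, v| \;=\; (k-1)\,|v| - k - 2 .
\]
```

So a duplicating step (k ≥ 2) is charged on the order of (k−1)·|v|, the amount of new text it creates, while an erasing or linear step costs only a constant; this is how the total cost comes to reflect the sizes of intermediate results.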
Beta Reduction is Invariant, Indeed (Long Version)
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other within a polynomial overhead in time. Is
λ-calculus a reasonable machine? Is there a way to measure the
computational complexity of a λ-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of λ-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating λ-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modelled
after linear logic and proof-nets and admitting a decomposition of
leftmost-outermost derivations with the desired property. Thus, the LSC is
invariant with respect to, say, random access machines. The second step is to
show that LSC is invariant with respect to the λ-calculus. The size
explosion problem seems to imply that this is not possible: having the same
notions of normal form, evaluation in the LSC is exponentially longer than in
the λ-calculus. We solve such an impasse by introducing a new form of
shared normal form and shared reduction, deemed useful. Useful evaluation
avoids those steps that only unshare the output without contributing to
β-redexes, i.e., the steps that cause the blow-up in size.
Comment: 29 pages
Control Flow Analysis for SF Combinator Calculus
Programs that transform other programs often require access to the internal
structure of the program to be transformed. This is at odds with the usual
extensional view of functional programming, as embodied by the lambda calculus
and SK combinator calculus. The recently-developed SF combinator calculus
offers an alternative, intensional model of computation that may serve as a
foundation for developing principled languages in which to express intensional
computation, including program transformation. Until now there have been no
static analyses for reasoning about or verifying programs written in
SF-calculus. We take the first step towards remedying this by developing a
formulation of the popular control flow analysis 0CFA for SK-calculus and
extending it to support SF-calculus. We prove its correctness and demonstrate
that the analysis is invariant under the usual translation from SK-calculus
into SF-calculus.
Comment: In Proceedings VPT 2015, arXiv:1512.0221
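To make the combinator side concrete, here is a minimal sketch of SK reduction, assuming only the standard rules K x y → x and S x y z → x z (y z); it does not model the paper's 0CFA analysis or the intensional F combinator.

```python
# Tiny SK-calculus normalizer. Terms: "S" | "K" | ("app", fun, arg).
# Rules: K x y -> x,  S x y z -> x z (y z).

def rebuild(head, args):
    """Left-nested application of head to args."""
    for a in args:
        head = ("app", head, a)
    return head

def step(t):
    """One reduction step at the head, else inside an argument; None if normal."""
    if t == "S" or t == "K":
        return None
    # Unfold the spine: t = head a1 a2 ... an
    spine, head = [], t
    while isinstance(head, tuple):
        spine.append(head[2])
        head = head[1]
    spine.reverse()
    if head == "K" and len(spine) >= 2:
        return rebuild(spine[0], spine[2:])                 # K x y -> x
    if head == "S" and len(spine) >= 3:
        x, y, z = spine[0], spine[1], spine[2]
        return rebuild(("app", ("app", x, z), ("app", y, z)), spine[3:])
    for i, a in enumerate(spine):                           # reduce an argument
        r = step(a)
        if r is not None:
            spine[i] = r
            return rebuild(head, spine)
    return None

def normalize(t, fuel=1000):
    while fuel:
        r = step(t)
        if r is None:
            return t
        t, fuel = r, fuel - 1
    raise RuntimeError("out of fuel")

# I = S K K behaves as the identity: I x -> x
I = ("app", ("app", "S", "K"), "K")
print(normalize(("app", I, "K")))  # K
```

A 0CFA-style analysis would abstract over which terms can flow into each application position of such a reducer; the abstract's point is that this can be formulated uniformly for SK and then extended to SF.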
On the Relative Usefulness of Fireballs
In CSL-LICS 2014, Accattoli and Dal Lago showed that there is an
implementation of the ordinary (i.e. strong, pure, call-by-name)
λ-calculus into models like RAM machines which is polynomial in the
number of β-steps, answering a long-standing question. The key ingredient
was the use of a calculus with useful sharing, a new notion whose complexity
was shown to be polynomial, but whose implementation was not explored. This
paper, meant to be complementary, studies useful sharing in a call-by-value
scenario and from a practical point of view. We introduce the Fireball
Calculus, a natural extension of call-by-value to open terms for which the
problem is as hard as for the ordinary lambda-calculus. We present three
results. First, we adapt the solution of Accattoli and Dal Lago, improving the
meta-theory of useful sharing. Then, we refine the picture by introducing the
GLAMoUr, a simple abstract machine implementing the Fireball Calculus extended
with useful sharing. Its key feature is that usefulness of a step is
tested, surprisingly, in constant time. Third, we provide a further
optimization that leads to an implementation having only a linear overhead with
respect to the number of β-steps.
Comment: Technical report for the LICS 2015 submission with the same title
The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space
We study the weak call-by-value λ-calculus as a model for
computational complexity theory and establish the natural measures for time and
space -- the number of beta-reductions and the size of the largest term in a
computation -- as reasonable measures with respect to the invariance thesis of
Slot and van Emde Boas [STOC 84]. More precisely, we show that, using those
measures, Turing machines and the weak call-by-value λ-calculus can
simulate each other within a polynomial overhead in time and a constant factor
overhead in space for all computations that terminate in (encodings) of 'true'
or 'false'. We consider this result as a solution to the long-standing open
problem, explicitly posed by Accattoli [ENTCS 18], of whether the natural
measures for time and space of the λ-calculus are reasonable, at least
in case of weak call-by-value evaluation.
Our proof relies on a hybrid of two simulation strategies of reductions in
the weak call-by-value λ-calculus by Turing machines, both of which are
insufficient if taken alone. The first strategy is the most naive one in the
sense that a reduction sequence is simulated precisely as given by the
reduction rules; in particular, all substitutions are executed immediately.
This simulation runs within a constant overhead in space, but the overhead in
time might be exponential. The second strategy is heap-based and relies on
structure sharing, similar to existing compilers of eager functional languages.
This strategy only has a polynomial overhead in time, but the space consumption
might require an additional factor of log, which is essentially due to the
size of the pointers required for this strategy. Our main contribution is the
construction and verification of a space-aware interleaving of the two
strategies, which is shown to yield both a constant overhead in space and a
polynomial overhead in time.
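The heap-based second strategy can be sketched with a closure-style evaluator (my own illustration of structure sharing, assuming closed terms; it is not the paper's construction): instead of substituting terms textually, a closure pairs a term with an environment, so a duplicated argument is shared by reference rather than copied.

```python
# Weak call-by-value evaluation with closures (environment machine).
# Terms: ("var", name) | ("lam", name, body) | ("app", fun, arg)
# Values: ("clo", name, body, env) -- a term closed by its environment.

def eval_env(term, env):
    """Evaluate a closed term; environments map names to closures."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]                     # pointer lookup, no copying
    if tag == "lam":
        return ("clo", term[1], term[2], env)   # capture the environment
    f = eval_env(term[1], env)
    a = eval_env(term[2], env)
    _, x, body, cenv = f                        # f is a closure on closed terms
    return eval_env(body, {**cenv, x: a})       # bind by reference: one step

# (\x. \p. x) applied to an argument: the argument is bound once in the
# environment and never textually copied, no matter how often x occurs.
big = ("lam", "z", ("var", "z"))
prog = ("app", ("lam", "x", ("lam", "p", ("var", "x"))), big)
v = eval_env(prog, {})
print(v[0])  # clo
```

Binding by reference is what keeps the time overhead polynomial despite size explosion, at the price of pointer-sized bookkeeping, which is the source of the logarithmic space factor the abstract attributes to this strategy.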
The weak call-by-value λ-calculus is reasonable for both time and space
We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the
natural measures for time and space -- the number of beta-reduction steps and the size of the largest term
in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas
from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value
λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in
space for all computations terminating in (encodings of) 'true' or 'false'. The simulation yields that standard
complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not
cover sublinear time or space.
Note that our measures still have the well-known size explosion property, where the space measure of
a computation can be exponentially bigger than its time measure. However, our result implies that this
exponential gap disappears once complexity classes are considered instead of concrete computations.
We consider this result a first step towards a solution for the long-standing open problem of whether the
natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value
λ-calculus is the first proof of reasonability (including both time and space) for a functional language based on
natural measures and enables the formal verification of complexity-theoretic proofs concerning complexity
classes, both on paper and in proof assistants.
The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value
λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive
one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular,
all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the
overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing,
similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in
time, but the space consumption might require an additional factor of log, which is essentially due to the
size of the pointers required for this strategy. Our main contribution is the construction and verification of a
space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and
a polynomial overhead in time.