5 research outputs found
The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space
We study the weak call-by-value λ-calculus as a model for
computational complexity theory and establish the natural measures for time and
space -- the number of beta-reductions and the size of the largest term in a
computation -- as reasonable measures with respect to the invariance thesis of
Slot and van Emde Boas [STOC '84]. More precisely, we show that, using those
measures, Turing machines and the weak call-by-value λ-calculus can
simulate each other within a polynomial overhead in time and a constant factor
overhead in space for all computations that terminate in (encodings of) 'true'
or 'false'. We consider this result a solution to the long-standing open
problem, explicitly posed by Accattoli [ENTCS '18], of whether the natural
measures for time and space of the λ-calculus are reasonable, at least
in the case of weak call-by-value evaluation.
Our proof relies on a hybrid of two simulation strategies of reductions in
the weak call-by-value λ-calculus by Turing machines, both of which are
insufficient if taken alone. The first strategy is the most naive one in the
sense that a reduction sequence is simulated precisely as given by the
reduction rules; in particular, all substitutions are executed immediately.
This simulation runs within a constant overhead in space, but the overhead in
time might be exponential. The second strategy is heap-based and relies on
structure sharing, similar to existing compilers of eager functional languages.
This strategy only has a polynomial overhead in time, but the space consumption
might require an additional logarithmic factor, which is essentially due to the
size of the pointers required for this strategy. Our main contribution is the
construction and verification of a space-aware interleaving of the two
strategies, which is shown to yield both a constant overhead in space and a
polynomial overhead in time.
The weak call-by-value λ-calculus is reasonable for both time and space
We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the
natural measures for time and space -- the number of beta-reduction steps and the size of the largest term
in a computation -- as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas
from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value
λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in
space for all computations terminating in (encodings of) 'true' or 'false'. The simulation yields that standard
complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not
cover sublinear time or space.
Note that our measures still have the well-known size explosion property, where the space measure of
a computation can be exponentially bigger than its time measure. However, our result implies that this
exponential gap disappears once complexity classes are considered instead of concrete computations.
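The size explosion property is easy to reproduce. The sketch below is illustrative only, not the paper's formal construction: it evaluates weak call-by-value with immediate substitution and tracks both measures. Applying a duplicating function n times takes n beta steps, yet the largest term reached grows as roughly 6·2^n.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def size(t):
    if isinstance(t, Var):
        return 1
    if isinstance(t, Lam):
        return 1 + size(t.body)
    return 1 + size(t.fun) + size(t.arg)

def subst(t, x, v):
    # v is always a closed value here, so no capture-avoiding machinery needed
    if isinstance(t, Var):
        return v if t.name == x else t
    if isinstance(t, Lam):
        return t if t.param == x else Lam(t.param, subst(t.body, x, v))
    return App(subst(t.fun, x, v), subst(t.arg, x, v))

def eval_cbv(t, stats):
    stats['max_size'] = max(stats['max_size'], size(t))
    if isinstance(t, (Var, Lam)):
        return t                      # values: no further reduction
    f = eval_cbv(t.fun, stats)
    a = eval_cbv(t.arg, stats)
    stats['steps'] += 1               # one beta step, substituting immediately
    return eval_cbv(subst(f.body, f.param, a), stats)

# dup = \x. \f. f x x  -- duplicates its argument inside the returned value
dup = Lam('x', Lam('f', App(App(Var('f'), Var('x')), Var('x'))))

def explode(n):
    t = Lam('z', Var('z'))            # start from the identity
    for _ in range(n):
        t = App(dup, t)
    stats = {'steps': 0, 'max_size': 0}
    eval_cbv(t, stats)
    return stats

for n in (4, 8, 12):
    print(n, explode(n))              # steps grow as n, max_size as 6*2^n - 4
```

So the time measure of this run is linear while the space measure is exponential; the paper's point is that this gap is harmless at the level of complexity classes.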
We consider this result a first step towards a solution for the long-standing open problem of whether the
natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value
λ-calculus is the first proof of reasonability (including both time and space) for a functional language based on
natural measures, and it enables the formal verification of complexity-theoretic proofs concerning complexity
classes, both on paper and in proof assistants.
The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value
λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive
one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular,
all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the
overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing,
similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in
time, but the space consumption might require an additional logarithmic factor, which is essentially due to the
size of the pointers required for this strategy. Our main contribution is the construction and verification of a
space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and
a polynomial overhead in time.
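The heap-based strategy can be conveyed by a toy evaluator (a sketch under simplifying assumptions, not the paper's verified construction): beta-reduction extends an environment instead of substituting, so a value used twice is stored once in a heap and referenced by address. On a term that applies a duplicating function n times, the heap stays linear in n while the fully unfolded result is exponential.

```python
# closures live in a heap; variables map to heap addresses, so duplicated
# values are shared (variable names are assumed distinct, as in the example)
def eval_heap(term, env, heap):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]                 # already a heap address
    if tag == 'lam':
        heap.append((term, env))            # allocate one closure cell
        return len(heap) - 1
    _, fun, arg = term
    f = eval_heap(fun, env, heap)
    a = eval_heap(arg, env, heap)
    (_, param, body), fenv = heap[f]
    return eval_heap(body, {**fenv, param: a}, heap)

def unfolded_size(addr, heap):
    # size of the result if all sharing were expanded back into a plain term
    term, env = heap[addr]
    def go(t, env):
        tag = t[0]
        if tag == 'var':
            return unfolded_size(env[t[1]], heap) if t[1] in env else 1
        if tag == 'lam':
            return 1 + go(t[2], env)
        return 1 + go(t[1], env) + go(t[2], env)
    return go(term, env)

# dup = \x. \f. f x x, applied n times to the identity
dup = ('lam', 'x', ('lam', 'f',
      ('app', ('app', ('var', 'f'), ('var', 'x')), ('var', 'x'))))
t = ('lam', 'z', ('var', 'z'))
n = 16
for _ in range(n):
    t = ('app', dup, t)

heap = []
root = eval_heap(t, {}, heap)
print(len(heap))                  # heap cells: linear in n (here 2n + 1)
print(unfolded_size(root, heap))  # plain term size: exponential, 6*2^n - 4
```

The logarithmic space overhead mentioned above corresponds to the bits needed to write down the heap addresses, which this Python sketch hides behind integers.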
Transparent Synchronous Dataflow
Dataflow programming is a popular and convenient programming paradigm in
systems modelling, optimisation, and machine learning. It has a number of
advantages; for instance, the lack of control flow allows computation to be
carried out in parallel as well as on distributed machines. More recently, the
idea of dataflow graphs has also been brought into the design of various deep
learning frameworks. They facilitate an easy and efficient implementation of
automatic differentiation, which is at the heart of the modern deep learning paradigm.
[abstract abridged]
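Why dataflow graphs suit automatic differentiation can be hinted at with a generic reverse-mode sketch (illustrative only, unrelated to this paper's actual design): each operation records its inputs and local derivatives, and a gradient is one backward pass over the recorded graph.

```python
# minimal reverse-mode autodiff over a recorded dataflow graph
class Node:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

def add(a, b):
    # local derivatives of a+b with respect to a and b are both 1
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    # product rule: d(ab)/da = b, d(ab)/db = a
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(out):
    # propagate d(out)/d(node) to parents; a full implementation would
    # traverse in reverse topological order to handle shared intermediates
    out.grad = 1.0
    stack = [out]
    while stack:
        n = stack.pop()
        for parent, local in n.parents:
            parent.grad += local * n.grad
            stack.append(parent)

x, y = Node(3.0), Node(2.0)
z = add(mul(x, y), y)                  # z = x*y + y
backward(z)
print(x.grad, y.grad)                  # dz/dx = y = 2, dz/dy = x + 1 = 4
```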
On Reasonable Space and Time Cost Models for the λ-Calculus
Slot and van Emde Boas' Invariance Thesis states that a time (respectively, space) cost model is reasonable for a computational model C if there are mutual simulations between Turing machines and C such that the overhead is polynomial in time (respectively, linear in space). The rationale is that, under the Invariance Thesis, complexity classes such as LOGSPACE, P, and PSPACE become robust, i.e. machine-independent.
In this dissertation, we want to find out whether it is possible to define a reasonable space cost model for the lambda-calculus, the paradigmatic model for functional programming languages. We start by considering an unusual evaluation mechanism for the lambda-calculus, based on Girard's Geometry of Interaction, that was conjectured to be the key ingredient to obtain a reasonable space cost model. By a fine complexity analysis of this scheme, based on new variants of non-idempotent intersection types, we disprove this conjecture. Then, we change the target of our analysis. We consider a variant of Krivine's abstract machine, a standard evaluation mechanism for the call-by-name lambda-calculus, optimized for space complexity and implemented without any pointers. A fine analysis of the execution of (a refined version of) the encoding of Turing machines into the lambda-calculus allows us to conclude that the space consumed by this machine is indeed a reasonable space cost model. In particular, for the first time we are also able to measure sub-linear space complexities. Moreover, we transfer this result to the call-by-value case.
Finally, we also provide an intersection type system that compositionally characterizes this new reasonable space measure. This is done through a minimal, yet non-trivial, modification of the original de Carvalho type system.
Transparent synchronous dataflow: a functional paradigm for systems modelling and optimisation
System modelling is the use of mathematical formalisms to model real-world systems for the purpose of analysis, simulation, and prediction. One of the most common ways to model a system is to model the dataflow among its various components. There are two main approaches to how dataflow graphs are constructed in these system modelling frameworks: ‘define-and-run’ vs ‘define-by-run’. The former approach first creates a dataflow graph and then executes it by pushing data into it; the latter constructs the graph while computing with data on the fly. ‘Define-and-run’ is usually more efficient because many graph optimisations can be applied; ‘define-by-run’, however, handles dynamic models better. This thesis aims to develop a new functional paradigm for systems modelling and optimisation that exhibits properties of both approaches, where dataflow graphs are dynamic but efficient.
We propose a new functional language, namely transparent synchronous dataflow (TSD), where dataflow graphs are constructed transparently, together with imperative commands to manipulate them explicitly and a synchronous mode of change propagation. The semantics of the language is designed on top of an unconventional graph abstract machine, the Dynamic Geometry of Interaction Machine (DGoIM), which is natural for manipulating dataflow graphs. Using this semantics, the language is proved to be sound and efficient. Several experimental implementations were also created, including a native compiler for DGoIM and OCaml implementations of TSD.
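The ‘define-by-run’ style combined with synchronous change propagation can be conveyed by a toy (an illustration of the paradigm only, not TSD's semantics or API): the graph is built as ordinary code runs, and setting a source recomputes each dependent node exactly once, in topological order.

```python
class Cell:
    def __init__(self, value=None, fn=None, deps=()):
        self.fn, self.deps, self.value = fn, deps, value
        self.outs = []
        for d in deps:                 # the graph grows as code runs
            d.outs.append(self)
        if fn:
            self.value = fn(*[d.value for d in deps])

    def set(self, value):
        # synchronous propagation: recompute reachable nodes once each,
        # in topological order (reverse DFS postorder of a DAG)
        self.value = value
        order, seen = [], set()
        def visit(n):
            if id(n) in seen:
                return
            seen.add(id(n))
            for o in n.outs:
                visit(o)
            order.append(n)
        visit(self)
        for n in reversed(order):
            if n.fn:
                n.value = n.fn(*[d.value for d in n.deps])

x = Cell(2)
y = Cell(3)
s = Cell(fn=lambda a, b: a + b, deps=(x, y))   # built on the fly
p = Cell(fn=lambda a, b: a * b, deps=(s, y))
x.set(4)
print(s.value, p.value)    # 7 21
```

A ‘define-and-run’ framework would instead freeze this graph first and optimise it before any data flows through; the thesis aims to combine the dynamism shown here with that efficiency.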