Infinite and bi-infinite words with decidable monadic theories
We study word structures of the form (D,<,P) where D is either the naturals or the integers with the natural linear order < and P is a predicate on D. In particular we show: the set of recursive infinite words with decidable monadic second-order theories is Σ₃-complete; we characterise those sets P of integers that yield bi-infinite words with decidable monadic second-order theories; we show that such "tame" predicates P exist in every Turing degree; and we determine, for a set of integers P, the number of indistinguishable bi-infinite words. Through these results we demonstrate similarities and differences between logical properties of infinite and bi-infinite words.
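Spelled out in standard notation (which the abstract leaves implicit), the objects of study are the structures
\[
  \mathfrak{A}_P \;=\; (D, <, P), \qquad D \in \{\mathbb{N}, \mathbb{Z}\}, \quad P \subseteq D,
\]
and the question is decidability of their monadic second-order theories
\[
  \mathrm{MTh}(\mathfrak{A}_P) \;=\; \{\varphi \in \mathrm{MSO} \mid \mathfrak{A}_P \models \varphi\}.
\]
Under the usual convention (assumed here, not stated in the abstract) of identifying a {0,1}-word with the set P of positions carrying the letter 1, an infinite word corresponds to P ⊆ ℕ and a bi-infinite word to P ⊆ ℤ.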
The Weak Call-By-Value λ-Calculus is Reasonable for Both Time and Space
We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space (the number of beta-reductions and the size of the largest term in a computation) as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas [STOC '84]. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations that terminate in (encodings of) 'true' or 'false'. We consider this result a solution to the long-standing open problem, explicitly posed by Accattoli [ENTCS '18], of whether the natural measures for time and space of the λ-calculus are reasonable, at least in the case of weak call-by-value evaluation.
Our proof relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are
insufficient if taken alone. The first strategy is the most naive one in the
sense that a reduction sequence is simulated precisely as given by the
reduction rules; in particular, all substitutions are executed immediately.
This simulation runs within a constant overhead in space, but the overhead in
time might be exponential. The second strategy is heap-based and relies on
structure sharing, similar to existing compilers of eager functional languages.
This strategy only has a polynomial overhead in time, but the space consumption
might require an additional logarithmic factor, which is essentially due to the size of the pointers required for this strategy. Our main contribution is the construction and verification of a space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and a polynomial overhead in time.
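To make the two measures and the first (naive) strategy concrete, here is a minimal Haskell sketch, with illustrative names and under the simplifying assumption of closed programs; it illustrates the measures and is not the paper's formal construction.

-- Naive substitution-based evaluator for the weak call-by-value
-- λ-calculus (de Bruijn indices, closed programs assumed), instrumented
-- with the two natural measures: beta steps (time) and the size of the
-- largest term occurring in the computation (space).
module NaiveCBV where

data Term = Var Int | Lam Term | App Term Term deriving Show

size :: Term -> Int
size (Var _)   = 1
size (Lam b)   = 1 + size b
size (App s t) = 1 + size s + size t

-- Substitute a closed value for index k; no index shifting is needed
-- because whole programs are assumed closed.
subst :: Int -> Term -> Term -> Term
subst k v (Var i) | i == k    = v
                  | otherwise = Var i
subst k v (Lam b)   = Lam (subst (k + 1) v b)
subst k v (App s t) = App (subst k v s) (subst k v t)

isValue :: Term -> Bool
isValue (Lam _) = True
isValue _       = False

-- One weak call-by-value step (left to right, no reduction under λ).
step :: Term -> Maybe Term
step (App (Lam b) v) | isValue v = Just (subst 0 v b)
step (App s t)
  | not (isValue s) = (\s' -> App s' t) <$> step s
  | otherwise       = App s <$> step t
step _ = Nothing

-- Evaluate and return (normal form, beta steps, largest term size).
-- Example: eval (App (Lam (Var 0)) (Lam (Var 0))) yields Lam (Var 0)
-- after 1 beta step, with largest term size 5.
eval :: Term -> (Term, Int, Int)
eval t0 = go t0 0 (size t0)
  where
    go t n sp = case step t of
      Nothing -> (t, n, sp)
      Just t' -> go t' (n + 1) (max sp (size t'))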
The weak call-by-value λ-calculus is reasonable for both time and space
We study the weak call-by-value λ-calculus as a model for computational complexity theory and establish the natural measures for time and space (the number of beta-reduction steps and the size of the largest term in a computation) as reasonable measures with respect to the invariance thesis of Slot and van Emde Boas from 1984. More precisely, we show that, using those measures, Turing machines and the weak call-by-value λ-calculus can simulate each other within a polynomial overhead in time and a constant factor overhead in space for all computations terminating in (encodings of) 'true' or 'false'. The simulation yields that standard complexity classes like P, NP, PSPACE, or EXP can be defined solely in terms of the λ-calculus, but does not cover sublinear time or space.
Note that our measures still have the well-known size explosion property, where the space measure of
a computation can be exponentially bigger than its time measure. However, our result implies that this
exponential gap disappears once complexity classes are considered instead of concrete computations.
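A concrete instance of this size-explosion behaviour is the standard family below (a textbook example, included for illustration; it is not claimed to be the example used in the paper):
\[
  \delta \;:=\; \lambda x.\,\lambda y.\,x\,x, \qquad
  t_0 \;:=\; \lambda z.\,z, \qquad
  t_{n+1} \;:=\; \delta\,t_n .
\]
Each weak call-by-value step $\delta\,v \to_\beta \lambda y.\,v\,v$ produces a value of roughly twice the size of $v$, so $t_n$ reaches its normal form in exactly $n$ beta steps while the largest term in the computation has size $\Theta(2^n)$: the time measure is $n$, the space measure is exponential in $n$.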
We consider this result a first step towards a solution for the long-standing open problem of whether the
natural measures for time and space of the λ-calculus are reasonable. Our proof for the weak call-by-value λ-calculus is the first proof of reasonability (including both time and space) for a functional language based on
natural measures and enables the formal verification of complexity-theoretic proofs concerning complexity
classes, both on paper and in proof assistants.
The proof idea relies on a hybrid of two simulation strategies of reductions in the weak call-by-value λ-calculus by Turing machines, both of which are insufficient if taken alone. The first strategy is the most naive
one in the sense that a reduction sequence is simulated precisely as given by the reduction rules; in particular,
all substitutions are executed immediately. This simulation runs within a constant overhead in space, but the
overhead in time might be exponential. The second strategy is heap-based and relies on structure sharing,
similar to existing compilers of eager functional languages. This strategy only has a polynomial overhead in
time, but the space consumption might require an additional logarithmic factor, which is essentially due to the
size of the pointers required for this strategy. Our main contribution is the construction and verification of a
space-aware interleaving of the two strategies, which is shown to yield both a constant overhead in space and
a polynomial overhead in time.
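For contrast with the naive strategy, the following Haskell sketch (again with illustrative names; only loosely modelled on the description above, not the paper's actual machine) shows the structure-sharing idea behind the second strategy: values live in a heap and are passed by address, so a beta step extends an environment with a pointer instead of copying the argument.

-- Heap-based, structure-sharing evaluator: closures are stored in a heap
-- and referenced by integer addresses ("pointers"); substitution is
-- replaced by extending an environment with the argument's address, so
-- arguments are shared rather than copied. The pointers are what give
-- rise to the logarithmic space factor mentioned in the abstract.
module HeapCBV where

import qualified Data.Map.Strict as M

data Term = Var Int | Lam Term | App Term Term deriving Show

type Addr = Int
data Clo  = Clo Term [Addr] deriving Show  -- code plus environment of addresses
type Heap = M.Map Addr Clo

alloc :: Heap -> Clo -> (Addr, Heap)
alloc h c = let a = M.size h in (a, M.insert a c h)

-- Weak call-by-value evaluation of a term in an environment of addresses;
-- returns the address of the result and the final heap.
eval :: Heap -> [Addr] -> Term -> (Addr, Heap)
eval h env (Var i)   = (env !! i, h)              -- a lookup, never a copy
eval h env (Lam b)   = alloc h (Clo (Lam b) env)  -- build a closure
eval h env (App s t) =
  let (af, h1) = eval h  env s                    -- evaluate the function
      (aa, h2) = eval h1 env t                    -- evaluate the argument
  in case M.lookup af h2 of
       Just (Clo (Lam b) env') -> eval h2 (aa : env') b  -- share by address
       _                       -> error "stuck: applying a non-function"

-- Example: (λx. x) (λy. y) yields the address of a closure for λy. y.
demo :: (Addr, Heap)
demo = eval M.empty [] (App (Lam (Var 0)) (Lam (Var 0)))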
Varieties of Data Languages
We establish an Eilenberg-type correspondence for data languages, i.e.
languages over an infinite alphabet. More precisely, we prove that there is a
bijective correspondence between varieties of languages recognized by
orbit-finite nominal monoids and pseudovarieties of such monoids. This is the
first result of this kind for data languages. Our approach makes use of nominal
Stone duality and a recent category theoretic generalization of Birkhoff-type
HSP theorems that we instantiate here for the category of nominal sets. In
addition, we prove an axiomatic characterization of weak pseudovarieties as
those classes of orbit-finite monoids that can be specified by sequences of
nominal equations, which provides a nominal version of a classical theorem of
Eilenberg and Schützenberger.
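In display form, the two results read roughly as follows (the notation is generic, chosen only to show the shape of the statements, and is not quoted from the paper):
\[
  \{\ \text{varieties of data languages}\ \}
  \;\longleftrightarrow\;
  \{\ \text{pseudovarieties of orbit-finite nominal monoids}\ \}
  \quad\text{(a bijective correspondence),}
\]
\[
  \{\ \text{weak pseudovarieties}\ \}
  \;=\;
  \{\ \text{classes of orbit-finite monoids axiomatisable by sequences of nominal equations}\ \}.
\]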
A type-assignment of linear erasure and duplication
We introduce a type-assignment system for the linear λ-calculus that extends second-order IMLL, i.e., intuitionistic multiplicative Linear Logic, by means of logical rules that weaken and contract assumptions, but in a purely linear setting. The system enjoys both a mildly weakened cut-elimination, whose computational cost is cubic, and subject reduction. A translation of the system into second-order IMLL exists such that the derivations of the former can be exponentially more compact than the corresponding derivations in the latter. The system allows for a modular and compact representation of Boolean circuits, directly encoding fan-out nodes by contraction and disposing of garbage by weakening. It can also represent natural numbers with terms very close to standard Church numerals which, moreover, apply to Hereditarily Finite Permutations, i.e., a group structure that exists inside the linear λ-calculus.
Comment: 43 pages (10 pages of technical appendix). The final version will appear in Theoretical Computer Science: https://doi.org/10.1016/j.tcs.2020.05.00
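For reference, the standard Church numerals that the abstract above compares its encoding to are (standard material, not reproduced from the paper):
\[
  \underline{n} \;=\; \lambda f.\,\lambda x.\, f^{\,n}\,x
  \qquad\text{(e.g., } \underline{3} = \lambda f.\,\lambda x.\,f\,(f\,(f\,x))\text{)}.
\]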
Tameness and the power of programs over monoids in DA
The program-over-monoid model of computation originates with Barrington's proof that the model captures the complexity class NC¹. Here we
make progress in understanding the subtleties of the model. First, we identify
a new tameness condition on a class of monoids that entails a natural
characterization of the regular languages recognizable by programs over monoids
from the class. Second, we prove that the class known as DA satisfies tameness and hence that the regular languages recognized by programs over monoids in DA are precisely those recognizable in the classical
sense by morphisms from QDA. Third, we show by contrast that the well-studied class of monoids called J is not tame. Finally, we
exhibit a program-length-based hierarchy within the class of languages
recognized by programs over monoids from DA.
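To fix the model, here is a minimal Haskell sketch of the standard program-over-monoid definition (Barrington-Thérien style; the types and the parity example are illustrative and not taken from the paper): a program is a sequence of instructions, each reading one input position and emitting a monoid element, and it accepts when the product of the emitted elements lies in an accepting set.

-- Programs over a monoid: each instruction reads one fixed input
-- position and maps the letter found there into the monoid; the program
-- accepts iff the product of all emitted elements is in the accepting set.
module MonoidPrograms where

-- A monoid given by its unit and multiplication.
data Monoid' m = Monoid' { unit :: m, mult :: m -> m -> m }

-- One instruction: which input position to read, and how to map the
-- letter found there into the monoid.
data Instr a m = Instr { pos :: Int, out :: a -> m }

-- Run a program on an input word and test membership of the product in
-- the accepting set.
runProgram :: Monoid' m -> [Instr a m] -> (m -> Bool) -> [a] -> Bool
runProgram mo prog accepting w =
  accepting (foldl (mult mo) (unit mo) [ out ins (w !! pos ins) | ins <- prog ])

-- Example: the parity language over {0,1}, using the two-element group
-- Z/2Z as the monoid; one instruction per input position.
parity :: [Int] -> Bool
parity w = runProgram z2 [ Instr i id | i <- [0 .. length w - 1] ] (== 1) w
  where z2 = Monoid' 0 (\x y -> (x + y) `mod` 2)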