4,055 research outputs found
Very Simple Chaitin Machines for Concrete AIT
In 1975, Chaitin introduced his celebrated Omega number, the halting
probability of a universal Chaitin machine, a universal Turing machine with a
prefix-free domain. The Omega number's bits are algorithmically random:
there is no reason the bits should be the way they are, if we define "reason"
to be a computable explanation smaller than the data itself. Since that time,
only two explicit universal Chaitin machines have been proposed, both by
Chaitin himself.
Concrete algorithmic information theory involves the study of particular
universal Turing machines, about which one can state theorems with specific
numerical bounds rather than terms like O(1). We present several new
tiny Chaitin machines (those with a prefix-free domain) suitable for the study
of concrete algorithmic information theory. One of the machines, which we call
Keraia, is a binary encoding of lambda calculus based on a curried lambda
operator. Source code is included in the appendices.
We also give an algorithm for restricting the domain of blank-endmarker
machines to a prefix-free domain over an alphabet that does not include the
endmarker; this allows one to take many universal Turing machines and construct
universal Chaitin machines from them.
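As a toy illustration of the prefix-free requirement (a sketch of ours, not
code from the paper), the Python snippet below shows a self-delimiting
encoding: each input is prefixed with its length in unary, so no valid
codeword is a proper prefix of another, which is exactly the domain property
a Chaitin machine needs.

    def encode(bits):
        """Self-delimiting code: length in unary ('1'*n then '0'), then the bits.
        No codeword is a proper prefix of another, so the code is prefix-free."""
        return "1" * len(bits) + "0" + bits

    def decode(stream):
        """Read one codeword off the front of `stream`; return (bits, rest)."""
        n = stream.index("0")              # the unary length ends at the first '0'
        return stream[n + 1:2 * n + 1], stream[2 * n + 1:]

    # Concatenated codewords decode unambiguously, as prefix-freeness guarantees.
    s = encode("101") + encode("0")        # "1110101" + "100"
    bits, rest = decode(s)                 # bits == "101", rest == "100"
    bits2, _ = decode(rest)                # bits2 == "0"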
Wave-Style Token Machines and Quantum Lambda Calculi
Particle-style token machines are a way to interpret proofs and programs,
when the latter are written following the principles of linear logic. In this
paper, we show that token machines also make sense when the programs at hand
are those of a simple quantum lambda-calculus with implicit qubits. This,
however, requires generalising the concept of a token machine to one in which
more than one particle travels around the term at the same time. The presence of
multiple tokens is intimately related to entanglement and allows us to give a
simple operational semantics to the calculus, coherently with the principles of
quantum computation.
Comment: In Proceedings LINEARITY 2014, arXiv:1502.0441
First Class Call Stacks: Exploring Head Reduction
Weak-head normalization is inconsistent with functional extensionality in the
call-by-name λ-calculus. We explore this problem from a new angle via
the conflict between extensionality and effects. Leveraging ideas from work on
the λ-calculus with control, we derive and justify alternative
operational semantics and a sequence of abstract machines for performing head
reduction. Head reduction avoids the problems with weak-head reduction and
extensionality, while our operational semantics and associated abstract
machines show us how to retain weak-head reduction's ease of implementation.
Comment: In Proceedings WoC 2015, arXiv:1606.0583
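For context, weak-head reduction is what the textbook Krivine machine
computes; the sketch below (ours, using a standard de Bruijn representation,
not the paper's machines) evaluates a closed call-by-name term to weak-head
normal form, the baseline that the paper's head-reduction machines refine.

    # Terms in de Bruijn notation: ("var", i) | ("lam", body) | ("app", f, a)
    def krivine(term):
        """Standard Krivine machine: weak-head normal form of a closed term.
        A state is (term, env, stack); env maps de Bruijn indices to closures."""
        env, stack = [], []
        while True:
            tag = term[0]
            if tag == "app":                 # push the argument as a closure
                stack.append((term[2], env))
                term = term[1]
            elif tag == "lam" and stack:     # bind the top closure, enter the body
                env = [stack.pop()] + env
                term = term[1]
            elif tag == "var":               # fetch the bound closure and enter it
                term, env = env[term[1]]
            else:                            # abstraction with an empty stack: WHNF
                return term, env

    ident = ("lam", ("var", 0))
    print(krivine(("app", ident, ident)))    # (\x. x) (\y. y)  -->  \y. y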
On the enumeration of closures and environments with an application to random generation
Environments and closures are two of the main ingredients of evaluation in
lambda-calculus. A closure is a pair consisting of a lambda-term and an
environment, whereas an environment is a list of lambda-terms assigned to free
variables. In this paper we investigate some dynamic aspects of evaluation in
lambda-calculus considering the quantitative, combinatorial properties of
environments and closures. Focusing on two classes of environments and
closures, namely the so-called plain and closed ones, we consider the problem
of their asymptotic counting and effective random generation. We provide an
asymptotic approximation of the number of both plain environments and closures
of size n. Using the associated generating functions, we construct effective
samplers for both classes of combinatorial structures. Finally, we discuss the
related problem of asymptotic counting and random generation of closed
environments and closures.
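To make the counting problem concrete, here is a small memoized counter (our
size conventions, chosen for illustration and not necessarily the paper's): a
de Bruijn index i has size i+1, abstractions and applications add 1, an
environment's size is the sum of its entries' sizes, and a closure's size is
the sum of its two components' sizes.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def terms(n):
        """Number of plain lambda-terms of size n (de Bruijn notation)."""
        if n <= 0:
            return 0
        indices = 1                              # the unique index of size n
        lams = terms(n - 1)                      # lam t with |t| = n - 1
        apps = sum(terms(k) * terms(n - 1 - k) for k in range(1, n - 1))
        return indices + lams + apps

    @lru_cache(maxsize=None)
    def envs(n):
        """Number of environments (lists of terms) of total size n."""
        if n == 0:
            return 1                             # the empty environment
        return sum(terms(k) * envs(n - k) for k in range(1, n + 1))

    def closures(n):
        """Number of closures (term, environment) of total size n."""
        return sum(terms(k) * envs(n - k) for k in range(1, n + 1))

    print([terms(n) for n in range(1, 8)])       # [1, 2, 4, 9, 22, 57, 154]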
Beta Reduction is Invariant, Indeed (Long Version)
Slot and van Emde Boas' weak invariance thesis states that reasonable
machines can simulate each other within a polynomial overhead in time. Is the
λ-calculus a reasonable machine? Is there a way to measure the
computational complexity of a λ-term? This paper presents the first
complete positive answer to this long-standing problem. Moreover, our answer is
completely machine-independent and based on a standard notion in the theory
of the λ-calculus: the length of a leftmost-outermost derivation to normal
form is an invariant cost model. Such a theorem cannot be proved by directly
relating the λ-calculus with Turing machines or random access machines,
because of the size explosion problem: there are terms that in a linear number
of steps produce an exponentially long output. The first step towards the
solution is to shift to a notion of evaluation for which the length and the
size of the output are linearly related. This is done by adopting the linear
substitution calculus (LSC), a calculus of explicit substitutions modelled
after linear logic and proof-nets and admitting a decomposition of
leftmost-outermost derivations with the desired property. Thus, the LSC is
invariant with respect to, say, random access machines. The second step is to
show that the LSC is invariant with respect to the λ-calculus. The size
explosion problem seems to imply that this is not possible: having the same
notions of normal form, evaluation in the LSC is exponentially longer than in
the λ-calculus. We solve this impasse by introducing a new form of
shared normal form and shared reduction, deemed useful. Useful evaluation
avoids those steps that only unshare the output without contributing to
β-redexes, i.e., the steps that cause the blow-up in size.
Comment: 29 pages
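The size explosion phenomenon is easy to exhibit. A classic exploding family
(a standard example, not code from the paper) is s_1 = lam x. x x and
s_{n+1} = lam x. s_n (x x): applied to a variable, s_n normalizes in n
leftmost-outermost beta steps to a complete binary tree of size 2^(n+1) - 1.

    # Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
    def subst(t, x, s):
        """Naive substitution; safe here because all binders are distinct."""
        if t[0] == "var":
            return s if t[1] == x else t
        if t[0] == "lam":
            return ("lam", t[1], subst(t[2], x, s))
        return ("app", subst(t[1], x, s), subst(t[2], x, s))

    def step(t):
        """One leftmost-outermost beta step, or None if no redex is found."""
        if t[0] == "app" and t[1][0] == "lam":
            return subst(t[1][2], t[1][1], t[2])
        if t[0] == "app":
            reduced = step(t[1])
            return ("app", reduced, t[2]) if reduced else None
        return None

    def size(t):
        if t[0] == "var":
            return 1
        if t[0] == "lam":
            return 1 + size(t[2])
        return 1 + size(t[1]) + size(t[2])

    def s(n):
        """s_1 = lam x. x x ; s_{n+1} = lam x. s_n (x x), distinct binders."""
        x = f"x{n}"
        xx = ("app", ("var", x), ("var", x))
        return ("lam", x, xx) if n == 1 else ("lam", x, ("app", s(n - 1), xx))

    t, steps = ("app", s(10), ("var", "a")), 0
    while (nxt := step(t)) is not None:
        t, steps = nxt, steps + 1
    print(steps, size(t))                    # 10 steps, size 2**11 - 1 = 2047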
An Invariant Cost Model for the Lambda Calculus
We define a new cost model for the call-by-value lambda-calculus satisfying
the invariance thesis. That is, under the proposed cost model, Turing machines
and the call-by-value lambda-calculus can simulate each other within a
polynomial time overhead. The model only relies on combinatorial properties of
usual beta-reduction, without any reference to a specific machine or evaluator.
In particular, the cost of a single beta reduction is proportional to the
difference between the size of the redex and the size of the reduct. In this
way, the total cost of normalizing a lambda term will take into account the
size of all intermediate results (as well as the number of steps to normal
form).
Comment: 19 pages
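A minimal sketch of the measure (our encoding; the paper states the cost of a
step is proportional to the size difference between redex and reduct, and we
charge the absolute difference here):

    # Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
    def size(t):
        if t[0] == "var":
            return 1
        return 1 + (size(t[2]) if t[0] == "lam" else size(t[1]) + size(t[2]))

    def subst(t, x, s):
        """Naive substitution (fine here: no variable capture can occur)."""
        if t[0] == "var":
            return s if t[1] == x else t
        if t[0] == "lam":
            return ("lam", t[1], subst(t[2], x, s))
        return ("app", subst(t[1], x, s), subst(t[2], x, s))

    def beta_cost(redex):
        """Charge |size(redex) - size(reduct)| for one beta step."""
        _, f, a = redex                      # redex = ("app", ("lam", x, b), a)
        reduct = subst(f[2], f[1], a)
        return abs(size(redex) - size(reduct)), reduct

    # Duplicating redex: (\x. x x) (y z)  -->  (y z) (y z), sizes 8 -> 7
    dup = ("app", ("lam", "x", ("app", ("var", "x"), ("var", "x"))),
           ("app", ("var", "y"), ("var", "z")))
    print(beta_cost(dup)[0])                 # 1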
qPCF: higher order languages and quantum circuits
qPCF is a paradigmatic quantum programming language that extends PCF with
quantum circuits and a quantum co-processor. Quantum circuits are treated as
classical data that can be duplicated and manipulated in flexible ways by means
of a dependent type system. The co-processor is essentially a standard QRAM
device, although we avoid permanently storing quantum states between two
co-processor calls. Despite its quantum features, qPCF retains the classic
programming approach of PCF. We introduce qPCF syntax, typing rules, and its
operational semantics. We prove fundamental properties of the system, such as
Preservation and Progress Theorems. Moreover, we provide some higher-order
examples of circuit encoding.
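As a rough analogy for this architecture (a toy Python sketch of ours, not
qPCF itself): circuits are ordinary classical data that can be freely copied,
while each co-processor call prepares a fresh state, runs the circuit,
measures, and discards the state, so nothing quantum survives between calls.

    import math, random

    GATES = {                                # single-qubit gates as 2x2 matrices
        "H": [[1 / math.sqrt(2), 1 / math.sqrt(2)],
              [1 / math.sqrt(2), -1 / math.sqrt(2)]],
        "X": [[0, 1], [1, 0]],
    }

    def run(circuit):
        """One co-processor call: fresh |0>, apply gates, measure, discard."""
        state = [1.0, 0.0]                   # amplitudes of |0> and |1>
        for g in circuit:
            m = GATES[g]
            state = [m[0][0] * state[0] + m[0][1] * state[1],
                     m[1][0] * state[0] + m[1][1] * state[1]]
        return 1 if random.random() < abs(state[1]) ** 2 else 0

    coin = ["H"]                             # a circuit is plain classical data,
    copies = [coin, list(coin)]              # so duplicating it is unproblematic
    print([run(c) for c in copies])          # two independent fair coin flips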
Kolmogorov Complexity in perspective. Part II: Classification, Information Processing and Duality
We survey diverse approaches to the notion of information: from Shannon
entropy to Kolmogorov complexity. Two of the main applications of Kolmogorov
complexity are presented: randomness and classification. The survey is divided
into two parts, published in the same volume. Part II is dedicated to the
relation between logic and information systems, within the scope of Kolmogorov
algorithmic information theory. We present a recent application of Kolmogorov
complexity: classification using compression, an idea given a provocative
implementation by authors such as Bennett, Vitanyi and Cilibrasi. This stresses
how Kolmogorov complexity, besides being a foundation for randomness, is also
related to classification. Another approach to classification is also
considered: the so-called "Google classification". It uses another original and
attractive idea which is connected to the classification using compression and
to Kolmogorov complexity from a conceptual point of view. We present and unify
these different approaches to classification in terms of Bottom-Up versus
Top-Down operational modes, whose fundamental principles and underlying
duality we point out. We look at the way these two dual modes are used in
different approaches to information systems, particularly the relational model
for databases introduced by Codd in the 1970s. This allows us to point out diverse
forms of a fundamental duality. These operational modes are also reinterpreted
in the context of the comprehension schema of axiomatic set theory ZF. This
leads us to elaborate on how Kolmogorov complexity is linked to intensionality,
abstraction, classification and information systems.
Comment: 43 pages
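Classification by compression is easy to experiment with by letting a real
compressor stand in for (uncomputable) Kolmogorov complexity. The sketch below
implements the normalized compression distance of Cilibrasi and Vitanyi,
NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), with zlib as the
compressor C.

    import zlib

    def C(data):
        """Compressed length: a computable stand-in for Kolmogorov complexity."""
        return len(zlib.compress(data, 9))

    def ncd(x, y):
        """Normalized compression distance (Cilibrasi & Vitanyi)."""
        cx, cy = C(x), C(y)
        return (C(x + y) - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog " * 20
    b = b"the quick brown fox leaps over the lazy cat " * 20
    c = b"import zlib; compression approximates complexity " * 20
    print(ncd(a, b))                         # small: similar texts share structure
    print(ncd(a, c))                         # larger: unrelated texts mix poorly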