395 research outputs found

    Very Simple Chaitin Machines for Concrete AIT

    In 1975, Chaitin introduced his celebrated Omega number, the halting probability of a universal Chaitin machine, a universal Turing machine with a prefix-free domain. The bits of Omega are algorithmically random: there is no reason the bits should be the way they are, if we take a "reason" to be a computable explanation smaller than the data itself. Since that time, only two explicit universal Chaitin machines have been proposed, both by Chaitin himself. Concrete algorithmic information theory involves the study of particular universal Turing machines, about which one can state theorems with specific numerical bounds rather than terms like O(1). We present several new tiny Chaitin machines (machines with a prefix-free domain) suitable for the study of concrete algorithmic information theory. One of the machines, which we call Keraia, is a binary encoding of the lambda calculus based on a curried lambda operator. Source code is included in the appendices. We also give an algorithm for restricting the domain of blank-endmarker machines to a prefix-free domain over an alphabet that does not include the endmarker; this allows one to take many universal Turing machines and construct universal Chaitin machines from them.
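    The defining property of a Chaitin machine above is that its domain is prefix-free: no halting program is a proper prefix of another. As a rough, hypothetical illustration (the bit strings below are invented for this sketch, not taken from the paper), a few lines of Haskell can check the prefix-free property of a finite program set and compute the corresponding lower bound on the halting probability Omega:

        import Data.List (isPrefixOf)

        -- A program for the machine is a finite bit string.
        type Program = [Bool]

        -- The domain is prefix-free if no program is a proper prefix of another.
        prefixFree :: [Program] -> Bool
        prefixFree ps = and [ not (p `isPrefixOf` q) | p <- ps, q <- ps, p /= q ]

        -- Summing 2^(-|p|) over a finite set of halting programs gives a lower
        -- bound on the halting probability Omega of a prefix-free machine.
        omegaLowerBound :: [Program] -> Double
        omegaLowerBound ps = sum [ 0.5 ^ length p | p <- ps ]

        main :: IO ()
        main = do
          let halting = [[True], [False, True], [False, False, True]]  -- toy data
          print (prefixFree halting)       -- True
          print (omegaLowerBound halting)  -- 0.875

    Kraft's inequality guarantees that this sum never exceeds 1 for a prefix-free set, which is what makes Omega a genuine probability.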

    High-level signatures and initial semantics

    We present a device for specifying and reasoning about syntax for datatypes, programming languages, and logic calculi. More precisely, we study a notion of signature for specifying syntactic constructions. In the spirit of Initial Semantics, we define the syntax generated by a signature to be the initial object, if it exists, in a suitable category of models. In our framework, the existence of an associated syntax to a signature is not automatically guaranteed. We identify, via the notion of presentation of a signature, a large class of signatures that do generate a syntax. Our (presentable) signatures subsume classical algebraic signatures (i.e., signatures for languages with variable binding, such as the pure lambda calculus) and extend them to include several other significant examples of syntactic constructions. One key feature of our notions of signature, syntax, and presentation is that they are highly compositional, in the sense that complex examples can be obtained by assembling simpler ones. Moreover, through the Initial Semantics approach, our framework provides, beyond the desired algebra of terms, a well-behaved substitution and the induction and recursion principles associated to the syntax. This paper builds upon ideas from a previous attempt by Hirschowitz-Maggesi, which, in turn, was directly inspired by some earlier work of Ghani-Uustalu-Hamana and Matthes-Uustalu. The main results presented in the paper are computer-checked within the UniMath system.
    Comment: v2: extended version of the article as published in CSL 2018 (http://dx.doi.org/10.4230/LIPIcs.CSL.2018.4); list of changes given in Section 1.5 of the paper; v3: small corrections throughout the paper, no major changes.
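    As a point of reference for the classical algebraic-signature case mentioned above, the pure lambda calculus can be presented in Haskell as a nested datatype whose monad structure supplies exactly the well-behaved, capture-avoiding substitution that initial semantics promises. The sketch below follows the standard Bird-Paterson/Altenkirch-Reus encoding; it is only an illustration of the idea, not the paper's UniMath development:

        {-# LANGUAGE DeriveFunctor #-}

        -- Pure lambda terms over a type `a` of free variables; going under a
        -- binder extends the variable type by one (Nothing is the bound variable).
        data Term a
          = Var a
          | App (Term a) (Term a)
          | Lam (Term (Maybe a))
          deriving (Functor, Show)

        -- Weaken a substitution so it can be pushed under one binder.
        liftSub :: (a -> Term b) -> Maybe a -> Term (Maybe b)
        liftSub _ Nothing  = Var Nothing
        liftSub g (Just x) = fmap Just (g x)

        -- Capture-avoiding simultaneous substitution: the monadic bind of Term.
        subst :: Term a -> (a -> Term b) -> Term b
        subst (Var x)   f = f x
        subst (App t u) f = App (subst t f) (subst u f)
        subst (Lam t)   f = Lam (subst t (liftSub f))

        instance Applicative Term where
          pure = Var
          tf <*> tx = tf `subst` \f -> fmap f tx

        instance Monad Term where
          (>>=) = subst

        -- One beta step: (\x. body) arg  ~>  body[arg/x]
        betaStep :: Term a -> Maybe (Term a)
        betaStep (App (Lam body) arg) = Just (body `subst` maybe arg Var)
        betaStep _                    = Nothing

    The monad laws for Term correspond to the usual substitution lemmas, and the recursion principle is the fold over this datatype.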

    Multi-dimensional Type Theory: Rules, Categories, and Combinators for Syntax and Semantics

    We investigate the possibility of modelling the syntax and semantics of natural language by constraints, or rules, imposed by the multi-dimensional type theory Nabla. The only multiplicity we explicitly consider is two, namely one dimension for the syntax and one dimension for the semantics, but the general perspective is important. For example, issues of pragmatics could be handled as additional dimensions. One of the main problems addressed is the rather complicated repertoire of operations that exists besides the notion of categories in traditional Montague grammar. For the syntax we use a categorial grammar along the lines of Lambek. For the semantics we use so-called lexical and logical combinators inspired by work in natural logic. Nabla provides a concise interpretation and a sequent calculus as the basis for implementations.
    Comment: 20 pages
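    To make the categorial-grammar side concrete, here is a generic Lambek-style sketch in Haskell of syntactic categories with forward and backward application; the category names and the toy lexicon are invented for illustration and are not the Nabla rules themselves:

        -- Lambek-style categories: basic categories plus the two slashes.
        data Cat
          = NP | N | S                 -- noun phrase, noun, sentence
          | Cat :/ Cat                 -- a :/ b combines with a b on its right to give a
          | Cat :\ Cat                 -- b :\ a combines with a b on its left to give a
          deriving (Eq, Show)

        infixr 5 :/
        infixr 5 :\

        -- The two core reduction rules: forward and backward application.
        apply :: Cat -> Cat -> Maybe Cat
        apply (a :/ b) c | b == c = Just a       -- A/B, B  =>  A
        apply c (b :\ a) | b == c = Just a       -- B, B\A  =>  A
        apply _ _                 = Nothing

        -- A toy lexicon in the Montague style (purely illustrative).
        lexicon :: String -> Maybe Cat
        lexicon "Alice"  = Just NP
        lexicon "sleeps" = Just (NP :\ S)
        lexicon "every"  = Just (NP :/ N)
        lexicon "cat"    = Just N
        lexicon _        = Nothing

        main :: IO ()
        main = do
          print (apply NP (NP :\ S))     -- Just S  : "Alice sleeps"
          print (apply (NP :/ N) N)      -- Just NP : "every cat"
          print (apply N S)              -- Nothing : no rule applies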

    Foundations for structured programming with GADTs

    GADTs are at the cutting edge of functional programming and are becoming more widely used every day. Nevertheless, the semantic foundations underlying GADTs are not well understood. In this paper we solve this problem by showing that the standard theory of data types as carriers of initial algebras of functors can be extended from algebraic and nested data types to GADTs. We then use this observation to derive an initial algebra semantics for GADTs, thus ensuring that all of the accumulated knowledge about initial algebras can be brought to bear on them. Next, we use our initial algebra semantics for GADTs to derive expressive and principled tools — analogous to the well-known and widely-used ones for algebraic and nested data types — for reasoning about, programming with, and improving the performance of programs involving GADTs; we christen such a collection of tools for a GADT an initial algebra package. Along the way, we give a constructive demonstration that every GADT can be reduced to one which uses only the equality GADT and existential quantification. Although other such reductions exist in the literature, ours is entirely local, is independent of any particular syntactic presentation of GADTs, and can be implemented in the host language, rather than existing solely as a metatheoretical artifact. The main technical ideas underlying our approach are (i) to modify the notion of a higher-order functor so that GADTs can be seen as carriers of initial algebras of higher-order functors, and (ii) to use left Kan extensions to trade arbitrary GADTs for simpler-but-equivalent ones for which initial algebra semantics can be derived.
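    The reduction mentioned at the end of the abstract, from an arbitrary GADT to one built from only the equality GADT and existential quantification, can be sketched in Haskell. The expression datatype below is a standard textbook example chosen for illustration; it is not taken from the paper:

        {-# LANGUAGE GADTs, ExistentialQuantification #-}

        -- The equality GADT: the only way to build `Equal a b` is when a equals b.
        data Equal a b where
          Refl :: Equal a a

        -- A familiar GADT: well-typed expressions.
        data Expr a where
          IntLit  :: Int  -> Expr Int
          BoolLit :: Bool -> Expr Bool
          If      :: Expr Bool -> Expr a -> Expr a -> Expr a

        -- The same datatype written with only existentials and the equality GADT:
        -- each constructor packs its index equalities explicitly.
        data Expr' a
          = EInt  (Equal a Int)  Int
          | EBool (Equal a Bool) Bool
          | forall b. EIf (Equal a b) (Expr' Bool) (Expr' b) (Expr' b)

        -- Translating the GADT into the encoded form is purely local per constructor.
        toPrime :: Expr a -> Expr' a
        toPrime (IntLit n)  = EInt  Refl n
        toPrime (BoolLit b) = EBool Refl b
        toPrime (If c t e)  = EIf   Refl (toPrime c) (toPrime t) (toPrime e)

        -- Matching on Refl recovers the type refinement a GADT match would give.
        eval' :: Expr' a -> a
        eval' (EInt  Refl n)     = n
        eval' (EBool Refl b)     = b
        eval' (EIf   Refl c t e) = if eval' c then eval' t else eval' e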

    Modular Normalization with Types

    With the increasing use of software in today’s digital world, software is becoming more and more complex and the cost of developing and maintaining software has skyrocketed. It has become pressing to develop software using effective tools that reduce this cost. Programming language research aims to develop such tools using mathematically rigorous foundations. A recurring and central concept in programming language research is normalization: the process of transforming a complex expression in a language to a canonical form while preserving its meaning. Normalization has compelling benefits in theory and practice, but is extremely difficult to achieve. Several program transformations used, for instance, to optimise programs, prove properties of languages, and check program equivalence are in fact instances of normalization, but they are seldom viewed as such. Viewed through the lens of current methods, normalization cannot be broken into sub-problems and solved independently; that is, it lacks modularity. To make matters worse, such methods rely excessively on the syntax of the language, making the resulting normalization algorithms brittle and sensitive to changes in the syntax. When the syntax of the language evolves due to modification or extension, as it almost always does in practice, the normalization algorithm may need to be revisited entirely. To circumvent these problems, normalization is currently either abandoned entirely or concrete instances of normalization are achieved using ad hoc means specific to a particular language. Continuing this trend in programming language research poses the risk of building on a weak foundation where languages either lack fundamental properties that follow from normalization or several concrete instances end up repeated in an ad hoc manner that lacks reusability. This thesis advocates for the use of type-directed Normalization by Evaluation (NbE) to develop normalization algorithms. NbE is a technique that provides an opportunity for a modular implementation of normalization algorithms by allowing us to disentangle the syntax of a language from its semantics. Types further this opportunity by allowing us to dissect a language into isolated fragments, such as functions and products, with an individual specification of syntax and semantics. To illustrate type-directed NbE in context, we develop NbE algorithms and show their applicability for typed programming language calculi in three different domains (modal types, static information-flow control and categorical combinators) and for a family of embedded domain-specific languages in Haskell.
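    The thesis uses type-directed NbE; as a generic illustration of the basic recipe behind NbE (evaluate syntax into a semantic domain borrowed from the host, then read normal forms back), here is a minimal untyped sketch for the lambda calculus with de Bruijn indices. It is textbook-style code, not code from the thesis:

        -- Terms with de Bruijn indices.
        data Tm = Var Int | Lam Tm | App Tm Tm
          deriving Show

        -- Semantic domain: functions are host-language functions ("evaluation"),
        -- stuck terms are neutral values built from free variables ("reflection").
        data Val = VLam (Val -> Val) | VNe Ne
        data Ne  = NVar Int | NApp Ne Val

        eval :: [Val] -> Tm -> Val
        eval env (Var i)   = env !! i
        eval env (Lam t)   = VLam (\v -> eval (v : env) t)
        eval env (App t u) =
          case eval env t of
            VLam f -> f (eval env u)
            VNe n  -> VNe (NApp n (eval env u))

        -- Read a semantic value back into beta-normal syntax ("reification").
        -- The Int counts binders passed so far; NVar stores de Bruijn levels.
        quote :: Int -> Val -> Tm
        quote k (VLam f) = Lam (quote (k + 1) (f (VNe (NVar k))))
        quote k (VNe n)  = quoteNe k n

        quoteNe :: Int -> Ne -> Tm
        quoteNe k (NVar lvl)  = Var (k - lvl - 1)        -- level -> index
        quoteNe k (NApp n v)  = App (quoteNe k n) (quote k v)

        normalize :: Tm -> Tm
        normalize = quote 0 . eval []

        main :: IO ()
        main = print (normalize (App (Lam (Var 0)) (Lam (Var 0))))
        -- Lam (Var 0): the identity applied to the identity normalizes to the identity

    The modularity argument of the thesis is that the syntax (Tm) and the semantic domain (Val) are specified separately, so language fragments can be added to each side independently.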

    Be My Guest: Normalizing and Compiling Programs using a Host Language

    In programming language research, normalization is a process of fundamental importance to the theory of computing and reasoning about programs. In practice, on the other hand, compilation is a process that transforms programs in a language to machine code, and thus makes the programming language a usable one. In this thesis, we investigate means of normalizing and compiling programs in a language using another language as the "host". Leveraging a host to work with programs of a "guest" language enables reuse of the host's features that would otherwise be strenuous to develop. The specific tools of interest are Normalization by Evaluation and Embedded Domain-Specific Languages, both of which rely on a host language for their purposes. These tools are applied to solve problems in three different domains: to show that exponentials (or closures) can be eliminated from a categorical combinatory calculus, to propose a new proof technique based on normalization for showing noninterference, and to enable the programming of resource-constrained IoT devices from Haskell.
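    As a rough sketch of the host/guest idea (not code from the thesis, and far simpler than its IoT and information-flow case studies), the snippet below embeds a tiny guest expression language in Haskell, reuses the host for evaluation and a small normalization pass (constant folding), and compiles guest programs to a toy stack machine:

        -- A tiny "guest" language embedded in Haskell, the host.
        data Exp
          = Lit Int
          | Add Exp Exp
          | Mul Exp Exp
          deriving Show

        -- Reusing the host: evaluation is just a Haskell function.
        eval :: Exp -> Int
        eval (Lit n)   = n
        eval (Add a b) = eval a + eval b
        eval (Mul a b) = eval a * eval b

        -- A simple normalization pass written in the host: constant folding.
        norm :: Exp -> Exp
        norm (Add a b) = cfold Add (+) (norm a) (norm b)
        norm (Mul a b) = cfold Mul (*) (norm a) (norm b)
        norm e         = e

        -- Combine two already-normalized operands, folding constants when possible.
        cfold :: (Exp -> Exp -> Exp) -> (Int -> Int -> Int) -> Exp -> Exp -> Exp
        cfold _ op (Lit m) (Lit n) = Lit (m `op` n)
        cfold c _  a       b       = c a b

        -- "Compilation": translate guest programs to code for a toy stack machine.
        data Instr = Push Int | IAdd | IMul
          deriving Show

        compile :: Exp -> [Instr]
        compile (Lit n)   = [Push n]
        compile (Add a b) = compile a ++ compile b ++ [IAdd]
        compile (Mul a b) = compile a ++ compile b ++ [IMul]

        main :: IO ()
        main = do
          let prog = Add (Lit 1) (Mul (Lit 2) (Lit 3))
          print (eval prog)      -- 7
          print (norm prog)      -- Lit 7
          print (compile prog)   -- [Push 1,Push 2,Push 3,IMul,IAdd]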