Programs as Data Structures in λSF-Calculus
© 2016 The Author(s) Lambda-SF-calculus can represent programs as closed normal forms. In turn, all closed normal forms are data structures, in the sense that their internal structure is accessible through queries defined in the calculus, even to the point of constructing the Gödel number of a program. Thus, program analysis and optimisation can be performed entirely within the calculus, without requiring any meta-level process of quotation to produce a data structure. Lambda-SF-calculus is a confluent, applicative rewriting system derived from lambda-calculus and the combinatory SF-calculus. Its superior expressive power relative to lambda-calculus is demonstrated by the ability to decide if two programs are syntactically equal, or to determine if a program uses its input. Indeed, there is no homomorphism of applicative rewriting systems from lambda-SF-calculus to lambda-calculus. Program analysis and optimisation can be illustrated by considering the conversion of a program to combinators. Traditionally, a program p is interpreted using fixpoint constructions that do not have normal forms, but combinatory techniques can be used to block reduction until the program arguments are given. That is, p is interpreted by a closed normal form M. Then factorisation (by F) adapts the traditional account of lambda-abstraction in combinatory logic to convert M to a combinator N that is equivalent to M in the following two senses. First, N is extensionally equivalent to M, where extensional equivalence is defined in terms of eta-reduction. Second, the conversion is an intensional equivalence in that it does not lose any information, and so can be reversed by another definable conversion. Further, the standard optimisations of the conversion process are all definable within lambda-SF-calculus, even those involving free variable analysis. Proofs of all theorems in the paper have been verified using the Coq theorem prover.
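To make the role of the factorisation operator concrete, here is a minimal executable sketch of SF-calculus reduction in the style of Jay and Given-Wilson: S is the usual substitution combinator, while F inspects whether its first argument is an atom or a compound and decomposes compounds into their parts. The Term encoding and all helper names are assumptions of this sketch, not the paper's notation.

```python
# A head operator ('S' or 'F') applied to zero or more arguments.
class Term:
    def __init__(self, head, args=()):
        self.head, self.args = head, tuple(args)

    def __call__(self, *more):                  # t(a, b) builds the application t a b
        return Term(self.head, self.args + more)

    def __eq__(self, other):
        return (self.head, self.args) == (other.head, other.args)

    def __repr__(self):
        return self.head + "".join(f" ({a!r})" for a in self.args)

S, F = Term("S"), Term("F")

def step(t):
    """One reduction step, or None if t is in normal form."""
    args = t.args
    if t.head == "S" and len(args) >= 3:        # S x y z -> x z (y z)
        x, y, z = args[:3]
        return x(z, y(z))(*args[3:])
    if t.head == "F" and len(args) >= 3:
        p, m, n = args[:3]
        if len(p.args) == 0:                    # F O M N -> M    (O an atom)
            return m(*args[3:])
        if len(p.args) < 3:                     # F (P Q) M N -> N P Q
            u, v = Term(p.head, p.args[:-1]), p.args[-1]
            return n(u, v)(*args[3:])
        # p is itself a redex: fall through and reduce arguments below
    for i, a in enumerate(args):                # reduce the leftmost reducible argument
        a2 = step(a)
        if a2 is not None:
            return Term(t.head, args[:i] + (a2,) + args[i + 1:])
    return None

def normalize(t, limit=10000):
    """Reduce to normal form (all examples here terminate)."""
    for _ in range(limit):
        t2 = step(t)
        if t2 is None:
            return t
        t = t2
    raise RuntimeError("reduction limit exceeded")

K = F(F)            # K x y = F F x y -> x, so F F behaves as the K combinator
I = S(K, K)         # the usual identity combinator
assert normalize(I(S)) == S

# Factorisation in action: F decomposes the compound S F into its parts
# and hands them to the continuation K, which here returns the head S.
assert normalize(F(S(F), S, K)) == S
```

The second assertion is the point of the calculus: a closed normal form is interrogated purely by reduction, with no meta-level quotation step.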
Deduction in TIL: from simple to ramified hierarchy of types
Tichý’s Transparent Intensional Logic (TIL) is an overarching logical
framework apt for the analysis of all sorts of discourse, whether colloquial, scientific,
mathematical or logical. The theory is a procedural (as opposed to denotational) one, according
to which the meaning of an expression is an abstract, extra-linguistic procedure
detailing what operations to apply to what procedural constituents to arrive at the product
(if any) of the procedure that is the object denoted by the expression. Such procedures
are rigorously defined as TIL constructions. Though TIL's analytical potential is
very large, deduction in TIL has been rather neglected. Tichý defined a sequent calculus
for pre-1988 TIL, that is, TIL based on the simple theory of types. Since then, no
other attempt to define a proof calculus for TIL has been presented. The goal of this
paper is to propose a generalization and adjustment of Tichý’s calculus to TIL 2010.
First I briefly recapitulate the rules of simple-typed calculus as presented by Tichý.
Then I propose adjustments of the calculus so that it is applicable to hyperintensions
within the ramified hierarchy of types. TIL operates with a single procedural semantics
for all kinds of logical-semantic context, be it extensional, intensional or hyperintensional.
I show that operating in a hyperintensional context is far from being technically
trivial. Yet it is feasible. To this end we introduce a substitution method that
operates on hyperintensions. It makes use of a four-place substitution function (called
Sub) defined over hyperintensions.
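The key technical difficulty the abstract points at is substituting into a hyperintensional context without disturbing binding structure. The toy below sketches a capture-avoiding substitution over a miniature construction language; the constructors (Var, Comp, Clos), the function name `sub`, and the whole encoding are illustrative assumptions loosely inspired by TIL's Sub, not the paper's actual four-place definition.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Var:                 # a variable
    name: str

@dataclass(frozen=True)
class Comp:                # composition: apply f to args
    f: object
    args: Tuple

@dataclass(frozen=True)
class Clos:                # closure: abstraction binding one variable
    param: str
    body: object

def free_vars(c):
    if isinstance(c, Var):
        return {c.name}
    if isinstance(c, Comp):
        return free_vars(c.f).union(*[free_vars(a) for a in c.args])
    if isinstance(c, Clos):
        return free_vars(c.body) - {c.param}
    return set()

def fresh(avoid):
    i = 0
    while f"v{i}" in avoid:
        i += 1
    return f"v{i}"

def sub(what, var, c):
    """Substitute construction `what` for variable `var` in `c`,
    renaming bound variables to avoid capture."""
    if isinstance(c, Var):
        return what if c.name == var else c
    if isinstance(c, Comp):
        return Comp(sub(what, var, c.f),
                    tuple(sub(what, var, a) for a in c.args))
    if isinstance(c, Clos):
        if c.param == var:
            return c                        # var is bound here; nothing to do
        if c.param in free_vars(what):      # rename the binder to avoid capture
            new = fresh(free_vars(what) | free_vars(c.body))
            body = sub(Var(new), c.param, c.body)
            return Clos(new, sub(what, var, body))
        return Clos(c.param, sub(what, var, c.body))
    return c

# Substituting x for y under a binder for x forces renaming:
c = Clos("x", Comp(Var("f"), (Var("x"), Var("y"))))
r = sub(Var("x"), "y", c)
assert r.param != "x" and free_vars(r) == {"f", "x"}
```

Even this stripped-down version shows why the operation is "far from technically trivial": substitution must act on the displayed structure of a construction, not merely on its value.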
On the Semantics of Intensionality and Intensional Recursion
Intensionality is a phenomenon that occurs in logic and computation. In the
most general sense, a function is intensional if it operates at a level finer
than (extensional) equality. This is a familiar setting for computer
scientists, who often study different programs or processes that are
interchangeable, i.e. extensionally equal, even though they are not implemented
in the same way, so intensionally distinct. Concomitant with intensionality is
the phenomenon of intensional recursion, which refers to the ability of a
program to have access to its own code. In computability theory, intensional
recursion is enabled by Kleene's Second Recursion Theorem. This thesis is
concerned with the crafting of a logical toolkit through which these phenomena
can be studied. Our main contribution is a framework in which mathematical and
computational constructions can be considered either extensionally, i.e. as
abstract values, or intensionally, i.e. as fine-grained descriptions of their
construction. Once this is achieved, it may be used to analyse intensional
recursion.
Comment: DPhil thesis, Department of Computer Science & St John's College, University of Oxford.
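Kleene's Second Recursion Theorem, which the abstract names as the source of intensional recursion, can be made concrete by its classical diagonal construction: for any computable transformation of program texts there is a program that behaves as if handed its own source. The helper name `selfapply` and the particular template below are illustrative; the theorem guarantees such fixed points in general.

```python
import io
import contextlib

def selfapply(template):
    """Diagonalisation: substitute a quoted copy of `template` into itself."""
    return template % repr(template)

# A program text that prints its own source (a quine), obtained as the
# diagonal fixed point of a printing template:
template = 'template = %s\nprint(template %% repr(template))'
quine = selfapply(template)

# Running the program reproduces exactly its own code:
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)
assert buf.getvalue() == quine + "\n"
```

Printing is the least interesting choice: replacing `print` with any analysis of `template` yields a program that computes with its own code, which is precisely intensional recursion.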
Inductive Definition and Domain Theoretic Properties of Fully Abstract Models for PCF and PCF^+
A construction of fully abstract typed models for PCF and PCF^+ (i.e., PCF +
"parallel conditional function"), respectively, is presented. It is based on
general notions of sequential computational strategies and wittingly consistent
non-deterministic strategies introduced by the author in the seventies.
Although these notions of strategies are old, the definition of the fully
abstract models is new, in that it is given level-by-level in the finite type
hierarchy. To prove full abstraction and non-dcpo domain theoretic properties
of these models, a theory of computational strategies is developed. This is
also an alternative and, in a sense, an analogue to the later game strategy
semantics approaches of Abramsky, Jagadeesan, and Malacaria; Hyland and Ong;
and Nickau. In both cases of PCF and PCF^+ there are definable universal
(surjective) functionals from numerical functions to any given type,
respectively, which also makes each of these models unique up to isomorphism.
Although such models are non-omega-complete and therefore not continuous in the
traditional terminology, they are also proved to be sequentially complete (a
weakened form of omega-completeness), "naturally" continuous (with respect to
existing directed "pointwise", or "natural" lubs) and also "naturally"
omega-algebraic and "naturally" bounded complete -- appropriate generalisations
of the ordinary notions of domain theory to the case of non-dcpos.
Comment: 50 pages.
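The "parallel conditional" separating PCF^+ from PCF can be illustrated denotationally with divergence modelled as a bottom value. This is a sketch under assumed names (`BOT`, `seq_cond`, `par_cond`), not the paper's formalism: the sequential conditional must inspect its test first, while the parallel one can still answer when the test diverges but both branches agree.

```python
BOT = None   # stands for a diverging (undefined) computation

def seq_cond(b, x, y):
    """Sequential conditional: inspects b first, so it is undefined
    whenever b is undefined."""
    if b is BOT:
        return BOT
    return x if b else y

def par_cond(b, x, y):
    """Parallel conditional: conceptually evaluates all three arguments
    in parallel, so it is defined even for an undefined test, provided
    both branches agree on a defined value."""
    if b is not BOT:
        return x if b else y
    if x is not BOT and x == y:
        return x
    return BOT

# The extra strength of PCF^+: a defined output from an undefined test,
# which no sequential strategy can produce.
assert seq_cond(BOT, 1, 1) is BOT
assert par_cond(BOT, 1, 1) == 1
```

In sequential models this extra behaviour is exactly what fails to be definable, which is why full abstraction must be established separately for PCF and PCF^+.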
Elaborator reflection : extending Idris in Idris
Many programming languages and proof assistants are defined by elaboration from a high-level language with a great deal of implicit information to a highly explicit core language. In many advanced languages, these elaboration facilities contain powerful tools for program construction, but these tools are rarely designed to be repurposed by users. We describe elaborator reflection, a paradigm for metaprogramming in which the elaboration machinery is made directly available to metaprograms, as well as a concrete realization of elaborator reflection in Idris, a functional language with full dependent types. We demonstrate the applicability of Idris’s reflected elaboration framework to a number of realistic problems, we discuss the motivation for the specific features of its design, and we explore the broader meaning of elaborator reflection as it can relate to other languages.
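The elaboration step described above, from implicit surface syntax to a fully explicit core, can be sketched in miniature. The toy below makes one kind of implicit information explicit: int-to-float coercions inserted during type checking. The tuple-based AST and the function `elaborate` are assumptions of this sketch; a real elaborator such as Idris's also fills in implicit arguments and solves unification problems.

```python
def elaborate(e):
    """Translate a surface expression into (core_term, type), inserting
    explicit ('cast', ...) nodes where the surface term relied on an
    implicit int->float coercion.  Types here are just 'int' or 'float'."""
    tag = e[0]
    if tag == "lit":
        return e, ("float" if isinstance(e[1], float) else "int")
    if tag == "add":
        a, ta = elaborate(e[1])
        b, tb = elaborate(e[2])
        if ta == tb:
            return ("add", a, b), ta
        # the implicit coercion becomes an explicit core-language cast
        if ta == "int":
            a = ("cast", a)
        else:
            b = ("cast", b)
        return ("add", a, b), "float"
    raise ValueError(f"unknown surface form: {tag}")

core, ty = elaborate(("add", ("lit", 1), ("lit", 2.5)))
assert ty == "float"
assert core == ("add", ("cast", ("lit", 1)), ("lit", 2.5))
```

Elaborator reflection amounts to exposing machinery like `elaborate`, with its typing context and unification state, to user-written metaprograms rather than keeping it internal to the compiler.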
Ordinal Type Theory
Higher-order logic, with its type-theoretic apparatus known as the simple theory of types (STT), has increasingly come to be employed in theorizing about properties, relations, and states of affairs—or ‘intensional entities’ for short. This paper argues against this employment of STT and offers an alternative: ordinal type theory (OTT). Very roughly, STT and OTT can be regarded as complementary simplifications of the ‘ramified theory of types’ outlined in the Introduction to Principia Mathematica (on a realist reading). While STT, understood as a theory of intensional entities, retains the Fregean division of properties and relations into a multiplicity of categories according to their adicities and ‘input types’ and discards the division of intensional entities into different ‘orders’, OTT takes the opposite approach: it retains the hierarchy of orders (though with some modifications) and discards the categorisation of properties and relations according to their adicities and input types. In contrast to STT, this latter approach avoids intensional counterparts of the Epimenides and related paradoxes. Fundamental intensional entities lie at the base of the proposed hierarchy and are also given a prominent part to play in the individuation of non-fundamental intensional entities.
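How a hierarchy of orders blocks liar-style paradoxes can be shown with a toy well-formedness check: a predicate may only apply to entities of strictly lower order, so self-application never type-checks. The class names, the strictness condition, and this whole encoding are illustrative assumptions in the general spirit of ramified/ordinal hierarchies, not OTT's actual definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str
    order: int

@dataclass(frozen=True)
class Predication:
    pred: Entity
    arg: Entity
    def __post_init__(self):
        # well-formedness: a predicate applies only to strictly lower orders
        if self.arg.order >= self.pred.order:
            raise TypeError(
                f"{self.pred.name} (order {self.pred.order}) cannot apply "
                f"to {self.arg.name} (order {self.arg.order})")

truth = Entity("True", order=2)          # a truth predicate of order 2
p = Entity("snow_is_white", order=1)     # a first-order proposition
Predication(truth, p)                    # well-formed: 1 < 2

try:
    Predication(truth, truth)            # liar-style self-application
    blocked = False
except TypeError:
    blocked = True
assert blocked
```

The paradox-prone sentence is not refuted but rendered ungrammatical: no order can be assigned under which the self-application is well-formed.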