Practical Subtyping for System F with Sized (Co-)Induction
We present a rich type system with subtyping for an extension of System F.
Our type constructors include sum and product types, universal and existential
quantifiers, inductive and coinductive types. The latter two carry size
annotations allowing the preservation of size invariants. For example, it is
possible to derive the termination of quicksort by showing that partitioning a list
does not increase its size. The system deals with complex programs involving
mixed induction and coinduction, or even mixed (co-)induction and polymorphism
(as for Scott-encoded datatypes). One of the key ideas is to completely
separate the induction on sizes from the notion of recursive programs. We use
the size change principle to check that the proof is well-founded, not that the
program terminates. Termination is obtained by a strong normalization proof.
Another key idea is the use of symbolic witnesses to handle quantifiers of all
sorts. To demonstrate the practicality of our system, we provide an
implementation that accepts all the examples discussed in the paper, and many more.
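The abstract mentions mixing (co-)induction with polymorphism "as for Scott-encoded datatypes". As an illustrative sketch (not the paper's system), Scott-encoded naturals can be written in Haskell with higher-rank types; note how the recursion in `toInt` is general recursion written separately from the data, echoing the paper's separation of size induction from recursive programs:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Scott-encoded naturals: a number is its own one-level pattern match.
newtype SNat = SNat { match :: forall r. r -> (SNat -> r) -> r }

zero :: SNat
zero = SNat (\z _ -> z)

suc :: SNat -> SNat
suc n = SNat (\_ s -> s n)

-- Unlike Church encodings, the fold is not built in: recursion is
-- written by hand, so its termination must be justified separately.
toInt :: SNat -> Int
toInt n = match n 0 (\m -> 1 + toInt m)
```

Here `SNat`, `zero`, `suc`, and `toInt` are hypothetical names for exposition; the paper's actual calculus tracks sizes in the types, which plain Haskell cannot express.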
10351 Abstracts Collection -- Modelling, Controlling and Reasoning About State
From 29 August 2010 to 3 September 2010, the Dagstuhl Seminar 10351
``Modelling, Controlling and Reasoning About State'' was held in
Schloss Dagstuhl -- Leibniz Center for Informatics. During the
seminar, several participants presented their current research, and
ongoing work and open problems were discussed. Abstracts of the
presentations given during the seminar as well as abstracts of seminar
results and ideas are put together in this paper. Links to extended
abstracts or full papers are provided, if available.
Linear Haskell: practical linearity in a higher-order polymorphic language
Linear type systems have a long and storied history, but no clear path
forward for integration with existing languages such as OCaml or Haskell. In this
paper, we study a linear type system designed with two crucial properties in
mind: backwards-compatibility and code reuse across linear and non-linear users
of a library. Only then can the benefits of linear types permeate conventional
functional programming. Rather than bifurcate types into linear and non-linear
counterparts, we instead attach linearity to function arrows. Linear functions
can receive inputs from linearly-bound values, but can also operate over
unrestricted, regular values.
To demonstrate the efficacy of our linear type system - both how easily it can
be integrated into an existing language implementation and how streamlined it
makes writing programs with linear types - we implemented our type system
in GHC, the leading Haskell compiler, and demonstrate two kinds of applications
of linear types: mutable data with pure interfaces; and enforcing protocols in
I/O-performing functions.
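The design point "attach linearity to function arrows" is visible in GHC's `LinearTypes` extension (GHC 9.0 or later), where `a %1 -> b` is a function that promises to consume its argument exactly once. A minimal sketch, with illustrative names:

```haskell
{-# LANGUAGE LinearTypes #-}

-- swapL uses each component of the pair exactly once, so it admits
-- the linear arrow type:
swapL :: (a, b) %1 -> (b, a)
swapL (x, y) = (y, x)

-- By contrast, a definition such as
--   dup :: a %1 -> (a, a)
--   dup x = (x, x)
-- is rejected by the type checker, because x is used twice.

-- Non-linear code can still apply swapL to ordinary, unrestricted
-- values: linearity constrains the function body, not its callers.
useSwap :: (Int, Int) -> (Int, Int)
useSwap p = swapL p
```

This shows the backwards-compatibility claim in miniature: `useSwap` is ordinary Haskell, yet it reuses the linearly-typed `swapL`.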
Investigations in intersection types: confluence, and semantics of expansion in the λ-calculus, and a type error slicing method
Type systems were invented in the early 1900s to provide foundations for Mathematics
where types were used to avoid paradoxes. Type systems have then been
developed and extended throughout the years to serve different purposes such as efficiency
or expressiveness. The λ-calculus is used in programming languages, logic,
mathematics, and linguistics. Intersection types are a kind of types used for building
semantic models of the λ-calculus and for static analysis of computer programs.
The confluence property was used to prove the λ-calculus's consistency and the
uniqueness of normal forms. Confluence is useful to show that logics are sensibly
designed, and to make equality decision procedures for use in theorem provers.
Some proofs of the λ-calculus's confluence are based on syntactic concepts (reduction
relations and λ-term sets) and some on semantic concepts (type interpretations).
Part I of this thesis presents an original syntactic proof that is a simplification of
a semantic proof based on a sound type interpretation w.r.t. an intersection type
system. Our proof can be seen as bridging some semantic and syntactic proofs.
Expansion is an operation on typings (pairs of type environments and result
types) in type systems for the λ-calculus. It was introduced to prove that the principal
typing property (i.e., that every typable term has a strongest typing) holds
in intersection type systems. Expansion variables were introduced to simplify the
expansion mechanism. Part II of this thesis presents a complete realisability semantics
w.r.t. an intersection type system with infinitely many expansion variables.
This represents the first study on semantics of expansion. Providing sound (and
complete) realisability semantics allows one to study the algorithmic behaviour of
typed λ-terms through their types w.r.t. a type system. We believe such semantics
will cast some light on the not yet well understood expansion operation.
Intersection types were used in a type error slicer for the SML programming
language. Existing compilers for many languages have confusing type error messages.
Type error slicing (TES) helps the programmer by isolating the part of a program
contributing to a type error (a slice). TES was initially done for a tiny toy language
(the λ-calculus with polymorphic let-expressions). Extending TES to a full language
is extremely challenging, and for SML we needed a number of innovations. Some
issues would be faced for any language, and some are SML-specific but representative
of the complexity of language-specific issues likely to be faced for other languages.
Part III of this thesis solves both kinds of issues and presents an original, simple,
and general constraint system for providing type error slices for ill-typed programs.
We believe TES helps demystify language features known to confuse users.
A principled approach to programming with nested types in Haskell
Initial algebra semantics is one of the cornerstones of the theory of modern functional programming languages. For each inductive data type, it provides a Church encoding for that type, a build combinator which constructs data of that type, a fold combinator which encapsulates structured recursion over data of that type, and a fold/build rule which optimises modular programs by eliminating from them data constructed using the build combinator, and immediately consumed using the fold combinator, for that type. It has long been thought that initial algebra semantics is not expressive enough to provide a similar foundation for programming with nested types in Haskell. Specifically, the standard folds derived from initial algebra semantics have been considered too weak to capture commonly occurring patterns of recursion over data of nested types in Haskell, and no build combinators or fold/build rules have until now been defined for nested types. This paper shows that standard folds are, in fact, sufficiently expressive for programming with nested types in Haskell. It also defines build combinators and fold/build fusion rules for nested types. It thus shows how initial algebra semantics provides a principled, expressive, and elegant foundation for programming with nested types in Haskell.
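To make the notion of a "standard fold over a nested type" concrete, here is a textbook nested datatype (not one taken from this paper) with its fold. The distinctive features are the polymorphic recursion and the type-indexed result `r a`:

```haskell
{-# LANGUAGE RankNTypes #-}

-- A classic nested (non-regular) type: the element type squares at
-- each level of the structure.
data Nest a = NilN | ConsN a (Nest (a, a))

-- The standard fold: the recursive call is at type Nest (a, a), so the
-- algebra arguments must be polymorphic in the element type.
foldNest :: (forall b. r b)
         -> (forall b. b -> r (b, b) -> r b)
         -> Nest a -> r a
foldNest nil _    NilN        = nil
foldNest nil cons (ConsN x n) = cons x (foldNest nil cons n)

-- Even a length function must route through the fold with a constant
-- result functor, since plain recursion would change type underfoot.
newtype K a = K { unK :: Int }

lengthNest :: Nest a -> Int
lengthNest = unK . foldNest (K 0) (\_ (K n) -> K (1 + n))
```

`Nest`, `foldNest`, and `K` are illustrative names; the paper's build combinators and fold/build rules go beyond this sketch.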
Transporting Functions across Ornaments
Programming with dependent types is a blessing and a curse. It is a blessing
to be able to bake invariants into the definition of data-types: we can finally
write correct-by-construction software. However, this extreme accuracy is also
a curse: a data-type is the combination of a structuring medium together with a
special purpose logic. These domain-specific logics hamper any effort of code
reuse among similarly structured data.
In this paper, we exorcise our data-types by adapting the notion of ornament
to our universe of inductive families. We then show how code reuse can be
achieved by ornamenting functions. Using these functional ornaments, we capture
the relationship between functions such as the addition of natural numbers and
the concatenation of lists. With this knowledge, we demonstrate how the
implementation of the former informs the implementation of the latter: the user
can ask for the definition of addition to be lifted to lists, and she will only be
asked for the details necessary to carry on adding lists rather than numbers.
Our presentation is formalised in a type theory with a universe of data-types
and all our constructions have been implemented as generic programs, requiring
no extension to the type theory.
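Haskell cannot express ornaments internally (the paper works in a dependent type theory with a universe of data-types), but the addition/concatenation relationship the abstract cites can at least be displayed side by side. The forgetful map `erase` below, an illustrative name, strips the ornament away:

```haskell
data Nat = Z | S Nat deriving (Eq, Show)

add :: Nat -> Nat -> Nat
add Z     n = n
add (S m) n = S (add m n)

-- Lists ornament Nat: every S node additionally stores an element.
-- Append therefore follows exactly the recursion pattern of add.
append :: [a] -> [a] -> [a]
append []     ys = ys
append (x:xs) ys = x : append xs ys

-- The forgetful map from the ornamented type back to Nat.
erase :: [a] -> Nat
erase = foldr (const S) Z
```

The coherence the paper exploits is that `erase (append xs ys) == add (erase xs) (erase ys)`: concatenation really is addition, decorated.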
Trace semantics for polymorphic references
Research supported by the Engineering and Physical Sciences Research Council (EP/L022478/1) and the Royal Academy of Engineering
Search for Program Structure
The community of programming language research loves the Curry-Howard correspondence between proofs and programs. Cut-elimination as computation, theorems for free, 'call/cc' as excluded middle, dependently typed languages as proof assistants, etc.
Yet we have, for all these years, missed an obvious observation: "the structure of programs corresponds to the structure of proof search". For pure programs and intuitionistic logic, more is known about the latter than the former. We think we know what programs are, but logicians know better!
To motivate the study of proof search for program structure, we retrace recent research on applying focusing to study the canonical structure of simply-typed lambda-terms. We then motivate the open problem of extending canonical forms to support richer type systems, such as polymorphism, by discussing a few enticing applications of more canonical program representations.
Bootstrapping Inductive and Coinductive Types in HasCASL
We discuss the treatment of initial datatypes and final process types in the
wide-spectrum language HasCASL. In particular, we present specifications that
illustrate how datatypes and process types arise as bootstrapped concepts using
HasCASL's type class mechanism, and we describe constructions of types of
finite and infinite trees that establish the conservativity of datatype and
process type declarations adhering to certain reasonable formats. The latter
amounts to modifying known constructions from HOL to avoid unique choice; in
categorical terminology, this means that we establish that quasitoposes with an
internal natural numbers object support initial algebras and final coalgebras
for a range of polynomial functors, thereby partially generalising
corresponding results from topos theory. Moreover, we present similar
constructions in categories of internal complete partial orders in
quasitoposes.
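The quasitopos constructions themselves cannot be rendered here, but the shape they establish — initial algebras and final coalgebras for polynomial functors — has a standard Haskell sketch (names as in the folklore `Fix`/`cata`/`ana` presentation, not HasCASL syntax):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Fixpoint of a functor. In Haskell, least and greatest fixpoints
-- coincide, so Fix serves as carrier for both constructions.
newtype Fix f = In { out :: f (Fix f) }

-- Fold (catamorphism): the unique map out of the initial algebra.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- Unfold (anamorphism): the unique map into the final coalgebra.
ana :: Functor f => (a -> f a) -> a -> Fix f
ana coalg = In . fmap (ana coalg) . coalg

-- A polynomial functor whose initial algebra is the naturals.
data NatF x = ZeroF | SuccF x deriving Functor

toInt :: Fix NatF -> Int
toInt = cata alg where
  alg ZeroF     = 0
  alg (SuccF n) = n + 1

fromInt :: Int -> Fix NatF
fromInt = ana coalg where
  coalg 0 = ZeroF
  coalg n = SuccF (n - 1)
```

The paper's contribution is showing that such (co)datatypes exist in quasitoposes without unique choice; the sketch above only fixes the categorical vocabulary.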
Reconciling Shannon and Scott with a Lattice of Computable Information
This paper proposes a reconciliation of two different theories of information. The first, originally proposed in a lesser-known work by Claude Shannon (some five years after the publication of his celebrated quantitative theory of communication), describes how the information content of channels can be described qualitatively, but still abstractly, in terms of information elements, where information elements can be viewed as equivalence relations over the data source domain. Shannon showed that these elements have a partial ordering, expressing when one information element is more informative than another, and that these partially ordered information elements form a complete lattice. In the context of security and information flow this structure has been independently rediscovered several times, and used as a foundation for understanding and reasoning about information flow. The second theory of information is Dana Scott's domain theory, a mathematical framework for giving meaning to programs as continuous functions over a particular topology. Scott's partial ordering also represents when one element is more informative than another, but in the sense of computational progress - i.e. when one element is a more defined or evolved version of another. To give a satisfactory account of information flow in computer programs it is necessary to consider both theories together, in order to understand not only what information is conveyed by a program (viewed as a channel, à la Shannon) but also how the precision with which that information can be observed is determined by the definedness of its encoding (à la Scott). To this end we show how these theories can be fruitfully combined, by defining the Lattice of Computable Information (LoCI), a lattice of preorders rather than equivalence relations.
LoCI retains the rich lattice structure of Shannon's theory, filters out elements that do not make computational sense, and refines the remaining information elements to reflect how Scott's ordering captures possible varieties in the way that information is presented. We show how the new theory facilitates the first general definition of termination-insensitive information flow properties, a weakened form of information flow property commonly targeted by static program analyses.
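Shannon's "more informative than" ordering on information elements has a direct toy rendering: represent an element by the observation function whose equivalence kernel it is, and say `f` is at least as informative as `g` when `f`'s observation determines `g`'s. This sketch uses equivalence relations, not the preorders of LoCI, and the domain and functions are illustrative:

```haskell
-- An information element (à la Shannon) as the equivalence relation
-- induced by an observation function on a finite domain.
-- refines dom f g holds when f's kernel refines g's kernel, i.e.
-- observing f tells you at least as much as observing g.
refines :: (Eq b, Eq c) => [a] -> (a -> b) -> (a -> c) -> Bool
refines dom f g =
  and [ g x == g y | x <- dom, y <- dom, f x == f y ]
```

For example, on the domain `[0..7]`, observing a number modulo 4 determines it modulo 2 but not the other way round, so `(`mod` 4)` sits strictly above `(`mod` 2)` in the ordering. LoCI's move from equivalence relations to preorders, which this sketch does not capture, is what lets the theory account for partially defined (à la Scott) observations.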