78 research outputs found
The Safe Lambda Calculus
Safety is a syntactic condition of higher-order grammars that constrains
occurrences of variables in the production rules according to their
type-theoretic order. In this paper, we introduce the safe lambda calculus,
which is obtained by transposing (and generalizing) the safety condition to the
setting of the simply-typed lambda calculus. In contrast to the original
definition of safety, our calculus does not constrain types (to be
homogeneous). We show that in the safe lambda calculus, there is no need to
rename bound variables when performing substitution, as variable capture is
guaranteed not to happen. We also propose an adequate notion of beta-reduction
that preserves safety. In the same vein as Schwichtenberg's 1976
characterization of the simply-typed lambda calculus, we show that the numeric
functions representable in the safe lambda calculus are exactly the
multivariate polynomials; thus conditional is not definable. We also give a
characterization of representable word functions. We then study the complexity
of deciding beta-eta equality of two safe simply-typed terms and show that this
problem is PSPACE-hard. Finally we give a game-semantic analysis of safety: We
show that safe terms are denoted by `P-incrementally justified strategies'.
Consequently, pointers in the game semantics of safe lambda-terms are only
necessary from order 4 onwards.
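The capture problem that safety rules out can be seen in a few lines. The following is only a hedged illustration of variable capture under naive substitution (all names and the term encoding are ours, not the paper's calculus):

```python
# Minimal lambda-term representation, used to show variable capture
# when substitution does NOT rename bound variables.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    body: object

@dataclass(frozen=True)
class App:
    fun: object
    arg: object

def free_vars(t):
    """Free variables of a term."""
    if isinstance(t, Var):
        return {t.name}
    if isinstance(t, Lam):
        return free_vars(t.body) - {t.param}
    return free_vars(t.fun) | free_vars(t.arg)

def subst_naive(t, x, s):
    """Substitute s for x in t WITHOUT renaming bound variables."""
    if isinstance(t, Var):
        return s if t.name == x else t
    if isinstance(t, Lam):
        if t.param == x:          # x is shadowed; stop here
            return t
        return Lam(t.param, subst_naive(t.body, x, s))
    return App(subst_naive(t.fun, x, s), subst_naive(t.arg, x, s))

# Substituting y := x under a binder for x captures the free x:
t = Lam("x", Var("y"))                  # \x. y
result = subst_naive(t, "y", Var("x"))  # yields \x. x, the identity!
```

The free `x` becomes bound, silently changing the term's meaning; the paper's result is that for safe terms this situation provably never arises, so naive substitution is sound.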
Decidability for Non-Standard Conversions in Typed Lambda-Calculi
This thesis studies the decidability of conversions in typed lambda-calculi, along with the algorithms that establish this decidability. Our study considers conversions going beyond the traditional beta, eta, and permutative conversions (also called commutative conversions). Two classes of algorithms compete to decide these conversions: rewriting-based algorithms, whose goal is to decompose and orient the conversion so as to obtain a convergent system, and which then rewrite terms until they reach an irreducible form; and "reduction-free" algorithms, where the conversion is decided recursively by a detour through a meta-language. Throughout this thesis, we strive to explain the latter in terms of the former.
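The "reduction-free" style can be illustrated by a toy normaliser: terms are interpreted as values of the meta-language (here, Python closures) and the result is read back as a term. This is only a hedged sketch for an untyped fragment; the encoding and function names are ours, not the thesis's algorithms:

```python
# Normalisation by evaluation, in miniature. Terms are tagged tuples:
# ("var", x), ("lam", x, body), ("app", f, a).
import itertools

_fresh = itertools.count()  # supply of fresh variable names for readback

def evaluate(term, env):
    """Interpret a term as a meta-language value (closure or neutral term)."""
    tag = term[0]
    if tag == "var":
        return env[term[1]]
    if tag == "lam":
        _, x, body = term
        return lambda v: evaluate(body, {**env, x: v})
    _, f, a = term
    fv, av = evaluate(f, env), evaluate(a, env)
    # Apply closures; stuck applications stay as neutral terms.
    return fv(av) if callable(fv) else ("app", fv, av)

def reify(value):
    """Read a meta-language value back into a term."""
    if callable(value):
        x = f"x{next(_fresh)}"
        return ("lam", x, reify(value(("var", x))))
    if value[0] == "app":
        return ("app", reify(value[1]), reify(value[2]))
    return value  # a variable

def normalise(term):
    return reify(evaluate(term, {}))

# (\f. \x. f x) (\y. y) normalises to \x. x (up to renaming):
t = ("app",
     ("lam", "f", ("lam", "x", ("app", ("var", "f"), ("var", "x")))),
     ("lam", "y", ("var", "y")))
n = normalise(t)
```

No reduction sequence is ever constructed: the meta-language's own evaluation does the work, which is the hallmark of the reduction-free approach.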
Applications of Category Theory to Programming and Program Specification
Category theory is proving a useful tool in programming and program specification - not only as a descriptive language but as directly
applicable to programming and specification tasks.
Category theory achieves a level of generality of description at
which computation is still possible. We show that theorems from
category theory often have constructive proofs in the sense that they
may be encoded as programs. In particular we look at the computation
of colimits in categories showing that general theorems give rise to
routines which considerably simplify the rather awkward computation
of colimits.
The general routines arising from categorical constructions can be
used to build programs in the 'combinatorial' style of programming.
We show this with an example - a program to implement the semantics
of a specification language. More importantly, the intimate
relationship between these routines and algebraic specifications
allows us to develop programs from certain forms of specifications.
Later we turn to algebraic specifications themselves and look at
properties of "monadic theories". We establish that, under suitable
conditions:
1. Signatures and presentations may be defined for monadic
theories and free theories on a signature may be
constructed.
2. Theory morphisms give rise to adjunctions between
categories of algebras; moreover, a collection of
algebras of a theory gives rise to a new theory with
certain properties.
3. Finite colimits and certain factorisations exist in
categories of monadic theories.
4. Many-sorted, order-sorted and even category-sorted
theories may be handled by somewhat extending the notion
of monadic theories.
These results show that monadic theories are sufficiently
well-behaved to be used in the semantics of algebraic specification
languages. Some of the constructions can be encoded as programs by
the techniques mentioned above.
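As a small illustration of how a categorical construction becomes a routine, the following sketch computes a pushout of finite sets directly from its universal description: the disjoint union quotiented by the images of the shared set. This is a hedged example in Python; the names are ours, not the thesis's code:

```python
# Pushout of finite sets along f: A -> B and g: A -> C, computed as
# (B + C) / ~ where f(a) ~ g(a) for every a in A. Union-find does the
# quotienting; the "general theorem" here is the universal property.

def pushout(A, B, C, f, g):
    # Elements of the disjoint union B + C are tagged pairs.
    parent = {("B", b): ("B", b) for b in B}
    parent.update({("C", c): ("C", c) for c in C})

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a in A:  # impose the identifications f(a) ~ g(a)
        ra, rc = find(("B", f[a])), find(("C", g[a]))
        if ra != rc:
            parent[ra] = rc

    # Collect the equivalence classes: the elements of the pushout.
    classes = {}
    for x in parent:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# Glue two 2-element sets along one shared point:
A = {0}
B = {"b1", "b2"}
C = {"c1", "c2"}
f = {0: "b1"}
g = {0: "c1"}
blocks = pushout(A, B, C, f, g)  # b1 and c1 merge; b2, c2 stay apart
```

Coproducts and coequalizers of finite sets compose in the same way, which is the sense in which general colimit theorems yield reusable routines.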
Functional programming applied to electrical engineering
Many engineering projects rely on software to run simulations and analyses across a wide variety of domains. Computer programs are great allies of engineers when it comes to simulations, including those for electromagnetic transient analysis. However, a single programming paradigm (the imperative paradigm) seems to have dominated most commercial and academic applications.
This work presents and implements an algorithm to analyse simple electromagnetic transient circuits using functional programming. The code uses the nodal analysis found in industry programs such as the EMTP (Electromagnetic Transients Program). The results of adopting the Haskell language and functional programming are very favourable to the engineering community: programs that are more likely to have fewer bugs, with concise implementations and more focus on the mathematical aspects of the algorithm.
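The nodal-analysis step at the core of such a program can be written with pure functions. The thesis's implementation is in Haskell; the following is only a hedged Python illustration with invented names, solving G v = i for a two-node resistive circuit:

```python
# Nodal analysis: stamp a conductance matrix and solve G v = i.
# Circuit assumed here: a 1 A source into node 1, r1 from node 1 to
# ground, r2 between nodes 1 and 2, r3 from node 2 to ground.

def conductance_matrix(r1, r2, r3):
    """Stamp the 2x2 conductance matrix for the circuit above."""
    g1, g2, g3 = 1 / r1, 1 / r2, 1 / r3
    return [[g1 + g2, -g2],
            [-g2, g2 + g3]]

def solve2(G, i):
    """Cramer's rule for the 2x2 linear system G v = i."""
    (a, b), (c, d) = G
    det = a * d - b * c
    return [(i[0] * d - b * i[1]) / det,
            (a * i[1] - c * i[0]) / det]

# All resistors 1 ohm, 1 A injected at node 1:
G = conductance_matrix(1.0, 1.0, 1.0)
v = solve2(G, [1.0, 0.0])  # node voltages v1 = 2/3 V, v2 = 1/3 V
```

Both functions are pure: the same inputs always give the same node voltages, which is the property the thesis leans on for correctness.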
Polynomial Time Calculi
This dissertation deals with type systems which guarantee polynomial time complexity of typed programs. Such algorithms are commonly regarded as feasible for practical applications, because their runtime grows at a reasonable rate for bigger inputs. The implicit complexity community has proposed several type systems for polynomial time in recent years, each with strong but different structural restrictions on the permissible algorithms, restrictions that are necessary to control complexity. Comparisons between the various approaches are hard, and this has led to a landscape of islands in the literature: the algorithms expressible in each calculus are known, but few links between the calculi are.
This work chooses Light Affine Logic (LAL) and Hofmann's LFPL, both linearly typed, and studies the connections between them. It is shown that the light iteration in LAL, the fixed point variant of LAL, is expressive enough to allow a (non-trivial) compositional embedding of LFPL. The pull-out trick of LAL is identified as a technique to type certain non-size-increasing algorithms in such a way that they can be iterated. The System T sibling of LAL is developed, which seamlessly integrates this technique as a central feature of the iteration scheme, and which is proved again correct and complete for polynomial time. Because -iterations of the same level cannot be nested, is further generalised to , which surprisingly can express the impredicative iteration of LFPL and the light iteration of at the same time. Therefore, it subsumes both systems in one, while still being polynomial time normalisable. Hence, this result gives the first bridge between these two islands of implicit computational complexity.
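LFPL's guiding idea is to iterate only steps that do not increase the size of their data, which keeps the total work polynomial; insertion sort is the classic example of an algorithm typable under this discipline. A hedged Python model of the idea (this is ours, not LFPL itself, which enforces the bound in the type system):

```python
# Non-size-increasing iteration, modelled on lists. Each step's output
# is no larger than its combined input, so iterating n steps over an
# n-element list costs at most O(n^2) work: a polynomial bound.

def insert(x, sorted_list):
    """Insert x into an already sorted list. Output length is exactly
    len(sorted_list) + 1: the growth is 'paid for' by consuming x,
    the role LFPL's diamond resource plays in the type system."""
    for i, y in enumerate(sorted_list):
        if x <= y:
            return sorted_list[:i] + [x] + sorted_list[i:]
    return sorted_list + [x]

def fold_insert(xs):
    """Insertion sort as iteration of the step above. No step ever
    duplicates its input, so the iteration cannot blow up."""
    acc = []
    for x in xs:
        acc = insert(x, acc)
    return acc

result = fold_insert([3, 1, 2])  # [1, 2, 3]
```

By contrast, iterating a step that doubles its input (the pattern the restrictions forbid) would reach exponential size in linearly many steps.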
Programming Languages and Systems
This open access book constitutes the proceedings of the 28th European Symposium on Programming, ESOP 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019.
Foundations of Software Science and Computation Structures
This open access book constitutes the proceedings of the 22nd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The 29 papers presented in this volume were carefully reviewed and selected from 85 submissions. They deal with foundational research with a clear significance for software science.
Structured arrows : a type-based framework for structured parallelism
This thesis deals with the important problem of parallelising sequential code. Despite the importance of parallelism in modern computing, writing parallel software still relies on many low-level and often error-prone approaches. These low-level approaches can lead to serious execution problems such as deadlocks and race conditions. Due to the non-deterministic behaviour of most parallel programs, testing parallel software can be both tedious and time-consuming. A way of providing guarantees of correctness for parallel programs would therefore be of significant benefit. Moreover, even if we ignore the problem of correctness, achieving good speedups is not straightforward, since this generally involves rewriting a program to consider a (possibly large) number of alternative parallelisations.
This thesis argues that new languages and frameworks are needed. These languages and frameworks must not only support high-level parallel programming constructs, but must also provide predictable cost models for these parallel constructs. Moreover, they need to be built around solid, well-understood theories that ensure that: (a) changes to the source code will not change the functional behaviour of a program, and (b) the speedup obtained by making the necessary changes is predictable. Algorithmic skeletons are parametric implementations of common patterns of parallelism that provide good abstractions for creating new high-level languages, and that also support frameworks for parallel computing satisfying these correctness and predictability requirements.
This thesis presents a new type-based framework, based on the connection between structured parallelism and structured patterns of recursion, that provides parallel structures as type abstractions that can be used to statically parallelise a program. Specifically, this thesis exploits hylomorphisms as a single, unifying construct to represent the functional behaviour of parallel programs, and to perform correct code rewritings between alternative parallel implementations, represented as algorithmic skeletons. This thesis also defines a mechanism for deriving cost models for parallel constructs from a queue-based operational semantics. In this way, we can provide strong static guarantees about the correctness of a parallel program, while simultaneously achieving predictable speedups.
"This work was supported by the University of St Andrews (School of Computer Science); by the EU FP7 grant "ParaPhrase: Parallel Patterns for Adaptive Heterogeneous Multicore Systems" (n. 288570); by the EU H2020 grant "RePhrase: Refactoring Parallel Heterogeneous Resource-Aware Applications - a Software Engineering Approach" (ICT-644235); by COST Action IC1202 (TACLe), supported by COST (European Cooperation in Science and Technology); and by EPSRC grant "Discovery: Pattern Discovery and Program Shaping for Manycore Systems" (EP/P020631/1)" -- Acknowledgement
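A hylomorphism unfolds a call structure from a seed and then folds it away, which is exactly the divide-and-conquer shape that algorithmic skeletons parallelise. The following is a hedged Python sketch (the encoding and names are ours, not the thesis's framework), with mergesort as the instance:

```python
# hylo f g = f . F(hylo f g) . g, specialised to the tree functor
# F X = Leaf xs | Node X X, encoded as tagged tuples.

def hylo(algebra, coalgebra, seed):
    """Unfold the seed with the coalgebra, recurse on the children,
    then collapse with the algebra. The two recursive calls are
    independent: that is the parallelisation opportunity."""
    node = coalgebra(seed)
    if node[0] == "leaf":
        return algebra(node)
    _, left, right = node
    return algebra(("node",
                    hylo(algebra, coalgebra, left),
                    hylo(algebra, coalgebra, right)))

def split(xs):
    """Coalgebra: divide a list in half until trivially sorted."""
    if len(xs) <= 1:
        return ("leaf", xs)
    mid = len(xs) // 2
    return ("node", xs[:mid], xs[mid:])

def merge(node):
    """Algebra: merge two sorted halves back together."""
    if node[0] == "leaf":
        return node[1]
    _, l, r = node
    out = []
    while l and r:
        out.append(l.pop(0) if l[0] <= r[0] else r.pop(0))
    return out + l + r

sorted_list = hylo(merge, split, [3, 1, 4, 1, 5])  # [1, 1, 3, 4, 5]
```

Because the functional behaviour lives entirely in `hylo`, swapping the sequential recursion for a parallel skeleton (e.g. a divide-and-conquer farm) cannot change the result, only the cost; that separation is what the type-based framework exploits.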
- …