Enriched universal algebra
Following the classical approach of Birkhoff, we introduce enriched universal algebra. Given a suitable base of enrichment V, we define a language L to be a collection of function symbols whose arities are taken among the objects of V. The class of L-terms is constructed recursively from the symbols of L, the morphisms in V, and by incorporating the monoidal structure of V. Then, L-structures and interpretations of terms are defined, leading to enriched equational theories. In this framework we prove several fundamental theorems of universal algebra, including the characterization of algebras for finitary monads on V as models of an equational theory, and several Birkhoff-type theorems.
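The classical, Set-based picture that this paper enriches can be sketched in a few lines of code: terms are built recursively from variables and function symbols, and a structure interprets each symbol as an actual operation on a carrier. The sketch below is purely illustrative (all names are my own, not the paper's), and of course the enriched setting replaces sets of variables and operations by objects and morphisms of the base V.

```python
# Minimal sketch of classical (Set-based) universal algebra: terms over a
# language, interpreted in a structure.  Illustrative only.

class Var:
    def __init__(self, name):
        self.name = name

class Op:
    """Application of a function symbol to argument terms."""
    def __init__(self, symbol, args):
        self.symbol, self.args = symbol, list(args)

def interpret(term, structure, env):
    """Interpret a term in a structure: a dict sending each function
    symbol to an operation on the carrier; env assigns variables."""
    if isinstance(term, Var):
        return env[term.name]
    return structure[term.symbol](
        *(interpret(a, structure, env) for a in term.args))

# Example: the language of monoids; the equation (x*y)*z = x*(y*z) holds
# in the structure of integers under addition, checked at one valuation.
mul = lambda x, y: Op("*", [x, y])
lhs = mul(mul(Var("x"), Var("y")), Var("z"))
rhs = mul(Var("x"), mul(Var("y"), Var("z")))
ints = {"*": lambda a, b: a + b, "e": lambda: 0}
env = {"x": 1, "y": 2, "z": 3}
assert interpret(lhs, ints, env) == interpret(rhs, ints, env) == 6
```

An equational theory is then a set of such term equations, and a model is a structure validating all of them under every valuation; Birkhoff-type theorems characterise the classes of models that arise this way.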
Rewriting Modulo Traced Comonoid Structure
In this paper we adapt previous work on rewriting string diagrams using hypergraphs to the case where the underlying category has a traced comonoid structure, in which wires can be forked and the outputs of a morphism can be connected to its inputs. Such a structure is particularly interesting because any traced Cartesian (dataflow) category has an underlying traced comonoid structure. We show that certain subclasses of hypergraphs are fully complete for traced comonoid categories: that is to say, every term in such a category has a unique corresponding hypergraph up to isomorphism, and from every hypergraph with the desired properties, a unique term in the category can be retrieved up to the axioms of traced comonoid categories. We also show how the framework of double pushout (DPO) rewriting can be adapted for traced comonoid categories by characterising the valid pushout complements for rewriting in our setting. We conclude by presenting a case study in the form of recent work on an equational theory for sequential circuits: circuits built from primitive logic gates with delay and feedback. The graph rewriting framework allows for the definition of an operational semantics for sequential circuits.
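To make the forking of wires concrete, here is a toy encoding of a string diagram as a hypergraph: wires are vertices and each hyperedge is an operation with ordered source and target wires. The representation and all names are illustrative assumptions on my part, not the paper's formal definitions.

```python
# Toy hypergraph encoding of a circuit-like string diagram.  A wire that
# feeds more than one edge input is "forked" -- the comonoid structure
# that plain symmetric monoidal wiring cannot express.

class Hypergraph:
    def __init__(self):
        self.wires = set()
        self.edges = []  # (label, sources, targets)

    def wire(self, w):
        self.wires.add(w)
        return w

    def edge(self, label, sources, targets):
        assert set(sources) | set(targets) <= self.wires
        self.edges.append((label, list(sources), list(targets)))

    def out_degree(self, w):
        """Number of edge inputs the wire w feeds."""
        return sum(e[1].count(w) for e in self.edges)

# One AND gate's output drives both inputs of an OR gate.
g = Hypergraph()
a, b, c, d = (g.wire(w) for w in "abcd")
g.edge("AND", [a, b], [c])
g.edge("OR", [c, c], [d])
assert g.out_degree(c) == 2  # wire c is forked
```

Trace would additionally allow a target wire of an edge to reappear among its own sources, giving feedback as in the sequential-circuit case study.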
Foundations of Software Science and Computation Structures
This open access book constitutes the proceedings of the 22nd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2019, which took place in Prague, Czech Republic, in April 2019, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019. The 29 papers presented in this volume were carefully reviewed and selected from 85 submissions. They deal with foundational research with a clear significance for software science.
Tripos models of Internal Set Theory
This thesis provides a framework to make sense of models of E. Nelson's Internal Set Theory (and hence of nonstandard analysis) in elementary toposes by exploiting the technology of tripos theory and Lawvere's hyperdoctrines. A new doctrinal account of nonstandard phenomena is described, which avoids a few key restrictions in Nelson's approach: chiefly, the dependence on Set Theory (avoided by replacing a model of set theory with a topos as the starting point) and the reliance on an internally defined notion of standard element. From the new perspective, validity of the schemes of Idealisation, Standardisation, and Transfer corresponds to the existence of certain relationships between hyperdoctrines, leading to the new notion of a tripos model of IST. After discussing the properties of such models that make them a suitable abstraction of classic IST, we explore situations in which such structures arise, leading to two distinct main classes of models: what we refer to as Nelson models, which are those for which there is a well-behaved `predicate of standard elements' (thus providing a close approximation of classic IST valid for toposes), and models obtained by elaborating on the work started by A. Kock and C. J. Mikkelsen on the categorification of Transfer. We then elaborate upon one particular kind of model of each type: the ultrapower models, which are Nelson models obtained from a choice of adequate ultrafilter and which mirror the constructions of classical Robinson nonstandard analysis, and the localic models for Transfer and Standardisation, which are obtained from any given open surjection of locales (e.g. universal covers of topological spaces) via the Kock-Mikkelsen construction at the level of sheaf toposes.

This study was funded by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) (process no. 8881.128278/2016-01).
Cartesian closed bicategories: type theory and coherence
In this thesis I lift the Curry--Howard--Lambek correspondence between the simply-typed lambda calculus and cartesian closed categories to the bicategorical setting, then use the resulting type theory to prove a coherence result for cartesian closed bicategories. Cartesian closed bicategories---2-categories `up to isomorphism' equipped with similarly weak products and exponentials---arise in logic, categorical algebra, and game semantics. However, calculations in such bicategories quickly fall into a quagmire of coherence data. I show that there is at most one 2-cell between any parallel pair of 1-cells in the free cartesian closed bicategory on a set and hence---in terms of the difficulty of calculating---bring the data of cartesian closed bicategories down to the familiar level of cartesian closed categories.
In fact, I prove this result in two ways. The first argument is closely related to Power's coherence theorem for bicategories with flexible bilimits. For the second, which is the central preoccupation of this thesis, the proof strategy has two parts: the construction of a type theory, and the proof that it satisfies a form of normalisation I call local coherence. I synthesise the type theory from algebraic principles using a novel generalisation of the (multisorted) abstract clones of universal algebra, called biclones. The result brings together two extensions of the simply-typed lambda calculus: a 2-dimensional type theory in the style of Hilken, which encodes the 2-dimensional nature of a bicategory, and a version of explicit substitution, which encodes a composition operation that is only associative and unital up to isomorphism. For products and exponentials I develop the theory of cartesian and cartesian closed biclones and pursue a connection with the representable multicategories of Hermida. Unlike preceding 2-categorical type theories, in which products and exponentials are encoded by postulating a unit and counit satisfying the triangle laws, the universal properties for products and exponentials are encoded using T. Fiore's biuniversal arrows.
Because the type theory is extracted from the construction of a free biclone, its syntactic model satisfies a suitable 2-dimensional freeness universal property generalising the classical Curry--Howard--Lambek correspondence. One may therefore describe the type theory as an `internal language'. The relationship with the classical situation is made precise by a result establishing that the type theory I construct is the simply-typed lambda calculus up to isomorphism.
This relationship is exploited for the proof of local coherence. It has been known for some time that one may use the normalisation-by-evaluation strategy to prove that the simply-typed lambda calculus is strongly normalising. Using a bicategorical treatment of M. Fiore's categorical analysis of normalisation-by-evaluation, I prove a normalisation result which entails the coherence theorem for cartesian closed bicategories. In contrast to previous coherence results for bicategories, the argument does not rely on the theory of rewriting, nor does it strictify using the Yoneda embedding. I prove bicategorical generalisations of a series of well-established category-theoretic results, present a notion of glueing of bicategories, and bicategorify the folklore result providing sufficient conditions for a glueing category to be cartesian closed. Once these prerequisites have been met, the argument is remarkably similar to that in the categorical setting.
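The normalisation-by-evaluation strategy mentioned above has a very compact one-dimensional form, which may help fix intuitions before the bicategorical lifting. The sketch below implements it for untyped lambda terms with de Bruijn indices: evaluate into a semantic domain of closures and neutrals, then read back a normal form. This is the classical textbook algorithm, not the thesis's construction, and all representation choices here are my own.

```python
# Normalisation-by-evaluation for untyped lambda terms, de Bruijn style.
# Terms: ('var', i) | ('lam', body) | ('app', f, a)

def evaluate(t, env):
    """Evaluate a term into the semantic domain (closures and neutrals)."""
    if t[0] == 'var':
        return env[t[1]]
    if t[0] == 'lam':
        return ('clo', t[1], env)            # a closure
    return apply_val(evaluate(t[1], env), evaluate(t[2], env))

def apply_val(f, a):
    if f[0] == 'clo':
        return evaluate(f[1], [a] + f[2])    # beta-reduce under the closure
    return ('napp', f, a)                    # stuck: neutral application

def readback(v, depth):
    """Reify a semantic value back into a normal-form term."""
    if v[0] == 'clo':
        return ('lam', readback(apply_val(v, ('nvar', depth)), depth + 1))
    if v[0] == 'nvar':
        return ('var', depth - 1 - v[1])     # de Bruijn level -> index
    return ('app', readback(v[1], depth), readback(v[2], depth))

def normalise(t):
    return readback(evaluate(t, []), 0)

# (\x. (\y. y) x) normalises to the identity \x. x
term = ('lam', ('app', ('lam', ('var', 0)), ('var', 0)))
assert normalise(term) == ('lam', ('var', 0))
```

The categorical analysis referenced in the abstract explains why this works: evaluation and readback assemble into a glueing construction whose correctness yields normalisation, and it is that analysis the thesis transports to bicategories.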
Coalgebra for the working software engineer
Often referred to as ‘the mathematics of dynamical, state-based systems’, Coalgebra claims to provide a compositional and uniform framework to specify, analyse, and reason about state and behaviour in computing. This paper addresses this claim by discussing why Coalgebra matters for the design of models and logics for computational phenomena. To a great extent, in this domain one is interested in properties that are preserved along the system's evolution, the so-called ‘business rules’ or system invariants, as well as in liveness requirements, stating e.g. that some desirable outcome will eventually be produced. Both classes are examples of modal assertions, i.e. properties that are to be interpreted across a transition system capturing the system's dynamics. The relevance of modal reasoning in computing is witnessed by the fact that most university syllabi in the area include some incursion into modal logic, in particular in its temporal variants. The novelty is that, as happens with the notions of transition, behaviour, or observational equivalence, modalities in Coalgebra acquire a shape. That is, they become parametric on whatever type of behaviour, and corresponding coinduction scheme, seems appropriate for addressing the problem at hand. In this context, the paper revisits Coalgebra from a computational perspective, focussing on three topics central to software design: how systems are modelled, how models are composed, and finally, how properties of their behaviours can be expressed and verified.

Fuzziness, as a way to express imprecision, or uncertainty, in computation is an important feature in a number of current application scenarios: from hybrid systems interfacing with sensor networks with error boundaries, to knowledge bases collecting data from often non-coincident human experts. Their abstraction in e.g. fuzzy transition systems has led to a number of mathematical structures to model such systems and reason about them. This paper adds two more elements to this family: two modal logics, framed as institutions, to reason about fuzzy transition systems and the corresponding processes. This paves the way to the development, in the second part of the paper, of an associated theory of structured specification for fuzzy computational systems.
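The coalgebraic view of "how systems are modelled" is concrete enough to sketch in code: a coalgebra is just a map c : S → F(S) packaging one step of observable behaviour, and unfolding it yields the system's behaviour independently of its internal state space. The example below (my own generic illustration, not the paper's code) takes F(S) = Output × S, i.e. stream systems.

```python
# A coalgebra for F(S) = Output x S is a function S -> (Output, S).
# 'behaviour' unfolds any such coalgebra into a finite prefix of its
# observable trace -- the coinductive behaviour, cut off at n steps.

def behaviour(coalg, state, n):
    """Observe n steps of the system starting from the given state."""
    out = []
    for _ in range(n):
        obs, state = coalg(state)
        out.append(obs)
    return out

# Two different state spaces realising the same behaviour (the parity
# stream 0,1,0,1,...): behavioural equivalence ignores internal states.
flip = lambda b: (b, 1 - b)                  # states {0, 1}
mod  = lambda k: (k % 2, k + 1)              # states: all naturals

assert behaviour(flip, 0, 6) == behaviour(mod, 0, 6) == [0, 1, 0, 1, 0, 1]
```

Changing the functor F changes the kind of system: F(S) = P(S) gives nondeterministic transition systems, and weighting transitions by degrees in [0, 1] gives the fuzzy transition systems the paper's two modal logics reason about.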
Foundations of Software Science and Computation Structures
This open access book constitutes the proceedings of the 24th International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2021, which was held from March 27 to April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 28 regular papers presented in this volume were carefully reviewed and selected from 88 submissions. They deal with research on theories and methods to support the analysis, integration, synthesis, transformation, and verification of programs and software systems.
Global homotopy theory
This book introduces a new context for global homotopy theory, i.e.,
equivariant homotopy theory with universal symmetries. Many important
equivariant theories naturally exist not just for a particular group, but in a
uniform way for all groups in a specific class. Prominent examples are
equivariant stable homotopy, equivariant K-theory, or equivariant bordism.
Global equivariant homotopy theory studies such uniform phenomena, i.e., the
adjective `global' refers to simultaneous and compatible actions of all compact
Lie groups.
We give a self-contained treatment of unstable and stable global homotopy
theory, modeled by orthogonal spaces and orthogonal spectra, respectively,
under global equivalences. Specific topics include the global stable homotopy
category, operations on equivariant homotopy groups, global model structures,
and ultra-commutative multiplications. The book includes many explicit examples
and detailed calculations.