
    Generic Trace Semantics via Coinduction

    Trace semantics has been defined for various kinds of state-based systems, notably with different forms of branching such as non-determinism vs. probability. In this paper we claim to identify one underlying mathematical structure behind these "trace semantics", namely coinduction in a Kleisli category. This claim is based on our technical result that, under a suitably order-enriched setting, a final coalgebra in a Kleisli category is given by an initial algebra in the category Sets. Formerly, the theory of coalgebras has been employed mostly in Sets, where coinduction yields the finer process semantics of bisimilarity. Therefore this paper extends the application field of coalgebras, providing a new instance of the principle "process semantics via coinduction". (To appear in Logical Methods in Computer Science; 36 pages.)
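
    As a concrete illustration of the slogan, here is a minimal Haskell sketch (ours, not the paper's construction): for non-deterministic systems the coalgebra lives in the Kleisli category of the list monad, and the map into the initial algebra of finite words computes a state's set of accepted traces. The names KlCoalg, traces and example are assumptions made for this sketch.

        import Data.List (nub)

        -- A coalgebra in the Kleisli category of the list (non-determinism) monad,
        -- for the functor F X = 1 + A x X: a state either terminates or emits a
        -- label and moves on, with non-deterministic branching.
        type KlCoalg a s = s -> [Either () (a, s)]

        -- Finite-trace semantics: by the paper's result, the final coalgebra in the
        -- Kleisli category is carried by the initial F-algebra in Sets, i.e. finite
        -- words.  The induced map sends a state to its set of accepted traces
        -- (bounded by a depth n here, so the sketch terminates on cyclic systems).
        traces :: Eq a => Int -> KlCoalg a s -> s -> [[a]]
        traces 0 _ _ = []
        traces n c s = nub (concatMap step (c s))
          where
            step (Left ())       = [[]]                            -- termination: empty trace
            step (Right (a, s')) = map (a :) (traces (n - 1) c s')

        -- A small example system and its traces up to depth 3:
        example :: KlCoalg Char Int
        example 0 = [Right ('a', 1), Right ('b', 0)]
        example 1 = [Left (), Right ('c', 0)]
        example _ = []
        -- traces 3 example 0  ==>  ["a","ba"]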

    Logical Relations for Monadic Types

    Logical relations and their generalizations are a fundamental tool in proving properties of lambda-calculi, e.g., yielding sound principles for observational equivalence. We propose a natural notion of logical relations able to deal with the monadic types of Moggi's computational lambda-calculus. The treatment is categorical, and is based on notions of subsconing, mono factorization systems, and monad morphisms. Our approach has a number of interesting applications, including cases for lambda-calculi with non-determinism (where being in logical relation means being bisimilar), dynamic name creation, and probabilistic systems. (83 pages.)
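
    For intuition, here is a small, hand-rolled instance of the kind of relational lifting the paper derives categorically: a relation on values lifted to the list (non-determinism) monad in the Egli-Milner style, where related computations must match each other's outcomes. The names Rel and liftList are ours, purely illustrative.

        -- A binary relation on values, and its lifting to the list monad:
        -- every outcome on either side must be matched by a related outcome
        -- on the other side.
        type Rel a b = a -> b -> Bool

        liftList :: Rel a b -> Rel [a] [b]
        liftList r xs ys =
          all (\x -> any (r x) ys) xs &&
          all (\y -> any (\x -> r x y) xs) ys

        -- liftList (==) [1,2,2] [2,1]  ==>  True    (same set of outcomes)
        -- liftList (==) [1,2]   [1]    ==>  False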

    A Coalgebraic Semantics for Imperative Programming Languages

    In the theory of programming languages, one often takes two complementary perspectives. In operational semantics, one defines and reasons about the behaviour of programs; in denotational semantics, one abstracts away implementation details and reasons about programs as mathematical objects, or denotations. The denotational semantics should be compositional, meaning that denotations of programs are determined by the denotations of their parts. It should also be adequate with respect to operational equivalence: programs with the same denotation should be behaviourally indistinguishable. One often has to prove adequacy and compositionality independently for different languages, and the proofs are often laborious and repetitive.

    These proofs were provided systematically, in the context of process algebras, by the mathematical operational semantics framework of Turi and Plotkin, which represented transition systems as coalgebras and program syntax by free algebras; operational specifications were given by distributive laws of syntax over behaviour. By framing the semantics at this abstract level, one derives denotational and operational semantics which are guaranteed to be adequate and compositional for a wide variety of examples. However, despite speculation on the possibility, it is hard to apply the framework to programming languages, because one obtains undesirably fine-grained behavioural equivalences and unconventional notions of operational semantics. Moreover, the behaviour of these languages is often formalised in a different way, for instance via computational effects, which may be thought of as an interface between programs and external factors such as non-determinism or a variable store, and comodels, or transition systems which implement these effects.

    This thesis adapts the mathematical operational semantics framework to provide semantics for various classes of programming languages. After identifying the need for such an adaptation, we show how program behaviour may be characterised by final coalgebras in suitably order-enriched Kleisli categories. We define both operational and denotational semantics, first for languages with syntactic effects, and then for languages with effects and/or comodels given by a Lawvere theory. To ensure adequacy and compositionality, we define concrete and abstract operational rule-formats for these languages, based on the idea of evaluation-in-context; we give syntactic and then categorical proofs that those properties are guaranteed by operational specifications in these rule-formats.
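
    To fix ideas, here is a minimal Haskell sketch of the bialgebraic set-up the thesis builds on: syntax is the initial algebra of a signature functor, behaviour is another functor, and a distributive law of syntax over behaviour induces an operational semantics on closed terms. This is a plain Sets-level simplification, not the order-enriched Kleisli version developed in the thesis; the toy signature and behaviour are ours.

        {-# LANGUAGE DeriveFunctor #-}

        -- Closed terms over a signature functor f (the initial algebra).
        newtype Term f = In (f (Term f))

        -- Toy signature: integer constants and pointwise addition of streams.
        data Sig x = Num Int | Add x x deriving Functor

        -- Toy behaviour: observe one output and a continuation (streams of Ints).
        type Beh x = (Int, x)

        -- The operational specification, as a distributive law Sig . Beh => Beh . Sig.
        law :: Sig (Beh x) -> Beh (Sig x)
        law (Num n)             = (n, Num n)         -- the constant stream n, n, n, ...
        law (Add (m, x) (n, y)) = (m + n, Add x y)   -- pointwise sum of streams

        -- The induced operational model: a Beh-coalgebra on closed terms.
        run :: Term Sig -> Beh (Term Sig)
        run (In t) = fmap In (law (fmap run t))

        -- e.g. iterating run on  In (Add (In (Num 1)) (In (Num 2)))  observes 3, 3, 3, ...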

    Effects and Effect Handlers for Programmable Inference

    Inference algorithms for probabilistic programming are complex imperative programs with many moving parts. Efficient inference often requires customising an algorithm to a particular probabilistic model or problem, sometimes called inference programming. Most inference frameworks are implemented in languages that lack a disciplined approach to side effects, which can result in monolithic implementations where the structure of the algorithms is obscured and inference programming is hard. Functional programming with typed effects offers a more structured and modular foundation for programmable inference, with monad transformers being the primary structuring mechanism explored to date. This paper presents an alternative approach to programmable inference, based on algebraic effects, building on recent work that used algebraic effects to represent probabilistic models. Using effect signatures to specify the key operations of the algorithms, and effect handlers to modularly interpret those operations for specific variants, we develop three abstract algorithms, or inference patterns, representing three important classes of inference: Metropolis-Hastings, particle filtering, and guided optimisation. We show how our approach reveals the algorithms' high-level structure, and makes it easy to tailor and recombine their parts into new variants. We implement the three inference patterns as a Haskell library, and discuss the pros and cons of algebraic effects vis-a-vis monad transformers as a structuring mechanism for modular imperative algorithm design. It should be possible to reimplement our library in any typed functional language able to emulate effects and effect handlers.
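
    As a toy illustration of the style (a free-monad-style sketch in plain Haskell, not the library's actual API; the names Prog, Coin, enumerate and model are ours): an effect signature declares a sampling operation, a model is written against that signature, and a handler fixes one interpretation, here exhaustive enumeration rather than the paper's Metropolis-Hastings or particle-filtering patterns.

        -- Programs as syntax over an effect signature sig.
        data Prog sig a = Return a | Op (sig (Prog sig a))

        -- Effect signature with a single sampling operation: flip a coin with bias p.
        data Coin k = Flip Double (Bool -> k)

        -- A handler: interpret Flip by exhaustively enumerating weighted outcomes.
        enumerate :: Prog Coin a -> [(a, Double)]
        enumerate (Return a)      = [(a, 1)]
        enumerate (Op (Flip p k)) =
          [ (a, w * p)       | (a, w) <- enumerate (k True)  ] ++
          [ (a, w * (1 - p)) | (a, w) <- enumerate (k False) ]

        -- A tiny model written against the signature: number of heads in two flips.
        model :: Prog Coin Int
        model = Op (Flip 0.6 (\x ->
                Op (Flip 0.6 (\y ->
                Return (fromEnum x + fromEnum y)))))
        -- enumerate model  ==>  [(2,0.36),(1,0.24),(1,0.24),(0,0.16)]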

    Graded Hoare Logic and its Categorical Semantics

    Deductive verification techniques based on program logics (i.e., the family of Floyd-Hoare logics) are a powerful approach for program reasoning. Recently, there has been a trend of increasing the expressive power of such logics by augmenting their rules with additional information to reason about program side-effects. For example, general program logics have been augmented with cost analyses, logics for probabilistic computations have been augmented with estimate measures, and logics for differential privacy with indistinguishability bounds. In this work, we unify these various approaches via the paradigm of grading, adapted from the world of functional calculi and semantics. We propose Graded Hoare Logic (GHL), a parameterisable framework for augmenting program logics with a preordered monoidal analysis. We develop a semantic framework for modelling GHL such that grading, logical assertions (pre- and post-conditions) and the underlying effectful semantics of an imperative language can be integrated together. Central to our framework is the notion of a graded category, which we extend here, introducing graded Freyd categories which provide a semantics that can interpret many examples of augmented program logics from the literature. We leverage coherent fibrations to model the base assertion language, and thus the overall setting is also fibrational.
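
    For a flavour of grading (our rendering of representative rules, not the paper's exact notation): judgements carry a grade from a preordered monoid, grades compose along sequencing, and they can be weakened along the preorder. Instantiating the monoid with natural-number costs recovers a simple cost analysis; other monoids give probability or privacy bounds.

        \[
        \frac{\vdash_{m_1} \{P\}\; C_1 \;\{Q\} \qquad \vdash_{m_2} \{Q\}\; C_2 \;\{R\}}
             {\vdash_{m_1 \cdot m_2} \{P\}\; C_1 ; C_2 \;\{R\}}
        \qquad
        \frac{\vdash_{m} \{P\}\; C \;\{Q\} \qquad m \leq m'}
             {\vdash_{m'} \{P\}\; C \;\{Q\}}
        \]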

    Functional Query Languages with Categorical Types

    We study three category-theoretic types in the context of functional query languages (typed lambda-calculi extended with additional operations for bulk data processing).

    Semantic networks

    A semantic network is a graph of the structure of meaning. This article introduces semantic network systems and their importance in Artificial Intelligence, followed by I. the early background; II. a summary of the basic ideas and issues including link types, frame systems, case relations, link valence, abstraction, inheritance hierarchies and logic extensions; and III. a survey of ‘world-structuring’ systems including ontologies, causal link models, continuous models, relevance, formal dictionaries, semantic primitives and intersecting inference hierarchies. Speed and practical implementation are briefly discussed. The conclusion argues for a synthesis of relational graph theory, graph-grammar theory and order theory based on semantic primitives and multiple intersecting inference hierarchies.
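
    As a concrete rendering of the basic ideas (typed links and inheritance along an "isa" hierarchy), here is a minimal Haskell sketch; the encoding and the example facts are ours, purely illustrative.

        import Data.Maybe (listToMaybe)

        type Node = String
        data Link = Isa | HasPart | Color deriving (Eq, Show)   -- a few link types
        type Net  = [(Node, Link, Node)]

        net :: Net
        net = [ ("canary", Isa,     "bird")
              , ("bird",   Isa,     "animal")
              , ("bird",   HasPart, "wings")
              , ("canary", Color,   "yellow") ]

        -- Follow Isa links upward, collecting the inheritance chain.
        ancestors :: Net -> Node -> [Node]
        ancestors g n = n : concat [ ancestors g m | (s, Isa, m) <- g, s == n ]

        -- Look up a property on a node or, failing that, on an ancestor (inheritance).
        lookupProp :: Net -> Link -> Node -> Maybe Node
        lookupProp g l n =
          listToMaybe [ v | a <- ancestors g n, (s, l', v) <- g, s == a, l' == l ]

        -- lookupProp net HasPart "canary"  ==>  Just "wings"   (inherited from "bird")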

    Categorical Modelling of Logic Programming: Coalgebra, Functorial Semantics, String Diagrams

    Logic programming (LP) is driven by the idea that logic subsumes computation. Over the past 50 years, along with the emergence of numerous logic systems, LP has grown into a large family whose members are designed to deal with various computation scenarios. Among them, we focus on two of the most influential quantitative variants: probabilistic logic programming (PLP) and weighted logic programming (WLP). In this thesis, we investigate a uniform understanding of logic programming and its quantitative variants from the perspective of category theory. In particular, we explore both a coalgebraic and an algebraic understanding of LP, PLP and WLP.

    On the coalgebraic side, we propose a goal-directed strategy for calculating the probabilities and weights of atoms in PLP and WLP programs, respectively. We then develop a coalgebraic semantics for PLP and WLP, built on existing coalgebraic semantics for LP. By choosing appropriate functors representing probabilistic and weighted computation, these coalgebraic semantics characterise exactly the goal-directed behaviour of PLP and WLP programs.

    On the algebraic side, we define a functorial semantics of LP, PLP and WLP, such that the three share the same syntactic categories of string diagrams and differ in the semantic categories according to their data/computation type. This allows for a uniform diagrammatic expression of certain semantic constructs. Moreover, following approaches similar to those used for Bayesian networks, it provides a framework to formalise the connection between PLP and Bayesian networks. Furthermore, we prove a sound and complete axiomatization of the semantic category for LP in terms of string diagrams. Together with the diagrammatic presentation of the fixed-point semantics, one obtains a decidable calculus for proving the equivalence of propositional definite logic programs.
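
    As a small illustration of the goal-directed, derivation-based reading of quantitative programs (a hand-rolled propositional Haskell sketch; the sum-product evaluation and all names are ours, not the thesis's constructions): the weight of an atom sums over its clauses and multiplies over each clause body.

        -- A propositional weighted clause: head, body atoms, and a weight.
        type Atom = String
        data Clause w = Clause { hd :: Atom, body :: [Atom], wt :: w }

        -- Goal-directed weight of an atom: sum over matching clauses, product over
        -- body atoms (bounded by derivation depth so cyclic programs terminate).
        weight :: Num w => Int -> [Clause w] -> Atom -> w
        weight 0 _  _ = 0
        weight d cs a =
          sum [ wt c * product (map (weight (d - 1) cs) (body c))
              | c <- cs, hd c == a ]

        -- Example with probabilities as weights:  rain (0.3);  wet :- rain (0.9).
        prog :: [Clause Double]
        prog = [ Clause "rain" []       0.3
               , Clause "wet"  ["rain"] 0.9 ]
        -- weight 5 prog "wet"  ==>  0.27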