12 research outputs found

    On Kleene Algebra vs. Process Algebra

    We try to clarify the relationship between Kleene algebra and process algebra, building on the very recent work in both areas. Both for concurrent Kleene algebra (CKA) with communications and for truly concurrent process algebra APTC with Kleene star and parallel star, the extended Milner expansion law a∥b = a⋅b + b⋅a + a∥b + a∣b holds, with a, b being primitives (atomic actions), ∥ the parallel composition, + the alternative composition, ⋅ the sequential composition, and ∣ the communication merge (against the background of computation). CKA and APTC are both truly concurrent computation models: they can have the same syntax (primitives and operators), while their semantics may be the same or may differ.
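    To make the shared syntax concrete, the sketch below spells out a term language with exactly the primitives and operators named above. The constructor names are illustrative assumptions, not the formal syntax of CKA or APTC.

```haskell
-- A minimal sketch of a shared term syntax for CKA/APTC-style expressions
-- (illustrative only; the papers' formal syntax differs).
data Term a
  = Act a               -- primitive (atomic action)
  | Term a :+: Term a   -- alternative composition (+)
  | Term a :.: Term a   -- sequential composition (⋅)
  | Term a :|: Term a   -- parallel composition (∥)
  | Term a :&: Term a   -- communication merge (∣)
  deriving Show

-- Right-hand side of the extended Milner expansion law for primitives a, b:
--   a ∥ b = a⋅b + b⋅a + a∥b + a∣b
expansion :: a -> a -> Term a
expansion a b =
  (Act a :.: Act b) :+: (Act b :.: Act a) :+: (Act a :|: Act b) :+: (Act a :&: Act b)
```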

    Fixed-point elimination in the intuitionistic propositional calculus

    It is a consequence of existing literature that least and greatest fixed-points of monotone polynomials on Heyting algebras (that is, the algebraic models of the Intuitionistic Propositional Calculus) always exist, even when these algebras are not complete as lattices. The reason is that these extremal fixed-points are definable by formulas of the IPC. Consequently, the μ-calculus based on intuitionistic logic is trivial, every μ-formula being equivalent to a fixed-point free formula. We give in this paper an axiomatization of least and greatest fixed-points of formulas, and an algorithm to compute a fixed-point free formula equivalent to a given μ-formula. The axiomatization of the greatest fixed-point is simple. The axiomatization of the least fixed-point is more complex; in particular, every monotone formula converges to its least fixed-point by Kleene's iteration in a finite number n of steps, but there is no uniform upper bound on the number of iterations. We extract, out of the algorithm, upper bounds for such n, depending on the size of the formula. For some formulas, we show that these upper bounds are polynomial and optimal.
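    A small worked instance of the Kleene iteration mentioned above (not taken from the paper), for the monotone formula φ(x) = a ∨ (b ∧ x):

```latex
% Kleene iteration from \bot for \varphi(x) = a \vee (b \wedge x),
% computed in an arbitrary Heyting algebra:
\varphi^{0}(\bot) = \bot, \qquad
\varphi^{1}(\bot) = a \vee (b \wedge \bot) = a, \qquad
\varphi^{2}(\bot) = a \vee (b \wedge a) = a.
% The iteration is stationary after one step, so \mu x.\, \varphi(x) \equiv a;
% other formulas require more steps, and the paper bounds how many.
```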

    Program Repair by Stepwise Correctness Enhancement

    Relative correctness is the property of a program to be more-correct than another with respect to a given specification. Whereas the traditional definition of (absolute) correctness divides candidate programs into two classes (correct and incorrect), relative correctness arranges candidate programs on the richer structure of a partial ordering. In other venues we discuss the impact of relative correctness on program derivation and on program verification. In this paper, we discuss the impact of relative correctness on program testing; specifically, we argue that when we remove a fault from a program, we ought to test the new program for relative correctness over the old program, rather than for absolute correctness. We present analytical arguments to support our position, as well as an empirical argument in the form of a small program whose faults are removed in a stepwise manner, its relative correctness rising with each fault removal until we obtain a correct program. Comment: In Proceedings PrePost 2016, arXiv:1605.0809
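    As an illustration of the testing discipline argued for above, here is a minimal sketch of checking relative correctness over a finite test suite. The types and names are hypothetical; the paper's definitions are relational and more general.

```haskell
-- Specification: which outputs are acceptable for each input.
type Spec a b = a -> b -> Bool
type Prog a b = a -> b

-- A program is correct on an input if its output satisfies the specification.
correctOn :: Spec a b -> Prog a b -> a -> Bool
correctOn spec p x = spec x (p x)

-- Relative correctness over a test suite: wherever the old program meets the
-- specification, the repaired program must meet it too.
moreCorrectOn :: [a] -> Spec a b -> Prog a b -> Prog a b -> Bool
moreCorrectOn tests spec newP oldP =
  all (\x -> not (correctOn spec oldP x) || correctOn spec newP x) tests
```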

    On Star Expressions and Completeness Theorems

    An open problem posed by Milner asks for a proof that a certain axiomatisation, which Milner showed is sound with respect to bisimilarity for regular expressions, is also complete. One of the main difficulties of the problem is the lack of a full Kleene theorem, since there are automata that cannot be specified, up to bisimilarity, by an expression. Grabmayer and Fokkink (2020) characterise those automata that can be expressed by regular expressions without the constant 1, and use this characterisation to give a positive answer to Milner's question for this subset of expressions. In this paper, we analyse Grabmayer and Fokkink's proof of completeness from the perspective of universal coalgebra, and thereby give an abstract account of their proof method. We then compare this proof method to another approach to completeness proofs from coalgebraic language theory. This culminates in two abstract proof methods for completeness, which we call the local and global approaches, and a description of when one method can be used in place of the other.
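    Bisimilarity, the equivalence at the heart of the problem, can be made concrete with a small sketch: finite labelled transition systems and a check that a given relation is a bisimulation. This is illustrative code only, unrelated to the paper's coalgebraic development.

```haskell
import Data.List (nub)

-- A finite labelled transition system as a list of transitions (illustrative types).
type State = Int
type Label = Char
type LTS   = [(State, Label, State)]

step :: LTS -> State -> Label -> [State]
step lts s a = [t | (s', a', t) <- lts, s' == s, a' == a]

-- R is a bisimulation between l1 and l2 if related states can match each
-- other's transitions with related successors.
isBisimulation :: LTS -> LTS -> [(State, State)] -> Bool
isBisimulation l1 l2 r = all ok r
  where
    labels = nub [a | (_, a, _) <- l1 ++ l2]
    ok (p, q) = and
      [ all (\p' -> any (\q' -> (p', q') `elem` r) (step l2 q a)) (step l1 p a)
        && all (\q' -> any (\p' -> (p', q') `elem` r) (step l1 p a)) (step l2 q a)
      | a <- labels ]
```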

    Continuity as a computational effect

    The original purpose of component-based development was to provide techniques to master complex software, through composition, reuse and parametrisation. However, such systems are rapidly moving towards a level in which software becomes prevalently intertwined with (continuous) physical processes. A possible way to accommodate the latter in component calculi relies on a suitable encoding of continuous behaviour as (yet another) computational effect. This paper introduces such an encoding through a monad which, in the compositional development of hybrid systems, may play a role similar to the one played by the 1+, powerset, and distribution monads in the characterisation of partial, nondeterministic and probabilistic components, respectively. This monad and its Kleisli category provide a universe in which the effects of continuity over (different forms of) composition can be suitably studied. This work is financed by the ERDF - European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia, within project POCI-01-0145-FEDER-016692. The first author is also sponsored by FCT grant SFRH/BD/52234/2013, and the second one by FCT grant SFRH/BSAB/113890/2015. Moreover, D. Hofmann and M. Martins are supported by the EU FP7 Marie Curie PIRSES-GA-2012-318986 project GeTFun: Generalizing Truth-Functionality and FCT project UID/MAT/04106/2013 through CIDMA.
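    For orientation, the three standard effect monads named above can be sketched as follows; these are textbook definitions, and the paper's monad for continuous behaviour is not reproduced here.

```haskell
import qualified Data.Map as Map

-- Partial components: Kleisli arrows of the Maybe (1+_) monad.
type Partial a b = a -> Maybe b

-- Nondeterministic components: Kleisli arrows of the (finite) powerset monad,
-- approximated here by lists.
type Nondet a b = a -> [b]

-- Probabilistic components: Kleisli arrows of a finitely supported
-- distribution monad.
newtype Dist x = Dist { runDist :: Map.Map x Double }

dirac :: x -> Dist x
dirac x = Dist (Map.singleton x 1.0)

bindDist :: Ord y => Dist x -> (x -> Dist y) -> Dist y
bindDist (Dist m) k =
  Dist (Map.fromListWith (+)
         [ (y, p * q) | (x, p) <- Map.toList m, (y, q) <- Map.toList (runDist (k x)) ])
```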

    Fixed-point elimination in the Intuitionistic Propositional Calculus (extended version)

    It is a consequence of existing literature that least and greatest fixed-points of monotone polynomials on Heyting algebras (that is, the algebraic models of the Intuitionistic Propositional Calculus) always exist, even when these algebras are not complete as lattices. The reason is that these extremal fixed-points are definable by formulas of the IPC. Consequently, the μ-calculus based on intuitionistic logic is trivial, every μ-formula being equivalent to a fixed-point free formula. We give in this paper an axiomatization of least and greatest fixed-points of formulas, and an algorithm to compute a fixed-point free formula equivalent to a given μ-formula. The axiomatization of the greatest fixed-point is simple. The axiomatization of the least fixed-point is more complex; in particular, every monotone formula converges to its least fixed-point by Kleene's iteration in a finite number n of steps, but there is no uniform upper bound on the number of iterations. We extract, out of the algorithm, upper bounds for such n, depending on the size of the formula. For some formulas, we show that these upper bounds are polynomial and optimal. Comment: extended version of arXiv:1601.0040

    Coalgebra for the working software engineer

    Often referred to as ‘the mathematics of dynamical, state-based systems’, Coalgebra claims to provide a compositional and uniform framework to specify, analyse and reason about state and behaviour in computing. This paper addresses this claim by discussing why Coalgebra matters for the design of models and logics for computational phenomena. To a great extent, in this domain one is interested in properties that are preserved along the system’s evolution, the so-called ‘business rules’ or system invariants, as well as in liveness requirements, stating that e.g. some desirable outcome will eventually be produced. Both classes are examples of modal assertions, i.e. properties that are to be interpreted across a transition system capturing the system’s dynamics. The relevance of modal reasoning in computing is witnessed by the fact that most university syllabi in the area include some incursion into modal logic, in particular in its temporal variants. The novelty is that, as happens with the notions of transition, behaviour, or observational equivalence, modalities in Coalgebra acquire a shape. That is, they become parametric on whatever type of behaviour, and corresponding coinduction scheme, seems appropriate for addressing the problem at hand. In this context, the paper revisits Coalgebra from a computational perspective, focussing on three topics central to software design: how systems are modelled, how models are composed, and finally, how properties of their behaviours can be expressed and verified. Fuzziness, as a way to express imprecision, or uncertainty, in computation is an important feature in a number of current application scenarios: from hybrid systems interfacing with sensor networks with error boundaries, to knowledge bases collecting data from often non-coincident human experts. Their abstraction in e.g. fuzzy transition systems led to a number of mathematical structures to model this sort of system and reason about them. This paper adds two more elements to this family: two modal logics, framed as institutions, to reason about fuzzy transition systems and the corresponding processes. This paves the way to the development, in the second part of the paper, of an associated theory of structured specification for fuzzy computational systems.
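    A minimal sketch of the coalgebraic view of components described above (textbook material, not the paper's own constructions): a component as a coalgebra exposing an observation and a reaction to inputs, together with a bounded check of an invariant along a finite run.

```haskell
-- A component as a coalgebra X -> O × X^I, presented behaviourally.
data Moore i o = Moore { observe :: o, react :: i -> Moore i o }

-- Example: a counter whose state is hidden behind its behaviour.
counter :: Int -> Moore Int Int
counter n = Moore n (\i -> counter (n + i))

-- Check an invariant ('business rule') along one finite run of inputs.
holdsFor :: (o -> Bool) -> [i] -> Moore i o -> Bool
holdsFor p []       m = p (observe m)
holdsFor p (i : is) m = p (observe m) && holdsFor p is (react m i)
```

    For instance, holdsFor (>= 0) [1, 2, 3] (counter 0) checks non-negativity of the output along that particular run; the coinductive proof principle generalises such checks to all runs.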

    Relational lattices: from databases to Universal Algebra

    Relational lattices are obtained by interpreting lattice connectives as natural join and inner union between database relations. Our study of their equational theory reveals that the variety generated by relational lattices has not been discussed in the existing literature. Furthermore, we show that addition of just the header constant to the lattice signature leads to undecidability of the quasiequational theory. Nevertheless, we also demonstrate that relational lattices are not as intangible as one may fear: for example, they do form a pseudoelementary class. We also apply the tools of Formal Concept Analysis and investigate the structure of relational lattices via their standard contexts. Furthermore, we show that the addition of typing rules and singleton constants allows a direct comparison with the monotonic relational expressions of Sagiv and Yannakakis.
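    To fix intuitions about the two lattice connectives mentioned above, here is a minimal executable sketch of natural join and inner union on finite relations; the representation is an illustrative assumption, not the paper's formalisation.

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- A relation is a header (set of attributes) plus a set of tuples,
-- each tuple a finite map from attributes to values.
type Attr  = String
type Tuple = Map.Map Attr String
data Rel   = Rel { header :: Set.Set Attr, tuples :: Set.Set Tuple }

-- Natural join: union of headers; keep merged tuples that agree on shared attributes.
naturalJoin :: Rel -> Rel -> Rel
naturalJoin (Rel h1 t1) (Rel h2 t2) =
  Rel (Set.union h1 h2)
      (Set.fromList [ Map.union u v
                    | u <- Set.toList t1, v <- Set.toList t2
                    , and [ Map.lookup a u == Map.lookup a v
                          | a <- Set.toList (Set.intersection h1 h2) ] ])

-- Inner union: intersection of headers; project both sides onto it and unite.
innerUnion :: Rel -> Rel -> Rel
innerUnion (Rel h1 t1) (Rel h2 t2) =
  let h    = Set.intersection h1 h2
      proj = Map.filterWithKey (\a _ -> Set.member a h)
  in Rel h (Set.union (Set.map proj t1) (Set.map proj t2))
```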

    Algebraic Neural Architecture Representation, Evolutionary Neural Architecture Search, and Novelty Search in Deep Reinforcement Learning

    Evolutionary algorithms have recently re-emerged as powerful tools for machine learning and artificial intelligence, especially when combined with advances in deep learning developed over the last decade. In contrast to the use of fixed architectures and rigid learning algorithms, we leveraged the open-endedness of evolutionary algorithms to make both theoretical and methodological contributions to deep reinforcement learning. This thesis explores and develops two major areas at the intersection of evolutionary algorithms and deep reinforcement learning: generative network architectures and behaviour-based optimization. Over three distinct contributions, both theoretical and experimental methods were applied to deliver a novel mathematical framework and experimental method for generative, modular neural network architecture search for reinforcement learning, and a generalized formulation of a behaviour-based optimization framework for reinforcement learning called novelty search. Experimental results indicate that both alternative, behaviour-based optimization and neural architecture search can each be used to improve learning in the popular Atari 2600 benchmark compared to DQN, a popular gradient-based method. These results are in line with related work demonstrating that strictly gradient-free methods are competitive with gradient-based reinforcement learning. These contributions, together with other successful combinations of evolutionary algorithms and deep learning, demonstrate that alternative architectures and learning algorithms to those conventionally used in deep learning should be seriously investigated in an effort to drive progress in artificial intelligence.
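    As a pointer to the behaviour-based optimization discussed, the sketch below shows the standard k-nearest-neighbour novelty score used in novelty search. The thesis presents a generalized formulation; this is only the textbook version, with illustrative types.

```haskell
import Data.List (sort)

-- A behaviour characterisation as a real-valued vector (illustrative assumption).
type Behaviour = [Double]

euclidean :: Behaviour -> Behaviour -> Double
euclidean xs ys = sqrt (sum [ (x - y) ^ (2 :: Int) | (x, y) <- zip xs ys ])

-- Novelty of a behaviour: mean distance to its k nearest neighbours in the archive.
novelty :: Int -> [Behaviour] -> Behaviour -> Double
novelty k archive b
  | null archive = 0
  | otherwise    = sum nearest / fromIntegral (length nearest)
  where
    nearest = take k (sort (map (euclidean b) archive))
```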