    The Geometry of Concurrent Interaction: Handling Multiple Ports by Way of Multiple Tokens (Long Version)

    We introduce a geometry of interaction model for Mazza's multiport interaction combinators, a graph-theoretic formalism which is able to faithfully capture concurrent computation as embodied by process algebras like the π-calculus. The introduced model is based on token machines in which not one but multiple tokens are allowed to traverse the underlying net at the same time. We prove soundness and adequacy of the introduced model. The former is proved as a simulation result between the token machines one obtains along any reduction sequence. The latter is obtained by a fine analysis of convergence, both in nets and in token machines.
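
    As a rough operational intuition for "multiple tokens traversing the net at the same time", here is a toy sketch in Python. Everything in it is an illustrative simplification: the net is reduced to a bare successor graph, a token is just an identifier plus a position, and concurrency is modelled as round-robin interleaving; Mazza's multiport combinators and the actual GoI routing are far richer.

    # Toy multi-token machine: several tokens step through one net,
    # interleaved round-robin. Illustrative only; not Mazza's model.
    from collections import deque

    def run_tokens(edges, starts):
        """edges: node -> successor node (a missing key is an exit port).
        starts: one initial position per token.
        Returns each token's final position."""
        active = deque(enumerate(starts))        # (token id, position)
        final = {}
        while active:
            tid, pos = active.popleft()
            if pos in edges:
                active.append((tid, edges[pos])) # token moves one edge
            else:
                final[tid] = pos                 # token halts at an exit
        return final

    # Two tokens traverse the same net concurrently (interleaved):
    net = {"in1": "mid", "in2": "mid", "mid": "out"}
    print(run_tokens(net, ["in1", "in2"]))       # {0: 'out', 1: 'out'}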

    Cost Preserving Bisimulations for Probabilistic Automata

    Probabilistic automata constitute a versatile and elegant model for concurrent probabilistic systems. They are equipped with a compositional theory supporting abstraction, enabled by weak probabilistic bisimulation serving as the reference notion for summarising the effect of abstraction. This paper considers probabilistic automata augmented with costs. It extends the notions of weak transitions in probabilistic automata in such a way that the costs incurred along a weak transition are captured. This gives rise to cost-preserving and cost-bounding variations of weak probabilistic bisimilarity, for which we establish compositionality properties with respect to parallel composition. Furthermore, polynomial-time decision algorithms are proposed that can be used effectively to compute reward-bounding abstractions of Markov decision processes in a compositional manner.
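
    To illustrate what "capturing the costs incurred along a weak transition" can mean, here is a sketch on a plain, non-probabilistic transition system; the distributions central to probabilistic automata are deliberately omitted, and all names are ours. A weak transition is a tau* a tau* path, and the sketch computes its cheapest accumulated cost, Dijkstra-style.

    # Cheapest cost of each weak transition s ==a==> t (tau* a tau*)
    # on a cost-labelled LTS. Hedged sketch; not the paper's algorithm.
    import heapq

    TAU = "tau"

    def cheapest_weak_transitions(trans, start):
        """trans: state -> list of (label, cost, successor).
        Returns {(label, target): minimal accumulated cost}."""
        def tau_closure(s):                  # Dijkstra over tau-moves only
            dist, pq = {s: 0}, [(0, s)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue
                for lbl, c, v in trans.get(u, []):
                    if lbl == TAU and d + c < dist.get(v, float("inf")):
                        dist[v] = d + c
                        heapq.heappush(pq, (d + c, v))
            return dist

        best = {}
        for u, d_pre in tau_closure(start).items():
            for lbl, c, v in trans.get(u, []):
                if lbl == TAU:
                    continue                 # need a visible label here
                for w, d_post in tau_closure(v).items():
                    total = d_pre + c + d_post
                    if total < best.get((lbl, w), float("inf")):
                        best[(lbl, w)] = total
        return best

    trans = {
        "s0": [(TAU, 1, "s1"), ("a", 5, "s3")],
        "s1": [("a", 2, "s2")],
        "s2": [(TAU, 1, "s3")],
    }
    print(cheapest_weak_transitions(trans, "s0"))
    # {('a', 's3'): 4, ('a', 's2'): 3} -- the tau-route to s3 undercuts
    # the direct a-step of cost 5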

    Flexible refinement

    To help make refinement more usable in practice we introduce a general, flexible model of refinement. This is defined in terms of the contexts in which an entity can appear and the observations that can be made of it in those contexts. Our general model is expressed in terms of an operational semantics, and by exploiting the well-known isomorphism between state-based relational semantics and event-based labelled transition semantics we were able to take particular models from both the state- and event-based literature, reflect on them, and gradually evolve our general model. We are also able to view our general model both as a testing semantics and as a logical theory with refinement as implication. Our general model can be used as a bridge between different special models, and using this bridge we compare the definitions of determinism found in different special models. We do this because the reduction of nondeterminism underpins many definitions of refinement found in a variety of special models. To our surprise we find that the definition of determinism commonly used in the process algebra literature is at odds with determinism as defined in other special models. In order to rectify this situation we return to the intuitions expressed by Milner in CCS, and by formalising these intuitions we are able to define determinism in process algebra in such a way that it is no longer at odds with the definitions we have taken from other special models. Using our abstract definition of determinism we are able to construct a new model, interactive branching programs, that is an implementable subset of process algebra. Later in the chapter we show explicitly how five special models, taken from the literature, are instances of our general model; this is done simply by fixing the sets of contexts and observations involved. Next we define vertical refinement on our general model. Vertical refinement can be seen as a generalisation of what, in the literature, has been called action refinement or non-atomic refinement; alternatively, by viewing a layer as a logical theory, vertical refinement is a theory morphism, formalised as a Galois connection. By constructing a vertical refinement between broadcast processes and interactive branching programs we can see how interactive branching programs can be implemented on a platform providing broadcast communication, but we have been unable to extend this theory morphism to implement all of process algebra using broadcast communication. Upon investigation we show that the problem arises with the same examples that caused the problem with the definition of determinism in process algebra. Finally we illustrate the usefulness of our flexible general model by formally developing a single entity that contains events that use handshake communication and events that use broadcast communication.
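
    The "contexts and observations" view of refinement admits a compact executable reading: an entity refines another when, in every context, it produces no observation the other could not. In the toy below, refines and obs are illustrative placeholders rather than the chapter's formal definitions; entities are sets of traces, a context is an alphabet that filters traces, and refinement shows up as reduction of nondeterminism.

    # Refinement as containment of observations across all contexts.
    # Hedged sketch; the names and the trace-set model are assumptions.
    def refines(impl, spec, contexts, obs):
        """impl refines spec iff, in every context, every observation
        of impl is also an observation of spec."""
        return all(obs(impl, c) <= obs(spec, c) for c in contexts)

    def obs(entity, context):
        # observations = traces whose events the context lets through
        return {t for t in entity if set(t) <= context}

    spec = {"ab", "ac", "b"}       # nondeterministic: may do ab or ac
    impl = {"ab", "b"}             # resolves some of that nondeterminism
    contexts = [{"a", "b"}, {"a", "b", "c"}]
    print(refines(impl, spec, contexts, obs))   # True
    print(refines(spec, impl, contexts, obs))   # False: spec observes "ac"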

    Behavioural hybrid process calculus

    Process algebra is a theoretical framework for the modelling and analysis of the behaviour of concurrent discrete event systems that has been developed within computer science over the past quarter century. It has generated a deeper understanding of the nature of concepts such as observable behaviour in the presence of nondeterminism, system composition by interconnection of concurrent component systems, and notions of behavioural equivalence of such systems. It has contributed fundamental concepts such as bisimulation, and has been successfully used in a wide range of problems and practical applications in concurrent systems. We believe that the basic tenets of process algebra are highly compatible with the behavioural approach to dynamical systems. In our contribution we present an extension of classical process algebra that is suitable for the modelling and analysis of continuous and hybrid dynamical systems. It provides a natural framework for the concurrent composition of such systems, and can deal with nondeterministic behaviour that may arise from the occurrence of internal switching events. Standard process algebraic techniques lead to the characterisation of the observable behaviour of such systems as equivalence classes under some suitably adapted notion of bisimulation.
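
    For a reference point on the discrete side, the classical notion being adapted can be computed by naive partition refinement over a finite labelled transition system. The sketch below (all names ours) computes strong bisimilarity classes; the calculus itself adapts bisimulation to continuous and hybrid behaviour, which this toy does not attempt.

    # Coarsest strong-bisimulation partition of a finite LTS by naive
    # partition refinement. Illustrative; not the hybrid adaptation.
    def bisimulation_classes(states, trans):
        """trans: state -> set of (label, successor)."""
        def signature(s, blocks):
            # which block each labelled step of s can reach
            def block_of(t):
                return next(i for i, b in enumerate(blocks) if t in b)
            return frozenset((lbl, block_of(t)) for lbl, t in trans.get(s, set()))

        blocks = [frozenset(states)]
        while True:
            new_blocks = []
            for b in blocks:                 # split each block by signature
                groups = {}
                for s in b:
                    groups.setdefault(signature(s, blocks), set()).add(s)
                new_blocks.extend(frozenset(g) for g in groups.values())
            if len(new_blocks) == len(blocks):
                return new_blocks            # stable: no block split further
            blocks = new_blocks

    trans = {
        "p": {("a", "p1"), ("a", "p2")},     # two a-branches...
        "q": {("a", "q1")},                  # ...vs one: still bisimilar
        "p1": set(), "p2": set(), "q1": set(),
    }
    print(bisimulation_classes({"p", "q", "p1", "p2", "q1"}, trans))
    # [frozenset({'p', 'q'}), frozenset({'p1', 'p2', 'q1'})] (up to order)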

    Compositional competitiveness for distributed algorithms

    We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive. In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction.

    Comment: 33 pages, 2 figures; full version of STOC 96 paper titled "Modular competitiveness for distributed algorithms."
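
    The kl-composition can be pictured as chaining two multiplicative throughput bounds. The schematic below uses our own notation, not the paper's: tput is throughput over a fixed amount of work, A[S] is algorithm A calling subroutine S, and starred symbols denote the optimal choices; the paper's composition theorem supplies the conditions under which the middle comparison is legitimate.

    \[
    \mathrm{tput}(A[S]) \;\ge\; \frac{1}{k}\,\mathrm{tput}(A^{*}[S])
                        \;\ge\; \frac{1}{k}\cdot\frac{1}{l}\,\mathrm{tput}(A^{*}[S^{*}]),
    \]

    so composing a k-competitive algorithm with an l-competitive subroutine loses at most a factor k at the top level and a factor l inside the subroutine, i.e. the combination is kl-competitive.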