
    A theory of processes with durational actions

    A new bisimulation-based semantics, called performance equivalence, is proposed for a process algebra equipped with the TCSP parallel operator. This semantics relies on the basic assumption that actions are time-consuming and that their duration is statically fixed. Performance equivalence equates systems whenever they perform the same actions in the same amount of time, thus introducing a simple form of performance evaluation into process algebras. A comparison with other equivalences is provided; in particular, we show that performance equivalence is strictly finer than step bisimulation equivalence and strictly coarser than partial ordering bisimulation equivalence.
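
    To make the timing idea concrete, one standard way to phrase such an equivalence is sketched below, in our own notation and only as an illustration of the abstract (the paper's actual rules may differ): assume a fixed duration function and transitions annotated with completion times.

        \[
        \delta : Act \to \mathbb{N}, \qquad
        P \xrightarrow{\,a\,@\,t\,} P' \ \text{ meaning ``$P$ completes $a$ at time $t$''}
        \]
        \[
        \mathcal{R} \text{ is a performance bisimulation iff } (P,Q) \in \mathcal{R} \text{ and } P \xrightarrow{\,a\,@\,t\,} P'
        \text{ imply } Q \xrightarrow{\,a\,@\,t\,} Q' \text{ for some } Q' \text{ with } (P',Q') \in \mathcal{R}, \text{ and symmetrically.}
        \]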

    A Constraint-based Language for Multiparty Interactions.

    Multiparty interactions are commonplace in today's distributed systems. An agent usually communicates, in a single session, with other agents to accomplish a given task. Take, for instance, an online transaction involving the vendor, the client, the credit card system and the bank. When specifying this kind of system, we typically observe a single transaction comprising several (binary) communications that change the state of all the involved agents. Multiway synchronization process calculi, which move from a binary to a multiparty synchronization discipline, have been proposed to formally study the behavior of such systems. However, adopting models such as Bodei, Brodo, and Bruni's Core Network Algebra (CNA), where the number of participants in an interaction is not fixed a priori, leads to an exponential blow-up in the number of states/behaviors that can be observed from the system. In this paper we explore mechanisms to tackle this problem. We extend CNA with constraints that declaratively allow the modeler to restrict the interactions that should actually happen. Our extended process algebra, called CCNA, finds application in balancing the interactions in a concurrent system, leading to a simple, deadlock-free and fair solution to the Dining Philosophers problem. Our definition of constraints is general enough to allow costs to be accumulated in a multiparty negotiation; hence, only computations respecting the thresholds imposed by the modeler are observed. We use this machinery to neatly model a Service Level Agreement protocol. We develop the theory of CCNA, including its operational semantics and a behavioral equivalence that we prove to be a congruence. We also propose a prototypical implementation that allows us to automatically verify some of the systems explored in the paper.
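
    As a rough illustration of the idea of constraints filtering multiparty interactions, the Python toy sketch below treats an interaction as a set of offers whose accumulated cost must stay within a modeler-chosen threshold. The names and encoding are invented for this sketch; they do not follow CCNA's actual syntax or semantics.

        # Toy sketch only: candidate multiparty interactions are filtered by a
        # declarative constraint, here "accumulated cost within a threshold".
        THRESHOLD = 7

        def allowed(interaction, threshold=THRESHOLD):
            """interaction: list of (agent, action, cost) offers joined in one
            multiparty synchronization; it may fire only if the total cost
            respects the threshold imposed by the modeler."""
            return sum(cost for _, _, cost in interaction) <= threshold

        candidates = [
            [("client", "pay", 2), ("vendor", "ship", 3), ("bank", "debit", 1)],
            [("client", "pay", 2), ("vendor", "ship", 3), ("bank", "debit", 5)],
        ]
        for interaction in candidates:
            agents = [agent for agent, _, _ in interaction]
            print(agents, "fires" if allowed(interaction) else "is blocked")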

    Advanced reduction techniques for model checking


    Domain-Aware Session Types

    We develop a generalization of existing Curry-Howard interpretations of (binary) session types by relying on an extension of linear logic with features from hybrid logic, in particular modal worlds that indicate domains. These worlds govern domain migration, subject to a parametric accessibility relation familiar from the Kripke semantics of modal logic. The result is an expressive new typed process framework for domain-aware, message-passing concurrency. Its logical foundations ensure that well-typed processes enjoy session fidelity, global progress, and termination. Typing also ensures that processes only communicate with accessible domains and so respect the accessibility relation. Remarkably, our domain-aware framework can specify scenarios in which domain information is available only at runtime; flexible accessibility relations can be cleanly defined and statically enforced. As a specific application, we introduce domain-aware multiparty session types, in which global protocols can express arbitrarily nested sub-protocols via domain migration. We develop a precise analysis of these multiparty protocols by reduction to our binary domain-aware framework: complex domain-aware protocols can be reasoned about at the right level of abstraction, also ensuring the principled transfer of key correctness properties from the binary to the multiparty setting.
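
    Purely as an illustration of the accessibility discipline described above, the hypothetical Python sketch below uses invented domain names and a runtime check; in the paper the discipline is enforced statically by typing, not by runtime checks.

        # Hypothetical sketch: an accessibility relation over domains, and a
        # session endpoint that may only migrate along that relation.
        ACCESSIBLE = {("client", "gateway"), ("gateway", "bank")}

        class Session:
            def __init__(self, domain):
                self.domain = domain

            def migrate(self, target):
                """Domain migration, permitted only between accessible domains."""
                if (self.domain, target) not in ACCESSIBLE:
                    raise PermissionError(f"cannot reach {target} from {self.domain}")
                self.domain = target
                return self

        Session("client").migrate("gateway").migrate("bank")  # respects the relation
        # Session("client").migrate("bank") would raise: "bank" is not directly accessible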

    Software redundancy: what, where, how

    Software systems have become pervasive in everyday life and are the core component of many crucial activities. An inadequate level of reliability may lead to the commercial failure of a software product. Still, despite the commitment and the rigorous verification processes employed by developers, software is deployed with faults. To increase the reliability of software systems, researchers have investigated the use of various forms of redundancy. Informally, a software system is redundant when it performs the same functionality through the execution of different elements. Redundancy has been extensively exploited in many software engineering techniques, for example for fault-tolerance and reliability engineering, and in self-adaptive and self-healing programs. Despite the many uses, though, there is no formalization or study of software redundancy to support a proper and effective design of software. Our intuition is that a systematic and formal investigation of software redundancy will lead to more, and more effective, uses of redundancy. This thesis develops this intuition and proposes a set of ways to characterize redundancy both qualitatively and quantitatively. We first formalize the intuitive notion of redundancy whereby two code fragments are considered redundant when they perform the same functionality through different executions. On the basis of this abstract and general notion, we then develop a practical method to obtain a measure of software redundancy. We prove the effectiveness of our measure by showing that it distinguishes between shallow differences, where apparently different code fragments reduce to the same underlying code, and deep code differences, where the algorithmic nature of the computations differs. We also demonstrate that our measure is useful for developers, since it is a good predictor of the effectiveness of techniques that exploit redundancy. Besides formalizing the notion of redundancy, we investigate the pervasiveness of redundancy intrinsically found in modern software systems. Intrinsic redundancy is a form of redundancy that occurs as a by-product of modern design and development practices. We have observed that intrinsic redundancy is indeed present in software systems, and that it can be successfully exploited for good purposes. This thesis proposes a technique to automatically identify equivalent method sequences in software systems to help developers assess the presence of intrinsic redundancy. We demonstrate the effectiveness of the technique by showing that it identifies the majority of equivalent method sequences in a system with good precision and performance.
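
    The following Python sketch is a deliberately naive illustration of the shallow-versus-deep distinction mentioned above. The fragment names and the metric are ours, and the metric is only a crude static proxy; the thesis's measure compares actual executions.

        # Crude proxy for "how differently do two fragments compute the same thing?":
        # Jaccard distance over bytecode opcode counts. Illustration only.
        import dis
        from collections import Counter

        def add_append(xs, x):
            xs.append(x)
            return xs

        def add_append_renamed(lst, item):   # shallow difference: only names change
            lst.append(item)
            return lst

        def add_insert(xs, x):               # deeper difference: a different API/algorithm
            xs.insert(len(xs), x)
            return xs

        def dissimilarity(f, g):
            """Jaccard distance over the bytecode opcode counts of f and g."""
            a = Counter(i.opname for i in dis.get_instructions(f))
            b = Counter(i.opname for i in dis.get_instructions(g))
            return 1 - sum((a & b).values()) / sum((a | b).values())

        print(dissimilarity(add_append, add_append_renamed))  # 0.0: shallow difference
        print(dissimilarity(add_append, add_insert))          # > 0: deeper difference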

    On the Finitary Characterization of pi-Congruences

    Some alternative characterizations of late full congruences, either strong or weak, are presented. Those congruences are classically defined by requiring the corresponding ground bisimilarity under all name substitutions. We first improve on those infinitary definitions by showing that the congruences can be alternatively characterized in the pi-calculus by sticking to a finite number of carefully identified substitutions, and hence by resorting to only a finite number of ground bisimilarity checks. Then we investigate the same issue in both the ground and the non-ground pi-xsi-calculus, a CCS-like process algebra whose ground version has already been proved to coincide with the ground pi-calculus. The pi-xsi-calculus perspective allows processes to be explicitly interpreted as functions of their free names. As a result, a couple of alternative characterizations of pi-congruences are given, each of them in terms of the bisimilarity of one single pair of pi-xsi-processes. In one case, we exploit lambda-closures of processes, thus inducing the effective generation of the substitutions necessary to infer non-ground equivalence. In the other case, a more promising call-by-need discipline for the generation of the wanted substitutions is used. This last strategy is also adopted to show a coincidence result with open semantics. By minor changes, all of the above characterizations for late semantics can be adapted to congruences of the early family.
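
    For reference, the classical infinitary definition that the paper improves on can be stated as follows (standard notation, not necessarily the paper's):

        \[
        P \simeq^{c} Q \quad\iff\quad P\sigma \,\dot{\sim}\, Q\sigma \ \text{ for every name substitution } \sigma,
        \]

    where $\dot{\sim}$ denotes ground (late) bisimilarity; the result above replaces the quantification over all substitutions with a finite, carefully identified set of substitutions.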

    Foundations for Behavioural Model Elaboration Using Modal Transition Systems

    Modal Transition Systems (MTS) are an extension of Labelled Transition Systems (LTS) that have been shown to be useful for reasoning about system behaviour in the context of partial information. MTSs distinguish between required, proscribed and unknown behaviour and come equipped with a notion of refinement that supports incremental modelling, where unknown behaviour is iteratively elaborated into required or proscribed behaviour. A particularly useful notion in the context of software and requirements engineering is that of “merge”. Merging two consistent models is a process that should result in a minimal common refinement of both models, where consistency is defined as the existence of one common refinement. One current limitation of MTS merging is that a complete and correct algorithm for merging has not been developed. Hence, an engineer attempting to merge partial descriptions may be prevented from doing so by overconstrained algorithms, or by algorithms that introduce behaviour that does not follow from the partial descriptions being merged. In this thesis we study the problems of consistency and merge for the existing MTS semantics (strong and weak semantics) and provide a complete characterization of MTS consistency as well as a complete and correct algorithm for MTS merging using these semantics. Strong and weak semantics require MTS models to have the same communicating alphabet, the latter allowing the use of a distinguished unobservable action. In this work we show that the requirement of fixing the alphabet for MTS semantics and the treatment of observable actions are limiting if MTSs are to support incremental elaboration of partial behaviour models. We present a novel observational semantics for MTS, branching alphabet semantics, inspired by branching LTS equivalence, which supports the elaboration of model behaviour including the extension of the alphabet of the system to describe behaviour aspects that previously had not been taken into account. Furthermore, we show that some unintuitive refinements allowed by weak semantics are avoided, and prove a number of theorems that relate branching refinement with alphabet refinement and consistency. These theorems, which do not hold for other semantics, support the argument for considering branching alphabet as a sound semantics for supporting behaviour model elaboration.
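
    As a concrete point of reference for the refinement notion discussed above, the standard strong modal refinement check can be sketched as a greatest-fixpoint computation. The Python below uses our own encoding of an MTS and illustrates only the classical strong semantics, not the branching alphabet semantics introduced in the thesis.

        # An MTS is a pair (may, must) of dicts mapping every state to a set of
        # (action, successor) pairs, with every must-transition also a may-transition.
        def refines(impl, spec, i0, s0):
            """True iff impl (from i0) is a strong modal refinement of spec (from s0):
            impl's possible (may) behaviour is allowed by spec, and spec's required
            (must) behaviour is preserved by impl."""
            i_may, i_must = impl
            s_may, s_must = spec
            rel = {(i, s) for i in i_may for s in s_may}
            changed = True
            while changed:
                changed = False
                for i, s in list(rel):
                    may_ok = all(any(b == a and (i2, s2) in rel for b, s2 in s_may[s])
                                 for a, i2 in i_may[i])
                    must_ok = all(any(b == a and (i2, s2) in rel for b, i2 in i_must[i])
                                  for a, s2 in s_must[s])
                    if not (may_ok and must_ok):
                        rel.discard((i, s))
                        changed = True
            return (i0, s0) in rel

        # spec merely allows 'a' then 'b'; impl requires 'a' and omits 'b' entirely.
        spec_may, spec_must = {"s0": {("a", "s1")}, "s1": {("b", "s1")}}, {"s0": set(), "s1": set()}
        impl_may, impl_must = {"i0": {("a", "i1")}, "i1": set()}, {"i0": {("a", "i1")}, "i1": set()}
        print(refines((impl_may, impl_must), (spec_may, spec_must), "i0", "s0"))  # True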

    Relational reasoning for effects and handlers

    This thesis studies relational reasoning techniques for FRANK, a strict functional language supporting algebraic effects and their handlers, within a general, formalised approach for completely characterising observational equivalence. Algebraic effects and handlers are an emerging paradigm for representing computational effects where primitive operations, which give rise to an effect, are primary, and are given semantics through their interpretation by effect handlers. FRANK is a novel point in the design space because it recasts effect handling as part of a generalisation of call-by-value function application. Furthermore, FRANK generalises unary effect handlers to the n-ary notion of multihandlers, supporting more elegant expression of certain handlers. There have been recent efforts to develop sound reasoning principles, with respect to observational equivalence, for languages supporting effects and handlers. Such techniques support powerful equational reasoning about code, such as substitution of equivalent sub-terms (‘equals for equals’) in larger programs. However, few studies have considered a complete characterisation of observational equivalence and its implications for reasoning techniques. Furthermore, there has been no account of reasoning principles for FRANK programs. Our first contribution is a formal reconstruction of a general proof technique, triangulation, for proving completeness results for observational equivalence. The technique brackets observational equivalence between two structural relations, a logical and an applicative notion. We demonstrate the triangulation proof method for a pure simply-typed λ-calculus. We show that such results are readily formalisable in an implementation of type theory, specifically AGDA, using state-of-the-art technology for dealing with syntaxes with binding. Our second contribution is a calculus, ELLA, capturing the essence of FRANK’s novel design. In particular, ELLA supports binary handlers and generalises function application to incorporate effect handling. We extend our triangulation proof technique to this new setting, completely characterising observational equivalence for this calculus. We report on our partial progress in formalising our extension to ELLA in AGDA. Our final contribution is the application of sound reasoning principles, inspired by existing literature, to a variety of ELLA programs, including a proof of associativity for a canonical pipe multihandler. Moreover, we show how leveraging completeness leads, in certain instances, to simpler proofs of observational equivalence.
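
    To give a flavour of operations being interpreted by a handler (the core idea above), here is a toy Python sketch using generators. It is our own illustration of plain unary handlers over a state effect; it does not model FRANK's multihandlers or the ELLA calculus.

        # Toy sketch: a computation issues operation requests ("get"/"put"),
        # and a handler gives them meaning over a state cell.
        def program():
            x = yield ("get",)        # request the current state
            yield ("put", x + 1)      # request a state update
            return x

        def run_state(comp, state):
            """Interpret the 'get'/'put' requests of a generator-based computation."""
            try:
                request = next(comp)
                while True:
                    if request[0] == "get":
                        request = comp.send(state)
                    else:                      # ("put", new_state)
                        state = request[1]
                        request = comp.send(None)
            except StopIteration as stop:
                return stop.value, state       # (result, final state)

        print(run_state(program(), 41))        # -> (41, 42)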