
    A Compositional Protocol Verification Using Relativized Bisimulation

    The purpose of this paper is to illustrate a compositional proof method for communicating systems; that is, a method in which a property P of a complete system is demonstrated by first decomposing the system, then demonstrating properties of the subsystems which are strong enough to entail property P for the complete system. In any compositional proof method, it is essential that one can abstract away the behavioural aspects of the subsystem which are irrelevant in the context of the complete system. Our method is an extension of the well-established notion of bisimulation; it is called relative bisimulation, and was developed specifically to allow for such abstractions. We illustrate the method in a proof of correctness for a version of the Alternating Bit Protocol.
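
    The paper's relativized bisimulation extends ordinary bisimulation with an explicit environment that constrains which behaviours matter; that extension is not reproduced here. As a baseline only, the following sketch computes plain strong bisimilarity on a finite labelled transition system by greatest-fixpoint refinement; the encoding and names are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch: strong bisimilarity on a finite labelled transition system,
# computed as a greatest fixpoint by repeatedly removing pairs of states that
# fail the transfer condition. Encoding and names are illustrative only.

def bisimilar(states, trans, p, q):
    """trans: dict mapping (state, action) -> set of successor states."""
    actions = {a for (_, a) in trans}

    def succ(s, a):
        return trans.get((s, a), set())

    # Start from the full relation and refine it until nothing changes.
    rel = {(s, t) for s in states for t in states}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            ok = True
            for a in actions:
                # Every a-move of s must be matched by some a-move of t, and vice versa.
                if not all(any((s2, t2) in rel for t2 in succ(t, a)) for s2 in succ(s, a)):
                    ok = False
                if not all(any((s2, t2) in rel for s2 in succ(s, a)) for t2 in succ(t, a)):
                    ok = False
            if not ok:
                rel.discard((s, t))
                changed = True
    return (p, q) in rel


# Tiny example: a.b.0 + a.c.0 is not bisimilar to a.(b.0 + c.0).
states = {"P", "P1", "P2", "Q", "Q1", "0"}
trans = {
    ("P", "a"): {"P1", "P2"}, ("P1", "b"): {"0"}, ("P2", "c"): {"0"},
    ("Q", "a"): {"Q1"}, ("Q1", "b"): {"0"}, ("Q1", "c"): {"0"},
}
print(bisimilar(states, trans, "P", "Q"))  # False
```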

    Explicit fairness in testing semantics

    In this paper we investigate fair computations in the pi-calculus. Following Costa and Stirling's approach for CCS-like languages, we consider a method to label process actions in order to filter out unfair computations. We contrast the existing fair-testing notion with those that naturally arise by imposing weak and strong fairness. This comparison provides insight into the expressiveness of the various `fair' testing semantics and into their discriminating power.
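
    Costa and Stirling's action-labelling method itself is not reproduced here. As a loose illustration of how weak and strong fairness filter out computations, the sketch below checks both conditions on an ultimately periodic run given as its repeating cycle, where each position records the enabled actions and the action actually taken; the encoding is a hypothetical simplification.

```python
# A loose sketch of weak vs. strong fairness on an ultimately periodic run,
# represented by its repeating cycle: at each position we record the set of
# enabled actions and the action actually taken. Illustrative encoding only,
# not the labelling method of the paper.

def weakly_fair(cycle):
    """No action may be enabled at *every* cycle position yet never be taken."""
    always_enabled = set.intersection(*(pos["enabled"] for pos in cycle))
    taken = {pos["taken"] for pos in cycle}
    return always_enabled <= taken

def strongly_fair(cycle):
    """No action may be enabled at *some* cycle position yet never be taken."""
    sometimes_enabled = set.union(*(pos["enabled"] for pos in cycle))
    taken = {pos["taken"] for pos in cycle}
    return sometimes_enabled <= taken

# Example: "b" is only intermittently enabled, so ignoring it forever is
# allowed under weak fairness but ruled out under strong fairness.
cycle = [
    {"enabled": {"a"}, "taken": "a"},
    {"enabled": {"a", "b"}, "taken": "a"},
]
print(weakly_fair(cycle), strongly_fair(cycle))  # True False
```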

    Applications of Fair Testing

    In this paper we present the application of the fair testing pre-order, introduced in a previous paper, to the specification and analysis of distributed systems. This pre-order combines some features of the standard testing pre-orders, viz. the possibility of refining a specification by resolving nondeterminism, with a powerful feature of standard observation congruence, viz. the fair abstraction from divergences. Moreover, it is a pre-congruence with respect to all standard process-algebraic combinators, thus allowing for the standard algebraic proof techniques by substitution and rewriting. In this paper we demonstrate the advantages of the fair testing pre-order through its application to a number of examples, including a scheduling problem, a version of the Alternating Bit Protocol, and fair communication channels.
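
    As a purely illustrative companion to the Alternating Bit Protocol example (and not the paper's process-algebraic model or its fair-testing proof), the sketch below simulates ABP over lossy channels, with fairness approximated crudely by a bound on consecutive losses.

```python
import random

# A purely illustrative simulation of the Alternating Bit Protocol over lossy
# channels. Fairness is modelled crudely: a channel never drops more than
# MAX_LOSSES frames in a row, so retransmission is guaranteed to succeed
# eventually. Names and encoding are assumptions for this sketch.

MAX_LOSSES = 3

def lossy(frame, losses):
    """Deliver `frame`, or drop it if the fairness bound allows another loss."""
    if losses < MAX_LOSSES and random.random() < 0.5:
        return None, losses + 1
    return frame, 0

def abp_transfer(messages):
    delivered = []
    send_bit = 0        # sender's alternating bit
    expect_bit = 0      # receiver's expected bit
    for msg in messages:
        data_losses = ack_losses = 0
        while True:
            frame, data_losses = lossy((msg, send_bit), data_losses)
            if frame is not None:
                data, bit = frame
                if bit == expect_bit:          # fresh frame: deliver, flip expectation
                    delivered.append(data)
                    expect_bit ^= 1
                # receiver (re)sends an ack carrying the bit it last accepted
                ack, ack_losses = lossy(expect_bit ^ 1, ack_losses)
                if ack is not None and ack == send_bit:
                    send_bit ^= 1              # acknowledged: move to the next message
                    break
            # data frame or ack lost: timeout, retransmit the same frame
    return delivered

print(abp_transfer(["m1", "m2", "m3"]))  # ['m1', 'm2', 'm3'], despite losses
```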

    Towards a Unified Theory of Timed Automata

    Timed automata are finite-state machines augmented with special clock variables that reflect the advancement of time. Because they can both capture real-time behavior and be verified algorithmically (model-checked), timed automata are used to model real-time systems. These observations have led to the development of several timed-automata verification tools that have been successfully applied to the analysis of a number of different systems; however, the practical utility of timed automata is undermined by the fact that the theories underlying the different tools differ in subtle but important ways. Since algorithmic results that hold for the variant used by one tool may not apply to another variant, applying different tools to different models is complicated. The thesis of this dissertation is this: the theory of timed automata can be unified, and a practical unified approach to timed-automata model checking can be built around the paradigm of proof search. First, this dissertation establishes the mutual expressivity of timed-automata variants, thereby providing precise characterizations of when theoretical results for one variant apply to other variants. Second, it proves powerful expressive properties about different logics for timed behavior and, as a result, enlarges the set of verifiable properties. Third, it discusses an implementation of a verification tool for an expressive fixpoint-based logic, demonstrating an application of this newly developed theory. The tool is based on a proof-search paradigm: verifying timed automata involves constructing proofs using proof rules that translate verification problems into subproblems to be solved. The tool's performance is optimized by using derived proof rules, thereby providing a theoretically sound basis for faster model checking. Last, this dissertation utilizes the proofs generated during verification to gain additional information about the vacuous satisfaction of certain formulae: whether the automaton satisfied a formula by never satisfying certain premises of that specification. This extra information is often obtained without significantly decreasing the verifier's performance.
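
    As a minimal illustration of the ingredients all timed-automata variants share (locations, real-valued clocks, guards and resets), the sketch below steps through a run of a toy timed automaton; it does not implement the dissertation's proof-search model checker, and all names are hypothetical.

```python
# A minimal sketch of a timed-automaton run: locations, real-valued clocks,
# and edges with guards (simple clock bounds) and clock resets. This only
# illustrates the basic model, not the dissertation's proof-search tool.

from dataclasses import dataclass, field

@dataclass
class Edge:
    source: str
    target: str
    action: str
    guard: dict = field(default_factory=dict)   # clock -> (lower, upper) bounds
    resets: set = field(default_factory=set)    # clocks set back to 0

@dataclass
class TimedState:
    location: str
    clocks: dict                                 # clock -> current value

def delay(state, d):
    """Let `d` time units pass: every clock advances uniformly."""
    return TimedState(state.location, {c: v + d for c, v in state.clocks.items()})

def step(state, edge):
    """Take a discrete transition if the guard holds; reset the listed clocks."""
    assert edge.source == state.location
    for clock, (lo, hi) in edge.guard.items():
        if not (lo <= state.clocks[clock] <= hi):
            return None                          # guard violated
    clocks = {c: (0.0 if c in edge.resets else v) for c, v in state.clocks.items()}
    return TimedState(edge.target, clocks)

# Toy automaton: two clicks within 2 time units count as a double click.
edges = [
    Edge("idle", "armed", "click", guard={}, resets={"x"}),
    Edge("armed", "double", "click", guard={"x": (0.0, 2.0)}, resets=set()),
]
s = TimedState("idle", {"x": 0.0})
s = step(s, edges[0])                # first click, reset clock x
s = step(delay(s, 1.5), edges[1])    # second click 1.5 time units later
print(s.location)                    # double
```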

    Encapsulating deontic and branching time specifications

    In this paper, we investigate formal mechanisms that enable designers to decompose specifications (stated in a given logic) into several interacting components in such a way that the composition of these components preserves their encapsulation and internal non-determinism. The preservation of encapsulation (or locality) enables a modular form of reasoning over specifications, while the preservation of internal non-determinism guarantees that the branching-time properties of components are not lost when the entire system is obtained. The basic ideas come from the work of Fiadeiro and Maibaum, where notions from category theory are used to structure logical specifications. As the work of Fiadeiro and Maibaum is stated in a linear temporal logic, here we investigate how to extend these notions to a branching-time logic, which can be used to reason about systems where non-determinism is present. To illustrate the practical applications of these ideas, we introduce deontic operators in our logic and show that the modularization of specifications also allows designers to maintain the encapsulation of deontic prescriptions; this is particularly useful for reasoning about fault-tolerant systems, as we demonstrate with a small example.
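
    The paper's categorical and deontic machinery does not fit in a few lines; as a small reminder of the kind of branching-time properties the decomposition is meant to preserve, the sketch below evaluates two CTL-style operators (EX and EF) over a finite Kripke structure. The encoding is an illustrative assumption, not the paper's logic.

```python
# A small, illustrative evaluator for two branching-time (CTL-style) operators
# over a finite Kripke structure: EX p ("some successor satisfies p") and
# EF p ("p is reachable on some path"). This only hints at the branching-time
# properties preserved by the paper's decomposition; it is not its logic.

def ex(succ, sat):
    """States with at least one successor in `sat`."""
    return {s for s, ts in succ.items() if any(t in sat for t in ts)}

def ef(succ, sat):
    """Least fixpoint: states from which `sat` is reachable."""
    result = set(sat)
    while True:
        frontier = ex(succ, result) - result
        if not frontier:
            return result
        result |= frontier

# Example: a branching system in which only one branch can ever reach `goal`.
succ = {"s0": {"s1", "s2"}, "s1": {"goal"}, "s2": {"s2"}, "goal": set()}
print("s0" in ef(succ, {"goal"}))   # True: some branch reaches the goal
print("s2" in ef(succ, {"goal"}))   # False: the other branch never does
```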

    Modular specifications in process algebra

    In recent years a wide variety of process algebras has been proposed in the literature. Often these process algebras are closely related: they can be viewed as homomorphic images, submodels or restrictions of each other. The aim of this paper is to show how the semantical reality, consisting of a large number of closely related process algebras, can be reflected, and even used, on the level of algebraic specifications and in process verifications. This is done by means of the notion of a module. The simplest modules are building blocks of operators and axioms, each block describing a feature of concurrency in a certain semantical setting. These modules can then be combined by means of a union operator +, an export operator □, which allows one to forget some operators in a module, an operator H, which changes semantics by taking homomorphic images, and an operator S, which takes subalgebras. These operators enable us to combine modules in a subtle way when their direct combination would be inconsistent. We show how auxiliary process algebra operators can be hidden when this is needed. Moreover, it is demonstrated how new process combinators can be defined in terms of the more elementary ones in a clean way. As an illustration of our approach, a methodology is presented that can be used to specify FIFO queues and that facilitates the verification of concurrent systems containing such queues.
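
    As an illustrative rendering (not the paper's formal definitions), the sketch below treats a module as a set of operators plus a set of axioms, each axiom tagged with the operators it mentions, and shows the union operator + together with an export operation standing in for □; the H and S operators are semantic and are not modelled. All concrete names are hypothetical.

```python
from dataclasses import dataclass

# An illustrative rendering of the simplest module operations: a module is a
# set of operators plus a set of axioms, each axiom tagged with the operators
# it mentions. Union (+) combines building blocks; export hides everything
# outside a chosen visible signature. The homomorphism (H) and subalgebra (S)
# operators are semantic and not shown. Names are hypothetical.

@dataclass(frozen=True)
class Module:
    operators: frozenset                       # operator names
    axioms: frozenset                          # pairs (axiom_text, operators_used)

    def __add__(self, other):
        """Union operator +: put two building blocks together."""
        return Module(self.operators | other.operators, self.axioms | other.axioms)

    def export(self, visible):
        """Export: forget operators outside `visible` and axioms that use them."""
        visible = frozenset(visible)
        return Module(self.operators & visible,
                      frozenset((ax, ops) for ax, ops in self.axioms if ops <= visible))

# A basic block plus a communication block, with the auxiliary left-merge
# operator hidden afterwards.
basic = Module(frozenset({"+", ";"}),
               frozenset({("x+y = y+x", frozenset({"+"}))}))
comm = Module(frozenset({"||", "leftmerge"}),
              frozenset({("x||y = leftmerge(x,y) + leftmerge(y,x)",
                          frozenset({"||", "leftmerge", "+"}))}))
combined = (basic + comm).export({"+", ";", "||"})
print(combined.operators)   # the left-merge auxiliary operator is hidden
```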

    Fair Testing

    In this paper we present a solution to the long-standing problem of characterising the coarsest liveness-preserving pre-congruence with respect to a full (TCSP-inspired) process algebra. In fact, we present two distinct characterisations, which give rise to the same relation: an operational one based on a De Nicola-Hennessy-like testing modality which we call should-testing, and a denotational one based on a refined notion of failures. One of the distinguishing characteristics of the should-testing pre-congruence is that it abstracts from divergences in the same way as Milner's observation congruence, and as a consequence is strictly coarser than observation congruence. In other words, should-testing has a built-in fairness assumption. This is in itself a property long sought after; it is in notable contrast to the well-known must-testing of De Nicola and Hennessy (denotationally characterised by a combination of failures and divergences), which treats divergence as catastrophic and hence is incompatible with observation congruence. Due to these characteristics, should-testing supports modular reasoning and allows one to use the proof techniques of observation congruence, but it also supports additional laws and techniques. Moreover, we show decidability of should-testing (on the basis of the denotational characterisation). Finally, we demonstrate its advantages through the application to a number of examples, including a scheduling problem, a version of the Alternating Bit Protocol, and fair lossy communication channels.
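
    A rough finite-state intuition for the contrast drawn above (not the paper's operational or denotational characterisation): given the explicit graph of a process composed with a test, must-style testing demands that every maximal run reach a success state, so divergence is catastrophic, whereas should-style testing only demands that success remain reachable from every reachable state, which is the fair abstraction from divergence. The sketch below checks both conditions; names are hypothetical.

```python
# A rough finite-state intuition for must- vs. should-style testing, checked on
# the explicit graph of a process composed with a test, with `success` states
# marked. Illustrative only, not the paper's characterisations.

def reachable(succ, start):
    """All states reachable from `start` (including `start` itself)."""
    seen, stack = {start}, [start]
    while stack:
        s = stack.pop()
        for t in succ.get(s, set()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def should_pass(succ, start, success):
    """Should-style pass: success stays reachable from every reachable state."""
    return all(reachable(succ, s) & success for s in reachable(succ, start))

def must_pass(succ, start, success):
    """Must-style pass: every maximal run from `start` visits a success state."""
    if start in success:
        return True
    # Explore only through non-success states; hitting success ends a run happily.
    region, stack = {start}, [start]
    while stack:
        s = stack.pop()
        if not succ.get(s, set()):
            return False                    # deadlock before any success
        for t in succ.get(s, set()):
            if t not in success and t not in region:
                region.add(t)
                stack.append(t)
    # A cycle inside the non-success region is a divergence that never reaches
    # success; must-testing treats it as a failure, should-testing does not.
    colour = {s: 0 for s in region}         # 0 unvisited, 1 on stack, 2 done
    def cyclic(s):
        colour[s] = 1
        for t in succ.get(s, set()):
            if t in region and (colour[t] == 1 or (colour[t] == 0 and cyclic(t))):
                return True
        colour[s] = 2
        return False
    return not any(colour[s] == 0 and cyclic(s) for s in region)

# A process that may diverge internally but can always still reach success:
# the should-style check passes (fair abstraction), the must-style check fails.
succ = {"s0": {"s0", "ok"}, "ok": set()}
print(should_pass(succ, "s0", {"ok"}))   # True
print(must_pass(succ, "s0", {"ok"}))     # False
```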

    Self-Similarity Breeds Resilience

    Self-similarity is the property of a system being similar to a part of itself. We posit that a special class of behaviourally self-similar systems exhibits a degree of resilience to adversarial behaviour. We formalise the notions of system, adversary and resilience in operational terms, based on transition systems and observations. While the general problem of proving systems to be behaviourally self-similar is undecidable, we show, by casting them in the framework of well-structured transition systems, that there is an interesting class of systems for which the problem is decidable. We illustrate our prescriptive framework for resilience with some small examples, e.g., systems robust to failures in a fail-stop model and systems avoiding side-channel attacks.
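
    The decidability result above rests on well-structured transition systems and covers far richer models; as a toy finite-state illustration of "behaviourally similar to a part of itself", the sketch below checks whether a system, from its initial state, is simulated by its restriction to a proper subset of its states. Encodings and names are illustrative only.

```python
# A toy finite-state illustration of behavioural self-similarity: is the whole
# system, from its initial state, simulated by its restriction to a proper
# subset of its states? Illustrative encoding, not the paper's framework.

def simulated_by(trans_a, trans_b, states_a, states_b, p0, q0):
    """Greatest simulation: can state q0 of system B mimic p0 of system A?"""
    rel = {(p, q) for p in states_a for q in states_b}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            for (src, act), targets in trans_a.items():
                if src != p:
                    continue
                for p2 in targets:
                    # q must answer this move of p with a matching move of its own.
                    if not any((p2, q2) in rel
                               for q2 in trans_b.get((q, act), set())):
                        rel.discard((p, q))
                        changed = True
                        break
                if (p, q) not in rel:
                    break
    return (p0, q0) in rel

def restrict(trans, keep):
    """The sub-system induced by the states in `keep`."""
    return {(s, a): {t for t in ts if t in keep}
            for (s, a), ts in trans.items() if s in keep}

# A two-state toy system whose behaviour from `p` is already mimicked by the
# part consisting of `q` alone, i.e. the system is similar to a part of itself.
states = {"p", "q"}
trans = {("p", "work"): {"p", "q"}, ("q", "work"): {"q"}}
part = {"q"}
print(simulated_by(trans, restrict(trans, part), states, part, "p", "q"))  # True
```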

    A parametric analysis of the state-explosion problem in model checking

    In model checking, the state-explosion problem occurs when one checks a nonflat system, i.e., a system implicitly described as a synchronized product of elementary subsystems. In this paper, we investigate the complexity of a wide variety of model-checking problems for nonflat systems in the light of parameterized complexity, taking the number of synchronized components as the parameter. We provide precise complexity measures (in the parameterized sense) for most of the problems we investigate, and evidence that the results are robust.
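
    A back-of-the-envelope illustration of why nonflat systems explode: the synchronized product of k components with n local states each has up to n^k global states, even though the nonflat description stays small. The sketch below builds the product of independent components (pure interleaving, with synchronisation constraints omitted for brevity); the encoding is an illustrative assumption.

```python
# State explosion in nonflat systems: the product of k components with n local
# states each has up to n**k global states. For brevity the components here are
# fully independent (pure interleaving, no synchronisation constraints).

from itertools import product

def interleaving_product(components):
    """Explicit global transition relation of independent components.
    Each component is a dict: local_state -> set of local successor states."""
    state_sets = [list(c) for c in components]
    global_states = list(product(*state_sets))
    trans = {}
    for g in global_states:
        succs = set()
        for i, comp in enumerate(components):
            for nxt in comp[g[i]]:
                succs.add(g[:i] + (nxt,) + g[i + 1:])   # move one component
        trans[g] = succs
    return trans

# One 3-state component copied k times: the nonflat description stays tiny,
# while the explicit global state space grows as 3**k.
component = {"idle": {"busy"}, "busy": {"done"}, "done": {"idle"}}
for k in (1, 2, 3, 4):
    trans = interleaving_product([component] * k)
    print(k, len(trans))    # 3, 9, 27, 81 global states
```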