137 research outputs found

    Scalable Minimization Algorithm for Partial Bisimulation

    We present an efficient algorithm for computing the partial bisimulation preorder and equivalence for labeled transition systems. The partial bisimulation preorder lies between simulation and bisimulation: only a subset of the action set is bisimulated, whereas the remaining actions are simulated. Computing quotients for simulation equivalence is more expensive than for bisimulation equivalence, as for simulation one has to account for the so-called little brothers, which represent classes of states that can simulate other classes. It is known that in the absence of little brother states, (partial bi)simulation and bisimulation coincide, but the complexity of existing minimization algorithms for simulation and bisimulation still does not scale. We therefore developed a minimization algorithm, and an accompanying tool, that scales with respect to the bisimulated action subset. Comment: In Proceedings WS-FMDS 2012, arXiv:1207.184
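    The quotienting idea underlying such minimization can be illustrated with a generic partition-refinement sketch for strong bisimulation. This is an illustrative sketch only, not the paper's scalable algorithm, which additionally tracks the bisimulated action subset and little-brother states:

```python
# Naive partition refinement for strong bisimulation on a finite LTS.
# Illustrative sketch only; not the scalable algorithm from the paper.

def bisimulation_partition(states, transitions):
    """transitions: set of (source, action, target) triples."""
    # Start with a single block containing all states.
    partition = [frozenset(states)]
    while True:
        # Map each state to the index of its current block.
        block_of = {s: i for i, block in enumerate(partition) for s in block}

        # A state's signature: the set of (action, target-block) pairs it can reach.
        def signature(s):
            return frozenset((a, block_of[t]) for (src, a, t) in transitions if src == s)

        # Split every block by signature.
        refined = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            refined.extend(frozenset(g) for g in groups.values())
        if len(refined) == len(partition):  # fixed point: partition is stable
            return refined
        partition = refined
```

    On the three-state system with transitions 0 --a--> 1 and 2 --a--> 1, the refinement stabilizes at two blocks, merging the bisimilar states 0 and 2.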

    Real and stochastic time in process algebras for performance evaluation

    Process algebras are formalisms for abstract modeling of systems for the purpose of qualitative verification and quantitative evaluation. The purpose of verification is to show that the system behaves correctly, e.g., that it does not contain a deadlock or that a state with some desired property is eventually reached. The quantitative or performance evaluation part approximates how well the system will behave, e.g., the average time for a message to get through is 10 time units, or the utilization (percentage of time that something is used) of some machine is 23.5 percent. Originally, process algebras were developed only for qualitative modeling, but gradually they have been extended with time, probabilities, and Markovian (exponential) and generally-distributed stochastic time. The extensions up to stochastic time typically conservatively extended previous well-established theories. However, mostly due to the nature of the underlying (non-)Markovian performance models, the stochastic process algebras were built from scratch. These extensions were carried out as orthogonal extensions of untimed process theories with exponential delays or stochastic clocks. The underlying performance model is obtained by abstracting from the qualitative behavior using some weak behavioral equivalence. The thesis investigates several issues: (1) What is the relationship between discrete real and generally-distributed stochastic time in process theories? (2) Is it possible, and if so, how, to extend timed process theories with stochastic time? (3) Conversely, is it possible, and if so, how, to embed discrete real time in generally-distributed process theories? Additionally, (4) is the abstraction using a weak behavioral equivalence in Markovian process theories (and other modeling formalisms as well) performance-preserving, and is such an approach compositional? Finally, (5) how can we do performance analysis using discrete-time and probabilistic choices?
The thesis is organized as follows. First, we introduce the central concept of a race condition that defines the interaction between stochastic timed delays. We introduce a new type of race condition, which enables the synchronization of stochastic delays with the same sample, as in timed process theories. This gives the basis for the notion of a timed delay in a racing context, which models the expiration of stochastic delays. In this new setting, we define a strong bisimulation relation that deals with the (probabilistic) race condition on a symbolic level. Next, we show how to derive stochastic delays as guarded recursive specifications involving timed delays in a racing context, and we derive a ground-complete stochastic-time process theory. Then, we take the opposite viewpoint and develop a stochastic process theory from scratch, relying on the same interpretation of the race condition. We embed real time in the stochastic-time setting by using context-sensitive interpolation, a restricted notion of time additivity. Afterwards, we turn to Markovian process theories and show compositionality of Markov reward chains with fast and silent transitions with respect to lumping-based and reduction-based aggregation methods. These methods can be used to show preservation of performance measures when eliminating probabilistic choices and nondeterministic silent steps in Markovian process theories. Then, we specify the underlying model of probabilistic timed process theories as a discrete-time probabilistic reward graph and show its transformation to a discrete-time Markov reward chain. The approach is illustrated by extending the environment of the modeling language χ. The developed theories are illustrated by specifying a version of the concurrent alternating bit protocol and analyzing it in the χ toolset.
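The race condition between stochastic delays can be made concrete with a small simulation: two delays race, and the process continues with whichever expires first. The thesis treats general distributions symbolically; the sketch below simply assumes exponential delays so the estimate can be checked against the closed form for exponential races:

```python
import random

# Monte Carlo sketch of a race between two stochastic delays.
# Assumption: both delays are exponentially distributed, so the
# probability that X wins is rate_x / (rate_x + rate_y).

def race_win_probability(rate_x, rate_y, trials=100_000, seed=42):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = rng.expovariate(rate_x)  # sample delay X
        y = rng.expovariate(rate_y)  # sample delay Y
        if x < y:                    # X expires first and wins the race
            wins += 1
    return wins / trials
```

With rates 2.0 and 1.0, the estimate converges to 2/3, matching the analytic winning probability for exponential races.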

    Aggregation methods for Markov reward chains with fast and silent transitions

    Communicating Processes with Data for Supervisory Coordination

    We employ supervisory controllers to safely coordinate the high-level discrete(-event) behavior of distributed components of complex systems. Supervisory controllers observe discrete-event system behavior, make a decision on allowed activities, and communicate the control signals to the involved parties. Models of the supervisory controllers can be synthesized automatically based on formal models of the system components and a formalization of the safe coordination (control) requirements. Based on the obtained models, code generation can be used to implement the supervisory controllers in software, on a PLC, or on an embedded (micro)processor. In this article, we develop a process theory with data that supports a model-based systems engineering framework for supervisory coordination. We employ communication to distinguish between the different flows of information, i.e., observation and supervision, whereas we employ data to specify the coordination requirements more compactly and to increase the expressivity of the framework. To illustrate the framework, we remodel an industrial case study involving coordination of maintenance procedures of a printing process of a high-tech Océ printer. Comment: In Proceedings FOCLASA 2012, arXiv:1208.432

    Strong, Weak and Branching Bisimulation for Transition Systems and Markov Reward Chains: A Unifying Matrix Approach

    We first study labeled transition systems with explicit successful termination. We establish the notions of strong, weak, and branching bisimulation in terms of boolean matrix theory, thus introducing a novel and powerful algebraic apparatus. Next, we consider Markov reward chains, which are standardly presented in real matrix theory. By interpreting the obtained matrix conditions for bisimulations in this setting, we automatically obtain the definitions of strong, weak, and branching bisimulation for Markov reward chains. The obtained strong and weak bisimulations are shown to coincide with some existing notions, while the obtained branching bisimulation is new, but its usefulness is questionable.
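    The matrix view of bisimulation can be sketched concretely: with one boolean transition matrix per action, a relation R is a simulation iff every step on the left can be matched through R on the right, and a bisimulation iff both R and its converse (transpose) are simulations. The sketch below is an illustration of this idea for plain LTSs only; it does not reproduce the paper's exact conditions, which also handle successful termination and the Markov reward chain setting:

```python
import numpy as np

# Boolean-matrix sketch of strong (bi)simulation on a finite LTS.
# T[a] is a boolean matrix with T[a][s, t] == True iff s --a--> t.

def is_simulation(R, T):
    """R[s, t] means 'state t simulates state s'."""
    for Ta in T.values():
        # match[s', t]: some t' exists with R[s', t'] and t --a--> t'
        # (boolean matrix product, computed via integer matmul).
        match = (R.astype(int) @ Ta.T.astype(int)) > 0
        # Require: R[s, t] and s --a--> s'  implies  match[s', t].
        for s, t in zip(*np.nonzero(R)):
            successors = np.nonzero(Ta[s])[0]
            if not match[successors, t].all():
                return False
    return True

def is_bisimulation(R, T):
    # R is a bisimulation iff R and its converse R.T are both simulations.
    return is_simulation(R, T) and is_simulation(R.T, T)
```

    For instance, on the LTS 0 --a--> 1 (with an extra deadlocked state 2), the relation pairing the two deadlocked states 1 and 2 is a bisimulation, while relating 0 to 1 is not even a simulation.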

    Towards a concurrency theory for supervisory control

    In this paper we propose a process-theoretic concurrency model to express supervisory control properties. In light of the present importance of reliable control software, the current workflow of direct conversion from informal specification documents to control software implementations can be improved. A separate modeling step in terms of controllable and uncontrollable behavior of the device under control is desired. We consider the control loop as a feedback model for supervisory control, in terms of the three distinct components of plant, requirements, and supervisor. With respect to the control flow, we consider event-based models as well as state-based ones. We study the process theory TCP as a convenient modeling formalism that includes parallelism, iteration, communication features, and non-determinism. Via structural operational semantics, we relate the terms of TCP to labeled transition systems. We consider the partial bisimulation preorder to express controllability, which is better suited to handle non-determinism than bisimulation-based models. It is shown how precongruence of partial bisimulation can be derived from the format of the deduction rules. The theory of TCP is studied under a finite axiomatization, for which soundness and ground-completeness (modulo iteration) is proved with respect to partial bisimulation. Language-based controllability, as the necessary condition for event-based supervisory control, is expressed in terms of partial bisimulation, and we discuss several drawbacks of the strict event-based approach. State-based control is considered under partial bisimulation as a dependable solution to address non-determinism. An appropriate renaming operator is introduced to address an issue in parallel communication. A case study of automated guided vehicles (AGVs) is modeled using the theory TCP. The latter theory is then extended to include state-based valuations, for which partial bisimulation and an axiomatization are defined.
We consider an extended case study on industrial printers to show the modeling abilities of this extended theory. In our concluding remarks, we sketch a future research path in terms of a new formal language for concurrent control modeling.
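The classical event-based controllability condition that the paper recasts via partial bisimulation says, roughly: in every jointly reachable configuration, any uncontrollable event the plant can perform must also be allowed by the requirement. A minimal sketch over deterministic automata given as transition dictionaries; the representation and function name are illustrative assumptions, not the paper's formalization:

```python
# Sketch of the classical language-based controllability check:
# the requirement is controllable w.r.t. the plant and the uncontrollable
# alphabet if, in every jointly reachable state pair, each uncontrollable
# event enabled by the plant is also enabled by the requirement.

def is_controllable(plant, req, uncontrollable, p0, r0):
    """plant, req: dicts mapping state -> {event: next_state} (deterministic)."""
    visited = set()
    stack = [(p0, r0)]  # explore the synchronous product from the initial pair
    while stack:
        p, r = stack.pop()
        if (p, r) in visited:
            continue
        visited.add((p, r))
        for event, p_next in plant[p].items():
            if event in req[r]:
                stack.append((p_next, req[r][event]))
            elif event in uncontrollable:
                return False  # plant can do an uncontrollable event the requirement refuses
    return True
```

For example, a requirement that disables an uncontrollable "fault" event after a "start" fails the check, whereas one that admits the fault passes it.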

    Partial bisimulation
