1,872 research outputs found

    Refinement Modal Logic

    In this paper we present refinement modal logic. A refinement is like a bisimulation, except that from the three relational requirements only 'atoms' and 'back' need to be satisfied. Our logic contains a new operator 'all' in addition to the standard modalities 'box' for each agent. The operator 'all' acts as a quantifier over the set of all refinements of a given model. As a variation on a bisimulation quantifier, this refinement operator or refinement quantifier 'all' can be seen as quantifying over a variable not occurring in the formula bound by it. The logic combines the simplicity of multi-agent modal logic with some of the power of monadic second-order quantification. We present a sound and complete axiomatization of multi-agent refinement modal logic. We also present an extension of the logic to the modal mu-calculus, and an axiomatization for the single-agent version of this logic. Examples and applications are also discussed: to software verification and design (the set of agents can also be seen as a set of actions), and to dynamic epistemic logic. We further give detailed results on the complexity of satisfiability, and on succinctness.
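    Read as a hedged sketch (notation and clause names are assumed here, not taken from the paper): a refinement of a pointed model (M,s) is a pointed model (M',s') related to it by a relation R satisfying only the 'atoms' and 'back' clauses of bisimulation, and the quantifier 'all' ranges over these refinements (relativised to an agent, or a set of agents, in the multi-agent case).

        \[
          \textit{atoms: } (s,s') \in R \;\Rightarrow\; V(s) = V'(s'),
          \qquad
          \textit{back: } (s,s') \in R \,\wedge\, s' \rightarrow' t' \;\Rightarrow\; \exists t.\; s \rightarrow t \,\wedge\, (t,t') \in R
        \]
        \[
          M,s \models \forall\varphi
          \quad\text{iff}\quad
          M',s' \models \varphi \ \text{ for every refinement } (M',s') \text{ of } (M,s)
        \]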

    Sigref – A Symbolic Bisimulation Tool Box

    We present a uniform signature-based approach to computing the most popular bisimulations. Our approach is implemented symbolically using BDDs, which enables the handling of very large transition systems. Signatures for the bisimulations are built up from a few generic building blocks, which naturally correspond to efficient BDD operations. Thus, the definition of an appropriate signature is the key to rapid development of algorithms for other types of bisimulation. We provide experimental evidence of the viability of this approach by presenting computational results for many bisimulations on real-world instances. The experiments include cases where our framework efficiently handles state spaces that are far too large for any tool requiring an explicit state space description. This work was partly supported by the German Research Council (DFG) as part of the Transregional Collaborative Research Center "Automatic Verification and Analysis of Complex Systems" (SFB/TR 14 AVACS). See www.avacs.org for more information.
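    As a concrete illustration of the signature idea, the sketch below runs explicit-state signature refinement for strong bisimulation; the tool itself performs these steps symbolically with BDD operations, and all names and data structures here are invented for the example.

        # Explicit-state sketch of signature-based partition refinement for
        # strong bisimulation.  'transitions' is an iterable of
        # (source, action, target) triples; returns a map state -> block id.
        def signature_refinement(states, transitions):
            block = {s: 0 for s in states}               # start from a single block
            while True:
                # signature of s under the current partition: the set of
                # (action, block of target) pairs reachable in one step
                sigs = {s: frozenset() for s in states}
                for (s, a, t) in transitions:
                    sigs[s] = sigs[s] | {(a, block[t])}
                # states with equal signatures form the blocks of the next partition
                sig_ids, new_block = {}, {}
                for s in states:
                    new_block[s] = sig_ids.setdefault(sigs[s], len(sig_ids))
                # every round refines the previous partition, so an unchanged
                # block count means the fixpoint has been reached
                if len(sig_ids) == len(set(block.values())):
                    return new_block
                block = new_block

    Other equivalences are obtained by changing only how the signature is computed, which is the modularity the abstract alludes to.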

    Correct and Efficient Antichain Algorithms for Refinement Checking

    The notion of refinement plays an important role in software engineering. It is the basis of a stepwise development methodology in which the correctness of a system can be established by proving, or computing, that the system refines its specification. Wang et al. describe algorithms based on antichains for efficiently deciding trace refinement, stable failures refinement and failures-divergences refinement. We identify several issues pertaining to the soundness and performance of these algorithms and propose new, correct, antichain-based algorithms. Using a number of experiments we show that our algorithms outperform the original ones in terms of running time and memory usage. Furthermore, we show that additional run-time improvements can be obtained by applying divergence-preserving branching bisimulation minimisation.
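    As a minimal explicit-state sketch of the antichain idea for the simplest of the three relations, trace refinement (traces(impl) ⊆ traces(spec)), the code below assumes there are no internal actions; the state representation and names are invented for the example, and the paper's algorithms additionally cover stable failures and failures-divergences refinement.

        from collections import deque

        def trace_refines(impl, spec, impl_init, spec_init):
            """Check traces(impl) <= traces(spec); 'impl' and 'spec' map each
            state to a dict {action: set of successor states}."""
            def subsumed(s, specs, antichain):
                # (s, specs) is redundant if some (s, S) with S <= specs is kept:
                # any refinement failure reachable from the larger spec-state set
                # is also reachable from the smaller one.
                return any(S <= specs for S in antichain.get(s, []))

            init = (impl_init, frozenset([spec_init]))
            antichain = {impl_init: [init[1]]}   # impl state -> minimal spec-state sets
            work = deque([init])
            while work:
                s, specs = work.popleft()
                for a, succs in impl.get(s, {}).items():
                    # spec states reachable by the same action from any state in specs
                    specs2 = frozenset(u for t in specs for u in spec.get(t, {}).get(a, ()))
                    if not specs2:
                        return False             # impl performs 'a' where spec cannot
                    for s2 in succs:
                        if not subsumed(s2, specs2, antichain):
                            # keep only the minimal sets for each impl state
                            antichain[s2] = [S for S in antichain.get(s2, []) if not specs2 <= S]
                            antichain[s2].append(specs2)
                            work.append((s2, specs2))
            return True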

    Distributed Branching Bisimulation Minimization by Inductive Signatures

    We present a new distributed algorithm for state space minimization modulo branching bisimulation. Like its predecessor, it uses signatures for refinement, but the refinement process and the signatures have been optimized to exploit the fact that the input graph contains no tau-loops. The optimization of the refinement process is meant to reduce both the number of iterations needed and the memory requirements. For the former we cannot prove that there is an improvement, but our experiments show that in many cases the number of iterations is smaller. For the latter, we can prove that the worst-case memory use of the new algorithm is linear in the size of the state space, whereas the old algorithm has a quadratic upper bound. The paper includes a proof of correctness of the new algorithm and the results of a number of experiments that compare the performance of the old and the new algorithms.
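    For orientation, the signature commonly used for branching bisimulation in this line of work (recalled here in a generic form that may differ in detail from the paper's inductive variant) collects the steps reachable through inert tau-paths:

        \[
          \mathrm{sig}_\pi(s) \;=\; \bigl\{\, (a,\pi(t)) \;\bigm|\;
            s \xrightarrow{\tau} \cdots \xrightarrow{\tau} s' \xrightarrow{a} t,\ \
            \text{every state on the } \tau\text{-path lies in } \pi(s),\ \
            a \neq \tau \,\vee\, \pi(t) \neq \pi(s) \,\bigr\}
        \]

    States with equal signatures are merged and the computation is iterated to a fixpoint, as in the strong case; the optimizations described above exploit the absence of tau-loops when evaluating such signatures.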

    Probabilistic Bisimulations for PCTL Model Checking of Interval MDPs

    Verification of PCTL properties of MDPs with convex uncertainties has recently been investigated by Puggelli et al. However, model checking algorithms typically suffer from state space explosion. In this paper, we use probabilistic bisimulation to reduce the size of such an MDP while preserving the PCTL properties it satisfies. We discuss the different interpretations of uncertainty in these models that have been studied in the literature and that result in two different definitions of bisimulation. We give algorithms to compute the quotients of these bisimulations in time polynomial in the size of the model and exponential in the uncertain branching. Finally, we show by a case study that large models in practice can have small branching and that a substantial state space reduction can be achieved by our approach. (In Proceedings SynCoP 2014, arXiv:1403.784)
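    For orientation, the uncertainty model assumed in this sketch constrains each transition probability to an interval rather than fixing it (the paper's exact formulation of the uncertainty sets may differ):

        \[
          \check{P}(s,a,s') \;\le\; P(s,a,s') \;\le\; \hat{P}(s,a,s'),
          \qquad
          \sum_{s'} P(s,a,s') \;=\; 1 .
        \]

    Loosely, a probabilistic bisimulation then relates states whose sets of admissible distributions agree block-wise; the two interpretations of uncertainty mentioned above give rise to two precise versions of this requirement.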

    A Polynomial Time Algorithm for Deciding Branching Bisimilarity on Totally Normed BPA

    Strong bisimilarity on normed BPA is polynomial-time decidable, while weak bisimilarity on totally normed BPA is NP-hard. It is natural to ask where the computational complexity of branching bisimilarity on totally normed BPA lies. This paper confirms that the problem is polynomial-time decidable. To our knowledge, in the presence of silent transitions, this is the first bisimilarity checking algorithm on infinite-state systems that runs in polynomial time. The result exhibits an instance in which branching bisimilarity and weak bisimilarity are both decidable but lie in different complexity classes (unless NP = P), which was not known before. The algorithm takes the partition refinement approach, and the final implementation can be thought of as a generalization of the previous algorithm of Czerwiński and Lasota. Unexpectedly, however, the correctness of the algorithm cannot be directly generalized from previous works, and the correctness proof turns out to be subtle. The proof depends on the existence of a carefully defined refinement operation fitted to our algorithm and on a number of newly developed techniques that differ considerably from previous works. (32 pages)

    Compositional Performance Modelling with the TIPPtool

    Stochastic process algebras have been proposed as compositional specification formalisms for performance models. In this paper, we describe a tool that aims to realise all the beneficial aspects of compositional performance modelling: the TIPPtool. It incorporates methods for compositional specification as well as for solution, based on state-of-the-art techniques and wrapped in a user-friendly graphical front end. Apart from highlighting the general benefits of the tool, we also discuss some lessons learned during the development and application of the TIPPtool. A non-trivial model of a real-life communication system serves as a case study to illustrate benefits and limitations.