20 research outputs found

    Efficient Modelling and Generation of Markov Automata (extended version)

    This paper introduces a framework for the efficient modelling and generation of Markov automata. It consists of (1) the data-rich process-algebraic language MAPA, allowing concise modelling of systems with nondeterminism, probability and Markovian timing; (2) a restricted form of the language, the MLPPE, enabling easy state space generation and parallel composition; and (3) several syntactic reduction techniques on the MLPPE format, for generating equivalent but smaller models. Technically, the framework relies on an encoding of MAPA into the existing prCRL language for probabilistic automata. First, we identify a class of transformations on prCRL that can be lifted to the Markovian realm using our encoding. Then, we employ this result to reuse prCRL's linearisation procedure to transform any MAPA specification to an equivalent MLPPE, and to lift three prCRL reduction techniques to MAPA. Additionally, we define two novel reduction techniques for MLPPEs. All our techniques treat data as well as Markovian and interactive behaviour in a fully symbolic manner, working on specifications instead of models and thus reducing state spaces prior to their construction. The framework has been implemented in our tool SCOOP, and a case study on polling systems and mutual exclusion protocols shows its practical applicability.
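
    To make the combination of ingredients concrete, the following is a minimal sketch of a Markov automaton as a plain data structure, assuming a Python-style representation; it is not MAPA or MLPPE syntax, and all names (MarkovAutomaton, the 'send' example) are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not MAPA syntax): a Markov automaton combining
# nondeterminism, discrete probability, and Markovian (exponential) rates.
from dataclasses import dataclass, field

@dataclass
class MarkovAutomaton:
    states: set
    initial: str
    # interactive transitions: state -> action -> list of distributions,
    # where each distribution maps target states to probabilities
    interactive: dict = field(default_factory=dict)
    # Markovian transitions: state -> {target: rate}, delays are exponential
    markovian: dict = field(default_factory=dict)

# Tiny example: after the action 'send', a message is lost with probability
# 0.1; successful delivery takes an exponential delay with rate 3.0.
ma = MarkovAutomaton(
    states={"idle", "sending", "lost", "done"},
    initial="idle",
    interactive={"idle": {"send": [{"sending": 0.9, "lost": 0.1}]}},
    markovian={"sending": {"done": 3.0}},
)
```

    In this representation, nondeterminism appears as multiple actions (or multiple distributions per action) in a state, discrete probability inside each distribution, and Markovian timing as rates on the timed transitions.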

    Confluence reduction for Markov automata

    Markov automata are a novel formalism for specifying systems exhibiting nondeterminism, probabilistic choices and Markovian rates. Recently, the process algebra MAPA was introduced to efficiently model such systems. As always, the state space explosion threatens the analysability of the models generated by such specifications. We therefore introduce confluence reduction for Markov automata, a powerful reduction technique to keep these models small. We define the notion of confluence directly on Markov automata, and discuss how to syntactically detect confluence on the MAPA language as well. That way, Markov automata generated by MAPA specifications can be reduced on-the-fly while preserving divergence-sensitive branching bisimulation. Three case studies demonstrate the significance of our approach, with reductions in analysis time up to an order of magnitude.
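
    The sketch below, under our own simplifying assumptions, only illustrates the on-the-fly idea: before a state is expanded, transitions already identified as confluent are followed to a single representative, so only representatives enter the state space. How confluence is actually detected (syntactically, on MAPA specifications) is not shown, and the helper names are hypothetical.

```python
# Illustrative sketch of on-the-fly reduction: only representative states,
# reached by exhaustively following confluent transitions, are explored.

def representative(state, confluent_successor):
    """Follow confluent transitions until none remain (assumed terminating)."""
    seen = {state}
    while True:
        nxt = confluent_successor(state)
        if nxt is None or nxt in seen:   # no confluent step left, or a cycle
            return state
        seen.add(nxt)
        state = nxt

def generate(initial, successors, confluent_successor):
    """Simple worklist exploration over representatives only."""
    init = representative(initial, confluent_successor)
    explored, queue = {init}, [init]
    while queue:
        s = queue.pop()
        for t in successors(s):
            r = representative(t, confluent_successor)
            if r not in explored:
                explored.add(r)
                queue.append(r)
    return explored
```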

    Extending Markov Automata with State and Action Rewards

    This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that are based on the duration that certain conditions hold (state rewards). In addition to introducing the MRA model, we extend the process-algebraic language MAPA to easily specify MRAs. Also, we provide algorithms for computing the expected reward until reaching a given set of goal states, as well as the long-run average reward. We extended the MAMA tool chain (consisting of the tools SCOOP and IMCA) to implement the reward extension of MAPA and these algorithms.
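
    In our own (assumed) notation, the two reward flavours contribute along a finite path roughly as follows, with state rewards weighted by sojourn times and action rewards added per transition:

```latex
% Our notation (an assumption, not necessarily the paper's): the reward
% accumulated along a finite path that spends time t_i in state s_i and then
% takes action a_i, with state-reward rate rho and action reward r:
\[
  \mathit{rew}(\pi) \;=\; \sum_{i=0}^{n-1} \bigl( \rho(s_i)\,t_i + r(a_i) \bigr)
\]
```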

    A Definition Scheme for Quantitative Bisimulation

    FuTS, state-to-function transition systems, are generalisations of labelled transition systems and of familiar quantitative semantical models such as continuous-time Markov chains, interactive Markov chains, and Markov automata. A general scheme for the definition of a notion of strong bisimulation associated with a FuTS is proposed. It is shown that this notion of bisimulation for a FuTS coincides with the coalgebraic notion of behavioural equivalence associated with the functor on Set given by the type of the FuTS. For a series of concrete quantitative semantical models, the notion of bisimulation as reported in the literature is proven to coincide with the notion of quantitative bisimulation obtained from the scheme. The comparison includes models with orthogonal behaviour, like interactive Markov chains, and with multiple levels of behaviour, like Markov automata. As a consequence of the general result relating FuTS bisimulation and behavioural equivalence, we obtain, in a systematic way, a coalgebraic underpinning of all quantitative bisimulations discussed. Comment: In Proceedings QAPL 2015, arXiv:1509.0816
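
    In simplified form, and in our own notation for a FuTS with one function component per label into a commutative semiring, the scheme's bisimulation condition can be rendered as follows; the paper's general scheme covers richer combinations of components.

```latex
% Simplified rendering: a FuTS assigns to each state s and label a a function
% theta_a(s) : S -> R into a commutative semiring R. An equivalence relation B
% on S is a (strong) bisimulation iff for all (s,t) in B, every label a and
% every equivalence class C of B, the total weights into C agree:
\[
  \sum_{s' \in C} \theta_a(s)(s') \;=\; \sum_{t' \in C} \theta_a(t)(t')
\]
```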

    MeGARA: Menu-based Game Abstraction and Abstraction Refinement of Markov Automata

    Markov automata combine continuous time, probabilistic transitions, and nondeterminism in a single model. They represent an important and powerful way to model a wide range of complex real-life systems. However, such models tend to be large and difficult to handle, making abstraction and abstraction refinement necessary. In this paper we present an abstraction and abstraction refinement technique for Markov automata, based on the game-based and menu-based abstraction of probabilistic automata. First experiments show that a significant reduction in size is possible using abstraction. Comment: In Proceedings QAPL 2014, arXiv:1406.156
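
    As a rough illustration only, the overall refine-until-tight loop can be sketched as below; the menu-based game construction itself is not reproduced, and every helper passed in is a placeholder assumption of this sketch.

```python
# Generic sketch of an abstraction-refinement loop (our simplification).
def abstraction_refinement(build_abstraction, solve_game, refine,
                           model, partition, tolerance):
    while True:
        game = build_abstraction(model, partition)   # abstract game over blocks
        lower, upper = solve_game(game)              # bounds on the true value
        if upper - lower <= tolerance:               # bounds tight enough: done
            return lower, upper
        partition = refine(model, partition, game)   # split imprecise blocks
```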

    Bisimulation of Labelled State-to-Function Transition Systems Coalgebraically

    Labelled state-to-function transition systems, FuTS for short, are characterised by transitions that relate states to functions of states over general semirings, equipped with a rich set of higher-order operators. As such, FuTS constitute a convenient modelling instrument to deal with process languages and their quantitative extensions in particular. In this paper, the notion of bisimulation induced by a FuTS is addressed from a coalgebraic point of view. A correspondence result is established stating that FuTS-bisimilarity coincides with behavioural equivalence of the associated functor. As generic examples, the equivalences underlying substantial fragments of major quantitative process algebras are related to the bisimilarity of specific FuTS. The examples range from a stochastic process language, PEPA, to a language for Interactive Markov Chains, IML, a (discrete) timed process language, TPC, and a language for Markov Automata, MAL. By the correspondence result, a coalgebraic justification of the equivalences of these calculi is obtained. The specific selection of languages, besides covering a large variety of process interaction models and modelling choices involving quantities, allows us to show different classes of FuTS, namely so-called simple FuTS, combined FuTS, nested FuTS, and general FuTS.
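
    As an illustration of the 'state-to-function' idea (our own encoding, not the paper's formalisation), a simple FuTS can be represented with finitely supported functions into a semiring, here the non-negative reals:

```python
# Illustrative encoding (assumption): a simple FuTS maps each label and state
# to a function from states into a semiring, represented as a dict with
# finite support; values outside the dict are implicitly the semiring zero.
FuTS = dict  # label -> state -> {target: value in the semiring}

# CTMC-style example over the non-negative reals with addition: from "up" a
# single "delay"-labelled step assigns rate 2.0 to "down", and vice versa.
ctmc_like: FuTS = {
    "delay": {
        "up":   {"down": 2.0},
        "down": {"up": 5.0},
    }
}

def weight_to_class(futs: FuTS, label: str, state: str, cls: set) -> float:
    """Total semiring value (here: sum of rates) from `state` into class `cls`."""
    f = futs.get(label, {}).get(state, {})
    return sum(v for t, v in f.items() if t in cls)
```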

    Modelling, reduction and analysis of Markov automata (extended version)

    Markov automata (MA) constitute an expressive continuous-time compositional modelling formalism. They appear as semantic backbones for engineering frameworks including dynamic fault trees, Generalised Stochastic Petri Nets, and AADL. Their expressive power has thus far precluded them from effective analysis by probabilistic (and statistical) model checkers, stochastic game solvers, or analysis tools for Petri net-like formalisms. This paper presents the foundations and underlying algorithms for efficient MA modelling, reduction using static analysis, and most importantly, quantitative analysis. We also discuss implementation pragmatics of supporting tools and present several case studies demonstrating feasibility and usability of MA in practice.

    Analysis of Timed and Long-Run Objectives for Markov Automata

    Markov automata (MAs) extend labelled transition systems with random delays and probabilistic branching. Action-labelled transitions are instantaneous and yield a distribution over states, whereas timed transitions impose a random delay governed by an exponential distribution. MAs are thus a nondeterministic variation of continuous-time Markov chains. MAs are compositional and are used to provide a semantics for engineering frameworks such as (dynamic) fault trees, (generalised) stochastic Petri nets, and the Architecture Analysis & Design Language (AADL). This paper considers the quantitative analysis of MAs with respect to three objectives: expected time, long-run average, and timed (interval) reachability. Expected time objectives focus on determining the minimal (or maximal) expected time to reach a set of states. Long-run objectives determine the fraction of time spent in a set of states when considering an infinite time horizon. Timed reachability objectives concern the probability of reaching a set of states within a given time interval. This paper presents the foundations and details of the algorithms and their correctness proofs. We report on several case studies conducted using a prototypical tool implementation of the algorithms, driven by the MAPA modelling language for efficiently generating MAs. Comment: arXiv admin note: substantial text overlap with arXiv:1305.705
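
    As a hedged sketch of the expected-time objective (our simplification, not necessarily the paper's algorithm), a Bellman-style value iteration can be set up as follows: Markovian states contribute their mean sojourn time, while interactive states take the minimum over enabled actions and consume no time. The function below assumes the goal is reached with probability 1 under every scheduler; otherwise values diverge.

```python
# Hedged sketch: value iteration for the minimal expected time to reach a
# goal set in a closed MA. Under maximal progress, states with enabled
# actions are treated as interactive and take no time.

def expected_time(interactive, markovian, goal, iterations=1000):
    """interactive: state -> {action: {target: probability}}
       markovian:   state -> {target: rate}
       goal:        set of goal states
       Returns an approximation of the minimal expected time per state."""
    states = set(goal) | set(interactive) | set(markovian)
    for succ in list(markovian.values()) + \
            [d for acts in interactive.values() for d in acts.values()]:
        states |= set(succ)
    v = {s: 0.0 for s in states}
    for _ in range(iterations):
        new = {}
        for s in states:
            if s in goal:
                new[s] = 0.0
            elif s in interactive:   # nondeterminism resolved by minimisation
                new[s] = min(sum(p * v[t] for t, p in dist.items())
                             for dist in interactive[s].values())
            elif s in markovian:     # exponential delay with exit rate E(s)
                exit_rate = sum(markovian[s].values())
                new[s] = 1.0 / exit_rate + sum(
                    (r / exit_rate) * v[t] for t, r in markovian[s].items())
            else:                    # non-goal deadlock: goal unreachable
                new[s] = float("inf")
        v = new
    return v
```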

    Modelling and analysis of Markov reward automata

    Costs and rewards are important ingredients for many types of systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a transition; and state rewards, acquired while residing in a state. We present algorithms to optimise three reward functions: the expected cumulative reward until a goal is reached, the expected cumulative reward until a certain time bound, and the long-run average reward. We have implemented these algorithms in the SCOOP/IMCA tool chain and show their feasibility via several case studies.
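
    In our own simplified notation (an assumption, ignoring rewards on Markovian transitions), the first of these reward functions, the expected cumulative reward until reaching a goal set G, admits a Bellman-style characterisation along the following lines:

```latex
% Sketch (our notation): minimal expected cumulative reward v(s) before
% reaching the goal set G, with state-reward rate rho, action rewards r,
% exit rate E(s) and branching probabilities P for Markovian states, and
% distributions mu_a for interactive states.
\[
  v(s) =
  \begin{cases}
    0 & \text{if } s \in G,\\[2pt]
    \dfrac{\rho(s)}{E(s)} + \sum_{s'} P(s,s')\, v(s') & \text{if } s \text{ is Markovian},\\[6pt]
    \min_{a} \bigl( r(a) + \sum_{s'} \mu_a(s')\, v(s') \bigr) & \text{if } s \text{ is interactive}.
  \end{cases}
\]
```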