7 research outputs found

    Extending Markov Automata with State and Action Rewards

    This presentation introduces the Markov Reward Automaton (MRA), an extension of the Markov automaton that allows the modelling of systems incorporating rewards in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Our models support both rewards that are acquired instantaneously when taking certain transitions (action rewards) and rewards that are based on the duration for which certain conditions hold (state rewards). In addition to introducing the MRA model, we extend the process-algebraic language MAPA to easily specify MRAs. We also provide algorithms for computing the expected reward until reaching one of a given set of goal states, as well as the long-run average reward. We have extended the MAMA tool chain (consisting of the tools SCOOP and IMCA) to implement the reward extension of MAPA and these algorithms.
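    For orientation, the expected reward until reaching a goal set is typically characterised as a Bellman-style fixed point. The following is a generic sketch in assumed notation (not taken from the presentation): goal set G, action rewards r, state-reward rate ρ, exit rates E; it glosses over the side condition that the goal must be reached with probability 1:

    \[
    eR(s) =
    \begin{cases}
    0 & \text{if } s \in G,\\
    \max_{\alpha \in \mathit{Act}(s)} \Big( r(s,\alpha) + \sum_{s'} \mu_{s,\alpha}(s')\, eR(s') \Big) & \text{if } s \text{ is probabilistic},\\
    \rho(s)/E(s) + \sum_{s'} \mathbf{P}(s,s')\, eR(s') & \text{if } s \text{ is Markovian},
    \end{cases}
    \]

    where ρ(s)/E(s) is the expected state reward accrued during the exponentially distributed sojourn in s, and the maximum (or minimum, for the dual objective) resolves the nondeterminism.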

    A Hierarchy of Scheduler Classes for Stochastic Automata

    Stochastic automata are a formal compositional model for concurrent stochastic timed systems, with general distributions and nondeterministic choices. Measures of interest are defined over schedulers that resolve the nondeterminism. In this paper we investigate the power of various theoretically and practically motivated classes of schedulers, considering the classic complete-information view and a restriction to non-prophetic schedulers. We prove a hierarchy of scheduler classes w.r.t. unbounded probabilistic reachability. We find that, unlike Markovian formalisms, stochastic automata distinguish most classes even in this basic setting. Verification and strategy synthesis methods thus face a tradeoff between powerful and efficient scheduler classes. Using lightweight scheduler sampling, we explore this tradeoff and demonstrate a useful approximate verification technique for stochastic automata.
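    To make the idea of lightweight scheduler sampling concrete, here is a minimal Python sketch. It assumes a hypothetical model interface (`initial`, `actions(state)`, `step(state, action, rng)`) and restricts itself to memoryless schedulers identified by 32-bit integers; the scheduler classes studied in the paper are richer (they may observe history, clock values, or even the future):

```python
import hashlib
import random

def choose(sigma, state, options):
    """Resolve nondeterminism deterministically from scheduler id sigma:
    hashing (sigma, state) fixes the choice without storing the scheduler."""
    h = hashlib.sha256(f"{sigma}|{state}".encode()).digest()
    return options[int.from_bytes(h[:4], "big") % len(options)]

def estimate(sigma, model, goal, runs=1000, horizon=10_000):
    """Monte Carlo estimate of the reachability probability under sigma."""
    rng = random.Random(sigma)
    hits = 0
    for _ in range(runs):
        state = model.initial
        for _ in range(horizon):
            if state in goal:
                hits += 1
                break
            options = model.actions(state)
            if not options:
                break  # deadlock
            state = model.step(state, choose(sigma, state, options), rng)
    return hits / runs

def lss_max(model, goal, n_schedulers=100):
    """Sample scheduler ids uniformly; the best estimate is a lower bound
    on the maximal reachability probability over the sampled class."""
    return max(estimate(random.getrandbits(32), model, goal)
               for _ in range(n_schedulers))
```

    The point of the hashing trick is that a scheduler is never stored explicitly: its decision in any state is recomputed on demand from the integer identifier, so large numbers of schedulers can be sampled in constant memory.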

    Modelling and analysis of Markov reward automata

    Costs and rewards are important ingredients for many types of systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a transition; and state rewards, acquired while residing in a state. We present algorithms to optimise three reward functions: the expected cumulative reward until a goal is reached, the expected cumulative reward until a certain time bound, and the long-run average reward. We have implemented these algorithms in the SCOOP/IMCA tool chain and show their feasibility via several case studies.
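    For reference, the long-run average reward is usually defined as the limiting time average of the accumulated reward. In assumed notation (state-reward rate ρ, action reward r, scheduler D; conventions on lim inf versus lim sup vary):

    \[
    \mathit{LRA}^{\max}(s) \;=\; \sup_{D}\; \liminf_{t \to \infty} \frac{1}{t}\, \mathbb{E}_{s,D}\!\left[ \int_0^t \rho(X_u)\,\mathrm{d}u \;+\; \sum_{i \,:\, t_i \le t} r(\alpha_i) \right],
    \]

    where X_u is the state occupied at time u and the i-th transition, taken with action α_i, occurs at time t_i.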

    Probabilistic Bisimulation: Naturally on Distributions

    In contrast to the usual understanding of probabilistic systems as stochastic processes, such systems have recently also been regarded as transformers of probabilities. In this paper, we give a natural definition of strong bisimulation for probabilistic systems corresponding to this view that treats probability distributions as first-class citizens. Our definition applies in the same way to discrete systems as well as to systems with uncountable state and action spaces. Several examples demonstrate that our definition refines the understanding of behavioural equivalences of probabilistic systems. In particular, it solves a long-standing open problem concerning the representation of memoryless continuous time by memory-full continuous time. Finally, we give algorithms for computing this bisimulation not only for finite systems but also for classes of uncountably infinite systems.
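    The "transformers of probabilities" view is easy to illustrate in the fully probabilistic, discrete case. Below is a toy Python sketch with assumed names (the paper itself also handles nondeterminism and uncountable state and action spaces): lifting the one-step transition function pointwise turns an action into a map from distributions over states to distributions over states, and bisimulation can then be defined directly between such distributions.

```python
from collections import defaultdict

def lift(trans, dist, action):
    """Apply `action` to a whole distribution over states.
    `trans[state][action]` maps successor states to probabilities."""
    out = defaultdict(float)
    for state, p in dist.items():
        for succ, q in trans[state][action].items():
            out[succ] += p * q
    return dict(out)

# A three-state example: from s, action a branches uniformly to t and u.
trans = {
    "s": {"a": {"t": 0.5, "u": 0.5}},
    "t": {"a": {"t": 1.0}},
    "u": {"a": {"u": 1.0}},
}
print(lift(trans, {"s": 1.0}, "a"))  # {'t': 0.5, 'u': 0.5}
```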

    Modelling and analysis of Markov reward automata (extended version)

    Costs and rewards are important ingredients for cyber-physical systems, modelling critical aspects like energy consumption, task completion, repair costs, and memory usage. This paper introduces Markov reward automata, an extension of Markov automata that allows the modelling of systems incorporating rewards (or costs) in addition to nondeterminism, discrete probabilistic choice and continuous stochastic timing. Rewards come in two flavours: action rewards, acquired instantaneously when taking a transition; and state rewards, acquired while residing in a state. We present algorithms to optimise three reward functions: the expected cumulative reward until a goal is reached; the expected cumulative reward until a certain time bound; and the long-run average reward. We have implemented these algorithms in the SCOOP/IMCA tool chain and show their feasibility via several case studies.
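    The time-bounded objective, for reference, fixes a horizon T instead of taking a limit; in the same assumed notation as above:

    \[
    R^{\max}(s, T) \;=\; \sup_{D}\; \mathbb{E}_{s,D}\!\left[ \int_0^T \rho(X_u)\,\mathrm{d}u \;+\; \sum_{i \,:\, t_i \le T} r(\alpha_i) \right].
    \]

    A common algorithmic approach, not necessarily the exact scheme used here, is to digitise the horizon into small time steps with a bounded error per step.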