Conditional steady-state bounds for a subset of states in Markov chains
The problem of computing bounds on the conditional steady-state probability vector of a subset of states in finite, ergodic discrete-time Markov chains (DTMCs) is considered. An improved algorithm utilizing the strong stochastic (st-)order is given. On standard benchmarks from the literature and other examples, it is shown that the proposed algorithm performs better than the existing one in the strong stochastic sense. Furthermore, in certain cases the conditional steady-state probability vector of the subset under consideration can be obtained exactly. Copyright 2006 ACM
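To illustrate the strong stochastic (st) order used above: a probability vector y dominates x in the st sense exactly when every tail sum of y is at least the corresponding tail sum of x. A minimal sketch of that check, assuming both vectors are indexed over the same ordered state space (a hypothetical helper, not the paper's bounding algorithm):

```python
def st_dominates(y, x, tol=1e-12):
    """Return True if y >=_st x, i.e. every tail sum of y is at least
    the corresponding tail sum of x (up to floating-point tolerance)."""
    tail_y = tail_x = 0.0
    for yi, xi in zip(reversed(y), reversed(x)):
        tail_y += yi
        tail_x += xi
        if tail_y < tail_x - tol:   # a tail of y falls below that of x
            return False
    return True
```

For example, a vector shifted toward the higher-indexed states dominates one shifted toward the lower-indexed states.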
Computing Quantiles in Markov Reward Models
Probabilistic model checking mainly concentrates on techniques for reasoning about the probabilities of certain path properties or the expected values of certain random variables. For quantitative system analysis, however, there is another interesting type of performance measure, namely quantiles. A typical quantile query takes as input a lower probability bound p and a reachability property. The task is then to compute the minimal reward bound r such that, with probability at least p, the target set is reached before the accumulated reward exceeds r. Quantiles are well known in mathematical statistics, but to the best of our knowledge they have not been addressed by the model-checking community so far.
In this paper, we study the complexity of quantile queries for until properties in discrete-time finite-state Markov decision processes with non-negative rewards on states. We show that qualitative quantile queries can be evaluated in polynomial time and present an exponential algorithm for the evaluation of quantitative quantile queries. For the special case of Markov chains, we show that quantitative quantile queries can be evaluated in time polynomial in the size of the chain and the maximum reward. (Comment: 17 pages, 1 figure; typo in example corrected)
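For the Markov-chain special case described above, the quantile can be found by computing, for increasing reward budgets r, the probability of reaching the goal with accumulated reward at most r, and returning the first r at which that probability reaches p. A hedged sketch under simplifying assumptions (positive integer rewards on non-goal states so the recursion terminates, and a goal reachable with probability at least p); this is an illustration, not the paper's algorithm:

```python
from functools import lru_cache

def quantile(P, reward, goal, start, p):
    """Smallest reward bound r such that the goal set is reached from
    `start` with accumulated reward <= r with probability at least p.
    P is a row-stochastic matrix (list of lists), reward[s] the state
    reward, goal a set of absorbing goal states."""
    @lru_cache(maxsize=None)
    def prob(s, r):
        if s in goal:
            return 1.0          # goal reached, no further reward needed
        if reward[s] > r:
            return 0.0          # leaving s already exceeds the budget
        # accumulate reward[s], then move according to row s of P
        return sum(P[s][t] * prob(t, r - reward[s]) for t in range(len(P)))

    r = 0
    while prob(start, r) < p:   # reach probability is monotone in r
        r += 1
    return r
```

On a two-state chain that loops on the start state with probability 0.5 per step (reward 1) before absorbing in the goal, `quantile([[0.5, 0.5], [0.0, 1.0]], [1, 0], {1}, 0, 0.8)` returns 3.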
Componentwise bounds for nearly completely decomposable Markov chains using stochastic comparison and reordering
This paper presents an improved version of a componentwise bounding algorithm for the state probability vector of nearly completely decomposable Markov chains and, through an application, provides the first numerical results with this type of algorithm. The given two-level algorithm uses aggregation and stochastic comparison with the strong stochastic (st) order. In order to improve accuracy, it employs a reordering of states and a better componentwise probability bounding algorithm given st upper- and lower-bounding probability vectors. Results in sparse storage show that there are cases in which the given algorithm proves to be useful. © 2004 Elsevier B.V. All rights reserved.
Loss rates bounds for IP switches in MPLS networks
Computing the Bounds on the Loss Rates, by J.-M. Fourneau
Abstract: We consider an example network in which we compute bounds on cell loss rates. Deriving stochastic bounds for these loss rates with simple arguments leads to models that are easier to solve. Using stochastic orders, we prove that the loss rates of these simpler models indeed bound those of the original model. For ill-balanced configurations, these models give good estimates of the loss rates.
Stochastic Bounds Applied to the End to End QoS in Communication Systems
End-to-end QoS of communication systems is essential for users, but its performance evaluation is a complex issue. Such systems are usually abstracted as multidimensional Markov processes whose analysis is very difficult, or even intractable, when no specific solution form exists. In this study, we propose an algorithm that automatically derives aggregated Markov processes providing upper and lower bounds on performance measures. We apply the algorithm to the analysis of an open tandem queueing network with rejection in order to derive bounds on performance measures. Parametric aggregation schemes are proposed to compute bounds on loss probabilities and end-to-end mean delays, making possible a tradeoff between the accuracy of the bounds and the size of the Markov chains considered.
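The core aggregation step underlying such schemes can be sketched as follows: collapse the transition matrix onto the blocks of a state partition, weighting the rows inside each block by an assumed distribution. This is only an illustration of plain aggregation with hypothetical weights; the paper's algorithm additionally has to choose the aggregation so the result is a genuine st upper or lower bound:

```python
import numpy as np

def aggregate(P, partition, weights):
    """Collapse a DTMC transition matrix P onto the blocks of a state
    partition. `weights[s]` gives the (assumed) relative weight of
    state s within its block; each block's weights are normalised."""
    k = len(partition)
    Q = np.zeros((k, k))
    for i, block in enumerate(partition):
        w = np.array([weights[s] for s in block], dtype=float)
        w /= w.sum()                      # normalise inside the block
        for j, other in enumerate(partition):
            # weighted probability of jumping from block i into block j
            Q[i, j] = sum(w[a] * P[s, t]
                          for a, s in enumerate(block) for t in other)
    return Q
```

The aggregated matrix is again row stochastic, so it defines a smaller Markov chain whose size/accuracy tradeoff can be tuned through the choice of partition.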