4,420 research outputs found

    Imprecise Markov Models for Scalable and Robust Performance Evaluation of Flexi-Grid Spectrum Allocation Policies

    The possibility of flexibly assigning spectrum resources with channels of different sizes greatly improves the spectral efficiency of optical networks, but can also lead to unwanted spectrum fragmentation. We study this problem in a scenario where traffic demands are categorised into two types (low or high bit-rate) by assessing the performance of three allocation policies. Our first contribution consists of exact Markov chain models for these allocation policies, which allow us to numerically compute the relevant performance measures. However, these exact models do not scale to large systems, in the sense that the computations required to determine the blocking probabilities---which measure the performance of the allocation policies---become intractable. In order to address this, we first extend an approximate reduced-state Markov chain model that is available in the literature to the three considered allocation policies. These reduced-state Markov chain models allow us to tractably compute approximations of the blocking probabilities, but the accuracy of these approximations cannot be easily verified. Our main contribution then is the introduction of reduced-state imprecise Markov chain models that allow us to derive guaranteed lower and upper bounds on blocking probabilities, for the three allocation policies separately or for all possible allocation policies simultaneously. Comment: 16 pages, 7 figures, 3 tables.
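
    As a toy illustration of the exact-chain approach (not the paper's models): the sketch below builds the generator of a single-link CTMC with S spectrum slots and two demand types (1 slot vs. 2 slots) under a complete-sharing policy that ignores slot contiguity, solves for the stationary distribution, and reads off per-type blocking probabilities. All rates and the slot count are assumed.

    ```python
    import itertools
    import numpy as np

    S = 8                      # total spectrum slots (assumed)
    lam = {1: 1.0, 2: 0.5}     # arrival rates per demand type (assumed)
    mu = {1: 1.0, 2: 1.0}      # service rates per demand type (assumed)

    # Enumerate states (n1, n2) with n1 + 2*n2 <= S.
    states = [(n1, n2) for n1 in range(S + 1) for n2 in range(S // 2 + 1)
              if n1 + 2 * n2 <= S]
    index = {s: i for i, s in enumerate(states)}

    Q = np.zeros((len(states), len(states)))
    for (n1, n2), i in index.items():
        if n1 + 2 * n2 + 1 <= S:               # type-1 arrival fits
            Q[i, index[(n1 + 1, n2)]] += lam[1]
        if n1 + 2 * n2 + 2 <= S:               # type-2 arrival fits
            Q[i, index[(n1, n2 + 1)]] += lam[2]
        if n1 > 0:                             # type-1 departure
            Q[i, index[(n1 - 1, n2)]] += n1 * mu[1]
        if n2 > 0:                             # type-2 departure
            Q[i, index[(n1, n2 - 1)]] += n2 * mu[2]
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Stationary distribution: solve pi Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(len(states))])
    b = np.zeros(len(states) + 1); b[-1] = 1.0
    pi = np.linalg.lstsq(A, b, rcond=None)[0]

    # A type-k arrival is blocked in states where k more slots do not fit.
    for k in (1, 2):
        blocked = sum(pi[i] for (n1, n2), i in index.items()
                      if n1 + 2 * n2 + k > S)
        print(f"blocking probability, type {k}: {blocked:.4f}")
    ```

    Because this toy model tracks only occupancy counts rather than slot positions, it sidesteps exactly the fragmentation effects that make the paper's exact models large.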

    A Markov Chain Model Checker

    Markov chains are widely used in the context of performance and reliability evaluation of systems of various nature. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both the discrete [17,6] and the continuous time setting [4,8]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker $E \vdash MC^2$, where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on first successful applications of the tool to non-trivial examples, highlighting lessons learned during development and application of $E \vdash MC^2$.
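
    On the discrete-time side, the core computation behind such checkers is evaluating probabilistic bounded-until formulas by value iteration. A minimal sketch, assuming a toy DTMC and labellings (this is not the tool's implementation):

    ```python
    import numpy as np

    # Toy DTMC and atomic-proposition labellings (all assumed).
    P = np.array([[0.5, 0.5, 0.0],
                  [0.2, 0.3, 0.5],
                  [0.0, 0.0, 1.0]])
    a = np.array([True, True, False])   # states labelled a
    b = np.array([False, False, True])  # states labelled b

    def bounded_until(P, a, b, k):
        """Per-state probability of satisfying a U<=k b, by value iteration."""
        x = b.astype(float)              # after 0 steps, only b-states satisfy
        for _ in range(k):
            # b-states stay satisfied; a-states take one expected step; rest fail.
            x = np.where(b, 1.0, np.where(a, P @ x, 0.0))
        return x

    print(bounded_until(P, a, b, k=4))
    ```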

    Reduced Complexity Filtering with Stochastic Dominance Bounds: A Convex Optimization Approach

    This paper uses stochastic dominance principles to construct upper and lower sample-path bounds for Hidden Markov Model (HMM) filters. Given an HMM, by using convex optimization methods for nuclear norm minimization with copositive constraints, we construct low-rank stochastic matrices so that the optimal filters using these matrices provably lower- and upper-bound (with respect to a partially ordered set) the true filtered distribution at each time instant. Since these matrices are low rank (say $R$), the computational cost of evaluating the filtering bounds is $O(XR)$ instead of $O(X^2)$. A Monte-Carlo importance sampling filter is presented that exploits these upper and lower bounds to estimate the optimal posterior. Finally, using the Dobrushin coefficient, explicit bounds are given on the variational norm between the true posterior and the upper and lower bounds.
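
    A minimal sketch of the filtering recursion these bounds wrap around, plus a first-order stochastic dominance check on filtered distributions; the bounding matrices themselves (obtained in the paper via nuclear norm minimization with copositive constraints) are not constructed here, and all model parameters are assumed.

    ```python
    import numpy as np

    # Toy HMM (assumed): P = transitions, B[i, y] = P(obs y | state i).
    P = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
    B = np.array([[0.9, 0.1],
                  [0.2, 0.8]])

    def filter_update(pi, P, B, y):
        """One optimal-filter step: predict with P, then correct with observation y."""
        unnorm = B[:, y] * (P.T @ pi)
        return unnorm / unnorm.sum()

    def dominates(p, q):
        """First-order stochastic dominance: the CDF of p lies below the CDF of q."""
        return bool(np.all(np.cumsum(p) <= np.cumsum(q) + 1e-12))

    pi = np.array([0.5, 0.5])
    for y in [0, 1, 1, 0]:               # an assumed observation sequence
        pi = filter_update(pi, P, B, y)
    print("filtered posterior:", pi)
    # With bounding matrices P_lo, P_up in hand, one would run the same recursion
    # with each and check dominance against the true filtered distribution per step.
    ```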

    A tool for model-checking Markov chains

    Markov chains are widely used in the context of the performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker $E \vdash MC^2$, where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of $E \vdash MC^2$.
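
    Complementing the discrete-time sketch above, the continuous-time side of such a checker typically evaluates time-bounded reachability via uniformisation. A hedged sketch with an assumed toy generator (again, not the tool's implementation):

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Toy CTMC generator; state 2 is the absorbing goal (all values assumed).
    Q = np.array([[-2.0,  2.0, 0.0],
                  [ 1.0, -3.0, 2.0],
                  [ 0.0,  0.0, 0.0]])
    b = np.array([False, False, True])

    q = float(max(-np.diag(Q)))          # uniformisation rate
    P = np.eye(len(Q)) + Q / q           # uniformised DTMC

    def csl_bounded_until(P, b, q, t, eps=1e-8):
        """Per-state Pr[ true U<=t b ], via Poisson-weighted DTMC iterations."""
        N = int(poisson.ppf(1 - eps, q * t)) + 1   # truncate the Poisson tail
        x = b.astype(float)
        total = poisson.pmf(0, q * t) * x
        for n in range(1, N + 1):
            x = np.where(b, 1.0, P @ x)            # b-states made absorbing
            total += poisson.pmf(n, q * t) * x
        return total

    print(csl_bounded_until(P, b, q, t=1.5))
    ```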

    Quantitative Approximation of the Probability Distribution of a Markov Process by Formal Abstractions

    The goal of this work is to formally abstract a Markov process evolving in discrete time over a general state space as a finite-state Markov chain, with the objective of precisely approximating its state probability distribution in time, which then allows for the approximate but faster computation of that distribution via the Markov chain. The approach is based on formal abstractions and employs an arbitrary finite partition of the state space of the Markov process, and the computation of average transition probabilities between partition sets. The abstraction technique is formal, in that it comes with guarantees on the introduced approximation that depend on the diameters of the partitions: as such, they can be tuned at will. Further, in the case of Markov processes with unbounded state spaces, a procedure for precisely truncating the state space within a compact set is provided, together with an error bound that depends on the asymptotic properties of the transition kernel of the original process. The overall abstraction algorithm, which practically hinges on piecewise constant approximations of the density functions of the Markov process, is extended to higher-order function approximations: these can lead to improved error bounds and associated lower computational requirements. The approach is practically tested to compute probabilistic invariance of the Markov process under study, and is compared to a known alternative approach from the literature. Comment: 29 pages, Logical Methods in Computer Science.
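
    A minimal sketch of the partition-based abstraction idea under simplifying assumptions: a process on [0, 1] with an assumed Gaussian transition density is abstracted into an n-state chain using piecewise-constant (midpoint) averages of the kernel, and the abstract chain is then used to push a distribution forward in time. The paper's error bounds are not computed here.

    ```python
    import numpy as np

    # Abstract a process on [0, 1] with Gaussian kernel t(y|x) into n bins
    # (kernel width and bin count assumed).
    n, sigma = 50, 0.1
    edges = np.linspace(0.0, 1.0, n + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    width = 1.0 / n

    def density(y, x):                    # un-normalised Gaussian transition kernel
        return np.exp(-0.5 * ((y - x) / sigma) ** 2)

    # P[i, j]: average probability of jumping from bin i to bin j (midpoint rule).
    P = density(mids[None, :], mids[:, None]) * width
    P /= P.sum(axis=1, keepdims=True)     # renormalise: kernel truncated to [0, 1]

    pi = np.zeros(n); pi[0] = 1.0         # start in the leftmost bin
    for _ in range(10):                   # push forward 10 steps on the abstraction
        pi = pi @ P
    print("approx. P(X_10 in [0.4, 0.6]):", pi[(mids > 0.4) & (mids < 0.6)].sum())
    ```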

    Transient Reward Approximation for Continuous-Time Markov Chains

    We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., in the performability analysis of computer networks, in power grids, in computer virus vulnerability, and in the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well if facing a large number of different rates. We demonstrate the practical applicability and efficiency of the approach on two case studies. Comment: Accepted for publication in IEEE Transactions on Reliability.
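
    One quantity such analyses rest on is the expected reward accumulated by a CTMC up to a time horizon. A sketch of that computation via uniformisation, with an assumed toy generator and reward vector (the paper's contribution, bounding this quantity for huge chains through CTMDP abstraction, is not reproduced here):

    ```python
    import numpy as np
    from scipy.stats import poisson

    # Toy CTMC, reward rates, and horizon (all assumed).
    Q = np.array([[-1.0,  1.0, 0.0],
                  [ 0.5, -1.5, 1.0],
                  [ 0.0,  0.0, 0.0]])
    r = np.array([2.0, 1.0, 0.0])         # reward gained per unit time in each state
    pi0 = np.array([1.0, 0.0, 0.0])
    T = 2.0

    q = float(max(-np.diag(Q)))           # uniformisation rate
    P = np.eye(len(Q)) + Q / q            # uniformised DTMC

    def accumulated_reward(pi0, P, r, q, T, eps=1e-10):
        """E[int_0^T r(X_s) ds] = (1/q) * sum_n (1 - F(n; qT)) * pi0 P^n r."""
        N = int(poisson.ppf(1 - eps, q * T)) + 10
        total, vec = 0.0, pi0.copy()
        for n in range(N + 1):
            total += (1.0 - poisson.cdf(n, q * T)) * (vec @ r)
            vec = vec @ P
        return total / q

    print("expected accumulated reward:", accumulated_reward(pi0, P, r, q, T))
    ```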

    Estimation of Markov Chain via Rank-Constrained Likelihood

    This paper studies the estimation of low-rank Markov chains from empirical trajectories. We propose a non-convex estimator based on rank-constrained likelihood maximization. Statistical upper bounds are provided for the Kullback-Leibler divergence and the $\ell_2$ risk between the estimator and the true transition matrix. The estimator reveals a compressed state space of the Markov chain. We also develop a novel DC (difference of convex function) programming algorithm to tackle the rank-constrained non-smooth optimization problem. Convergence results are established. Experiments show that the proposed estimator achieves better empirical performance than other popular approaches. Comment: Accepted at ICML 2018.
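
    The sketch below is not the paper's DC programming algorithm; it is a simple spectral baseline under assumed parameters: estimate the transition matrix from a simulated trajectory, truncate it to rank R with an SVD, and project back onto the simplex, just to illustrate what a low-rank estimate of a Markov chain looks like.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, R, n_steps = 6, 2, 5000            # state count, target rank, trajectory length

    # Ground truth: a rank-R transition matrix P_true = U @ V (rows on the simplex).
    U = rng.dirichlet(np.ones(R), size=d) # d x R, rows sum to 1
    V = rng.dirichlet(np.ones(d), size=R) # R x d, rows sum to 1
    P_true = U @ V

    # Simulate one trajectory and tally empirical transition counts.
    x, counts = 0, np.zeros((d, d))
    for _ in range(n_steps):
        y = rng.choice(d, p=P_true[x])
        counts[x, y] += 1
        x = y
    P_emp = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

    # Rank-R truncation via SVD, then clip/renormalise back onto the simplex.
    Us, s, Vt = np.linalg.svd(P_emp)
    P_lr = np.clip((Us[:, :R] * s[:R]) @ Vt[:R], 0.0, None)
    P_lr /= np.maximum(P_lr.sum(axis=1, keepdims=True), 1e-12)

    for name, Pm in [("empirical", P_emp), (f"rank-{R}", P_lr)]:
        print(name, "Frobenius error:", np.linalg.norm(Pm - P_true))
    ```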

    Curvature and Concentration of Hamiltonian Monte Carlo in High Dimensions

    In this article, we analyze Hamiltonian Monte Carlo (HMC) by placing it in the setting of Riemannian geometry using the Jacobi metric, so that each step corresponds to a geodesic on a suitable Riemannian manifold. We then combine the notion of curvature of a Markov chain due to Joulin and Ollivier with the classical sectional curvature from Riemannian geometry to derive error bounds for HMC in important cases where we have positive curvature. These cases include several classical distributions such as multivariate Gaussians, and also distributions arising in the study of Bayesian image registration. The theoretical development suggests the sectional curvature as a new diagnostic tool for convergence for certain Markov chains. Comment: Comments welcome.
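
    A minimal leapfrog HMC sketch targeting a multivariate Gaussian, one of the positively curved examples the analysis covers; the step size, trajectory length, and precision matrix are assumed, and none of the curvature machinery is implemented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    Sigma_inv = np.array([[2.0, 0.3],
                          [0.3, 1.0]])    # target precision matrix (assumed)

    def grad_logpi(x):                    # gradient of log N(0, Sigma)
        return -Sigma_inv @ x

    def hamiltonian(x, p):                # potential + kinetic energy
        return 0.5 * x @ Sigma_inv @ x + 0.5 * p @ p

    def hmc_step(x, eps=0.1, L=20):
        """One HMC transition: leapfrog for L steps, then Metropolis accept/reject."""
        p = rng.normal(size=x.shape)
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_logpi(x_new)       # initial half step
        for _ in range(L - 1):
            x_new += eps * p_new
            p_new += eps * grad_logpi(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_logpi(x_new)       # final half step
        if rng.random() < np.exp(hamiltonian(x, p) - hamiltonian(x_new, p_new)):
            return x_new
        return x

    x, samples = np.zeros(2), []
    for _ in range(2000):
        x = hmc_step(x)
        samples.append(x.copy())
    print("sample covariance:\n", np.cov(np.array(samples).T))
    ```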