
    A tool for model-checking Markov chains

    Markov chains are widely used in the context of performance and reliability modeling of various systems. Model checking of such chains with respect to a given (branching) temporal logic formula has been proposed for both discrete [34, 10] and continuous time settings [7, 12]. In this paper, we describe a prototype model checker for discrete- and continuous-time Markov chains, the Erlangen-Twente Markov Chain Checker E ⊢ MC², where properties are expressed in appropriate extensions of CTL. We illustrate the general benefits of this approach and discuss the structure of the tool. Furthermore, we report on successful applications of the tool to some examples, highlighting lessons learned during the development and application of E ⊢ MC².
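    The core numerical task behind such a checker can be illustrated in a few lines. The sketch below is a hypothetical example, not the E ⊢ MC² implementation: the chain, state indices and target set are invented for illustration. It computes, for every state of a small DTMC, the probability of eventually reaching a target set, which is the basic building block for checking until-style formulas in PCTL/CSL; time-bounded operators additionally need transient analysis, but this reachability computation is the common core.

```python
# Minimal sketch of the core step in probabilistic model checking of a DTMC:
# the probability of eventually reaching a set of target states.
# The chain and target set are illustrative assumptions, not taken from the paper.
import numpy as np

# P[i, j] = probability of moving from state i to state j (rows sum to 1).
P = np.array([
    [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.1, 0.4, 0.2],
    [0.0, 0.0, 1.0, 0.0],   # state 2: absorbing "success" state (the target)
    [0.0, 0.0, 0.0, 1.0],   # state 3: absorbing "failure" state
])
target = {2}
n = len(P)

# Graph precomputation, as real checkers do: find the states that can reach the
# target at all; every other state gets probability 0.
can_reach = set(target)
changed = True
while changed:
    changed = False
    for s in range(n):
        if s not in can_reach and any(P[s, t] > 0 for t in can_reach):
            can_reach.add(s)
            changed = True

# For the remaining ("maybe") states, solve the linear system
#   x_s = sum_t P[s, t] * x_t   with x_t = 1 on target states.
maybe = sorted(can_reach - target)
A = np.eye(len(maybe)) - P[np.ix_(maybe, maybe)]
b = P[np.ix_(maybe, sorted(target))].sum(axis=1)
x = np.linalg.solve(A, b)

prob = {s: 0.0 for s in range(n)}
prob.update({s: 1.0 for s in target})
prob.update(dict(zip(maybe, x)))
print(prob)   # prob[s] answers the PCTL query "P=? [ F target ]" in state s
```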

    Towards faster numerical solution of Continuous Time Markov Chains stored by symbolic data structures

    This work considers different aspects of model-based performance and dependability analysis, a research area that analyses systems (e.g. computer, telecommunication or production systems) in order to quantify their performance and reliability. Such an analysis can be carried out already in the planning phase, before a physical system exists. All aspects treated in this work are based on finite state spaces (i.e. the models have only finitely many states) and a representation of the state graphs by Multi-Terminal Binary Decision Diagrams (MTBDDs). Currently, there are many tools that transform high-level model specifications (e.g. process algebras or Petri nets) into low-level models (e.g. Markov chains). Markov chains can be represented by sparse matrices. For complex models, very large state spaces may occur (a phenomenon known in the literature as state space explosion) and accordingly very large matrices representing the state graphs. The problem of building the model from the specification and storing the state graph can be regarded as solved: there are heuristics for compactly storing the state graph by MTBDD or Kronecker data structures, and there are efficient algorithms for model generation and functional analysis. For the quantitative analysis, problems remain due to the size of the underlying state space. This work provides some methods to alleviate these problems in the case of MTBDD-based storage of the state graph. Its contributions are threefold:
    1. For the generation of smaller state graphs in the model generation phase (which usually are easier to solve), a symbolic elimination algorithm is developed.
    2. For the calculation of steady-state probabilities of Markov chains, a multilevel algorithm is developed which allows for faster solutions.
    3. For calculating the most probable paths in a state graph, the mean time to the first failure of a system and related measures, a path-based solver is developed.
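    As a point of reference for the quantitative analysis discussed above, the sketch below shows the steady-state computation in its most elementary, dense, explicit-state form: solving πQ = 0 with Σπ = 1 for a small CTMC generator. The generator matrix is an invented example; the point of the thesis is precisely that for MTBDD-stored state graphs of realistic size this direct approach breaks down and multilevel or path-based methods are needed.

```python
# Minimal sketch of steady-state analysis of a CTMC: solve pi Q = 0, sum(pi) = 1.
# The generator matrix Q below is an illustrative assumption; real models from
# the thesis are far too large to store and factorise densely like this.
import numpy as np

# Generator matrix: off-diagonal entries are transition rates, each row sums to 0.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 4.0, -5.0,  1.0],
    [ 0.0,  2.0, -2.0],
])

n = Q.shape[0]
A = Q.T.copy()
A[-1, :] = 1.0            # replace one (redundant) balance equation by sum(pi) = 1
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                 # long-run probability of each state
```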

    The Origins of Computational Mechanics: A Brief Intellectual History and Several Clarifications

    The principal goal of computational mechanics is to define pattern and structure so that the organization of complex systems can be detected and quantified. Computational mechanics developed from efforts in the 1970s and early 1980s to identify strange attractors as the mechanism driving weak fluid turbulence via the method of reconstructing attractor geometry from measurement time series, and in the mid-1980s to estimate equations of motion directly from complex time series. In providing a mathematical and operational definition of structure it addressed weaknesses of these early approaches to discovering patterns in natural systems. Since then, computational mechanics has led to a range of results from theoretical physics and nonlinear mathematics to diverse applications---from closed-form analysis of Markov and non-Markov stochastic processes that are ergodic or nonergodic and their measures of information and intrinsic computation, to complex materials and deterministic chaos, and intelligence in Maxwellian demons, to quantum compression of classical processes and the evolution of computation and language. This brief review clarifies several misunderstandings and addresses concerns recently raised regarding early works in the field (1980s). We show that misguided evaluations of the contributions of computational mechanics are groundless and stem from a lack of familiarity with its basic goals and from a failure to consider its historical context. For all practical purposes, its modern methods and results largely supersede the early works. This not only renders recent criticism moot and shows the solid ground on which computational mechanics stands but, most importantly, shows the significant progress achieved over three decades and points to the many intriguing and outstanding challenges in understanding the computational nature of complex dynamic systems. Comment: 11 pages, 123 citations; http://csc.ucdavis.edu/~cmg/compmech/pubs/cmr.ht

    Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study

    Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is typically done manually, which is often considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically "learn" models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many questions to be answered. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Or would the analysis result based on the learned model be more accurate than the estimate we could have obtained by sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study in order to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited. Comment: 15 pages, plus 2 reference pages, accepted by FASE 2017 in ETAP
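    As a concrete, deliberately simplistic illustration of what "learning a probabilistic model from sample executions" means, the sketch below estimates a DTMC from observed traces by frequency counting, with a smoothing constant as a crude knob for the degree of generalization. The traces, state names and smoothing scheme are assumptions made for the example; they are not the paper's evolution-based algorithm.

```python
# Hedged sketch: estimate a DTMC's transition probabilities from sampled
# executions by frequency counting; `smoothing` crudely controls how much the
# learned model generalizes beyond what was observed. Illustrative only.
from collections import Counter, defaultdict

def learn_dtmc(traces, smoothing=0.0):
    """Estimate P(s -> t) from a list of observed state sequences."""
    counts = defaultdict(Counter)
    states = set()
    for trace in traces:
        states.update(trace)
        for s, t in zip(trace, trace[1:]):
            counts[s][t] += 1
    model = {}
    for s in states:
        total = sum(counts[s].values()) + smoothing * len(states)
        if total == 0:
            continue                      # no outgoing observations for s
        model[s] = {t: (counts[s][t] + smoothing) / total for t in states}
    return model

# Three hypothetical executions of the system under analysis.
traces = [
    ["init", "busy", "done"],
    ["init", "busy", "busy", "done"],
    ["init", "fail"],
]
print(learn_dtmc(traces, smoothing=0.1)["init"])
```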

    New Trends in Network Anomaly Detection


    Sensitivity of Markov chains for wireless protocols

    Network communication protocols such as the IEEE 802.11 wireless protocol are currently best modelled as Markov chains. In these situations we have some protocol parameters α and a transition matrix P(α), from which we can compute the steady-state (equilibrium) distribution z(α) and hence the final desired quantities q(α), which might be, for example, the throughput of the protocol. Typically the chain will have thousands of states, and a particular example of interest is the Bianchi chain defined later. Generally we want to optimise q, perhaps subject to some constraints that also depend on the Markov chain. To do this efficiently we need the gradient of q with respect to α, and therefore need the gradient of z and other properties of the chain with respect to α. The matrix formulas available for this involve the so-called fundamental matrix, but are there approximate gradients available which are faster and still sufficiently accurate? In some cases BT would like to do the whole calculation in computer algebra and obtain a series expansion of the equilibrium z with respect to a parameter in P. In addition to the steady state z, the same questions arise for the mixing time and the mean hitting times.
    Two qualitative features that were brought to the Study Group's attention were:
    * the transition matrix P is large, but sparse;
    * the systems of linear equations to be solved are generally singular and need some additional normalisation condition, such as is provided by using the fundamental matrix.
    We also note a third highly important property regarding applications of numerical linear algebra:
    * the transition matrix P is asymmetric.
    A realistic dimension for the matrix P in the Bianchi model described below is 8064 × 8064, but on average there are only a few nonzero entries per column. Merely storing such a large matrix in dense form would require nearly 0.5 GB using 64-bit floating-point numbers, and computing its LU factorisation takes around 80 seconds on a modern microprocessor. It is thus highly desirable to employ specialised algorithms for sparse matrices. These algorithms are generally divided between those only applicable to symmetric matrices, the most prominent being the conjugate-gradient (CG) algorithm for solving linear equations, and those applicable to general matrices. A similar division is present in the literature on numerical eigenvalue problems.
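    A small sketch may help fix ideas. The parameterised chain below is an invented 3-state stand-in for the roughly 8064-state Bianchi chain; it computes the steady state z(α) by replacing one equation of the singular system with the normalisation condition, and approximates dz/dα by finite differences, one of the cheap alternatives to the exact fundamental-matrix formula discussed above.

```python
# Hedged sketch: steady state z(alpha) of a parameterised DTMC and a
# finite-difference approximation of its gradient. The 3-state chain is an
# illustrative assumption; a realistic Bianchi chain would use sparse solvers.
import numpy as np

def transition_matrix(alpha):
    # Invented parameterised transition matrix; rows sum to 1 for 0 < alpha < 0.5.
    return np.array([
        [1 - 2 * alpha, alpha, alpha       ],
        [0.3,           0.5,   0.2         ],
        [alpha,         0.1,   0.9 - alpha ],
    ])

def steady_state(P):
    # Solve z P = z with sum(z) = 1: the singular system is made solvable by
    # replacing one balance equation with the normalisation condition.
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

def grad_steady_state(alpha, h=1e-6):
    # Central finite difference: cheap, but how accurate it is compared with the
    # exact fundamental-matrix formula is exactly the question raised in the text.
    return (steady_state(transition_matrix(alpha + h))
            - steady_state(transition_matrix(alpha - h))) / (2 * h)

alpha = 0.2
print(steady_state(transition_matrix(alpha)), grad_steady_state(alpha))
```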

    Model Checking Markov Chains with Actions and State Labels

    In the past, logics of several kinds have been proposed for reasoning about discrete- or continuous-time Markov chains. Most of these logics rely either on state labels (atomic propositions) or on transition labels (actions). However, in several applications it is useful to reason about both state properties and action sequences. For this purpose, we introduce the logic asCSL, which provides powerful means to characterize execution paths of Markov chains with actions and state labels. asCSL can be regarded as an extension of the purely state-based logic CSL (continuous stochastic logic). In asCSL, path properties are characterized by regular expressions over actions and state formulas. Thus, the truth value of path formulas does not only depend on the available actions in a given time interval, but also on the validity of certain state formulas in intermediate states. We compare the expressive power of CSL and asCSL and show that even the state-based fragment of asCSL is strictly more expressive than CSL if time intervals starting at zero are employed. Using an automaton-based technique, an asCSL formula and a Markov chain with actions and state labels are combined into a product Markov chain. For time intervals starting at zero we establish a reduction of the model checking problem for asCSL to CSL model checking on this product Markov chain. The usefulness of our approach is illustrated through an elaborate model of a scalable cellular communication system, for which several properties are formalized by means of asCSL formulas and checked using the new procedure.
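    The automaton-based reduction can be sketched compactly. In the toy example below, the chain, labels, automaton and property are all invented, and the CSL time-interval aspect is omitted entirely: a Markov chain whose transitions carry actions is combined with a small automaton reading the actions along a path, and the probability of satisfying the path property becomes an ordinary reachability probability in the product chain.

```python
# Hedged sketch of the product construction: an action/state-labelled Markov
# chain is combined with a finite automaton; checking the path property reduces
# to reachability of accepting product states. Everything below is illustrative
# and ignores the timing (CSL time-interval) aspect of asCSL.

# Markov chain: state -> list of (probability, action, next_state).
chain = {
    "s0": [(0.9, "send", "s1"), (0.1, "idle", "s0")],
    "s1": [(0.7, "ack",  "s2"), (0.3, "drop", "s3")],
    "s2": [(1.0, "idle", "s2")],
    "s3": [(1.0, "idle", "s3")],
}
labels = {"s0": set(), "s1": {"trying"}, "s2": {"delivered"}, "s3": {"lost"}}

# Automaton for the regular path property "a send followed by an ack".
# This toy property only constrains actions; `labs` is passed to show where
# intermediate state formulas would be evaluated in the full construction.
def aut_step(q, labs, act):
    if q == 0 and act == "send":
        return 1
    if q == 1 and act == "ack":
        return 2                      # accepting
    return 2 if q == 2 else 0

accepting = {2}
aut_states = range(3)

# Build the product chain; its target states are those with an accepting
# automaton component.
prod = {(s, q): [(p, (t, aut_step(q, labels[s], a))) for p, a, t in succs]
        for s, succs in chain.items() for q in aut_states}
target = {sq for sq in prod if sq[1] in accepting}

# Reachability probability in the product chain by simple value iteration.
prob = {sq: 1.0 if sq in target else 0.0 for sq in prod}
for _ in range(500):
    for sq in prod:
        if sq not in target:
            prob[sq] = sum(p * prob[t] for p, t in prod[sq])

print(prob[("s0", 0)])   # probability that the chain performs a send ... ack
```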