8 research outputs found

    Explicit Model Checking of Very Large MDP using Partitioning and Secondary Storage

    Full text link
    The applicability of model checking is hindered by the state space explosion problem in combination with limited amounts of main memory. To extend its reach, the large available capacities of secondary storage such as hard disks can be exploited. Due to the specific performance characteristics of secondary storage technologies, specialised algorithms are required. In this paper, we present a technique to use secondary storage for probabilistic model checking of Markov decision processes. It combines state space exploration based on partitioning with a block-iterative variant of value iteration over the same partitions for the analysis of probabilistic reachability and expected-reward properties. A sparse matrix-like representation is used to store partitions on secondary storage in a compact format. All file accesses are sequential, and compression can be used without affecting runtime. The technique has been implemented within the Modest Toolset. We evaluate its performance on several benchmark models of up to 3.5 billion states. In the analysis of time-bounded properties on real-time models, our method neutralises the state space explosion induced by the time bound in its entirety. Comment: The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-24953-7_1
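    A minimal sketch of the block-iterative value iteration the abstract describes, assuming an in-memory model: the MDP encoding, partition layout, and all names are illustrative assumptions, not the Modest Toolset's implementation, and the paper streams each partition from secondary storage rather than keeping everything resident.

```python
# Sketch of block-iterative value iteration for maximal reachability
# probabilities. Everything here is in memory; the paper instead loads
# one partition at a time from disk.

def block_value_iteration(mdp, partitions, goal, eps=1e-6):
    """mdp[s]: list of actions; each action is a list of (prob, successor)
    pairs. partitions: disjoint state lists covering all states."""
    v = [1.0 if s in goal else 0.0 for s in range(len(mdp))]
    changed = True
    while changed:                       # outer sweeps over all partitions
        changed = False
        for block in partitions:         # one partition "resident" at a time
            while True:                  # iterate within the block to convergence
                delta = 0.0
                for s in block:
                    if s in goal:
                        continue
                    new = max((sum(p * v[t] for p, t in act) for act in mdp[s]),
                              default=0.0)
                    delta = max(delta, abs(new - v[s]))
                    v[s] = new
                if delta < eps:
                    break
                changed = True
    return v

# Tiny example: 3 states, goal = {2}; every state can reach the goal.
mdp = [
    [[(0.5, 1), (0.5, 0)], [(1.0, 2)]],  # state 0: two actions
    [[(0.7, 2), (0.3, 0)]],              # state 1: one action
    [[(1.0, 2)]],                        # state 2: absorbing goal
]
print(block_value_iteration(mdp, [[0, 1], [2]], goal={2}))  # [1.0, 1.0, 1.0]
```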

    On compact solution vectors in Kronecker-based Markovian analysis

    Get PDF
    State based analysis of stochastic models for performance and dependability often requires the computation of the stationary distribution of a multidimensional continuous-time Markov chain (CTMC). The infinitesimal generator underlying a multidimensional CTMC with a large reachable state space can be represented compactly in the form of a block matrix in which each nonzero block is expressed as a sum of Kronecker products of smaller matrices. However, solution vectors used in the analysis of such Kronecker-based Markovian representations require memory proportional to the size of the reachable state space. This implies that memory allocated to solution vectors becomes a bottleneck as the size of the reachable state space increases. Here, it is shown that the hierarchical Tucker decomposition (HTD) can be used with adaptive truncation strategies to store the solution vectors during Kronecker-based Markovian analysis compactly and still carry out the basic operations including vector–matrix multiplication in Kronecker form within Power, Jacobi, and Generalized Minimal Residual methods. Numerical experiments on multidimensional problems of varying sizes indicate that larger memory savings are obtained with the HTD approach as the number of dimensions increases. © 2017 Elsevier B.V.
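    A minimal sketch of the kernel that makes Kronecker-based representations viable: a matrix-vector product with a Kronecker-structured matrix computed from the small factors alone, without ever forming the full product. The matrix names and sizes are illustrative, and the HTD compression of the vector itself is not reproduced here.

```python
# y = (A kron B) @ x using only the small factors, never the m*n x m*n matrix.
import numpy as np

def kron_matvec(A, B, x):
    """A is m x m, B is n x n, x has length m*n (row-major block ordering)."""
    m, n = A.shape[0], B.shape[0]
    X = x.reshape(m, n)          # row-major reshape matches numpy's kron layout
    return (A @ X @ B.T).reshape(-1)

rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((4, 4))
x = rng.random(12)
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```

    Forming np.kron(A, B) costs O(m²n²) time and memory, while the factored product costs only O(mn(m+n)) operations; the same idea extends to sums of Kronecker products with more than two factors, which is the block structure the abstract describes.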

    Exploring the influence of big data on city transport operations: a Markovian approach

    Get PDF
    © 2017 Emerald Publishing Limited.
    Purpose: The purpose of this paper is to advance knowledge of the transformative potential of big data for city-based transport models. The central question guiding this paper is: how could big data transform smart city transport operations? In answering this question, the authors present initial results from a Markov study. However, the authors also urge caution about the transformative potential of big data and highlight the risks of city and organizational adoption. A theoretical framework is presented together with an associated scenario, which guides the development of a Markov model.
    Design/methodology/approach: A model with several scenarios is developed to explore a theoretical framework focussed on matching transport demands (of people and freight mobility) with city transport service provision using big data. The model illustrates how sharing transport load (and capacity) in a smart city can improve the efficiency of meeting demand for city services.
    Findings: This modelling study is a preliminary stage of an investigation into how big data could be used to redefine and enable new operational models. The study provides new understanding about load sharing and optimization in a smart city context. In essence, the authors demonstrate how big data could be used to improve transport efficiency and lower externalities in a smart city, and how further improvement could follow from a car-free city environment, autonomous vehicles, and shared resource capacity among providers.
    Research limitations/implications: The research relied on a Markov model and the numerical solution of its steady-state probability vector to illustrate the transformation of transport operations management (OM) in the future city context. More in-depth analysis and more discrete modelling are clearly needed to assist the implementation of big data initiatives and facilitate new innovations in OM. The work complements and extends that of Setia and Patel (2013), who theoretically link information system design to operational absorptive capacity capabilities.
    Practical implications: The study implies that transport operations would need to be reorganized to lower the CO2 footprint. The logistics aspect can be seen as a move from individual firms optimizing their own transportation supply to a shared, collaborative load and resource system. Such ideas are radical changes driven by, or leading to, decentralized rather than centralized transport solutions (Caplice, 2013).
    Social implications: The growth of cities and urban areas in the twenty-first century has put more pressure on resources and the conditions of urban life. This paper is a first step in building theory, knowledge, and critical understanding of the social implications posed by the growth of cities and the role that big data and smart cities could play in developing a resilient and sustainable city transport system.
    Originality/value: Despite the importance of OM to big data implementation, for both practitioners and researchers, there has yet to be a systematic analysis of its implementation and its absorptive capacity contribution to building capabilities, at either the city system or organizational level. As such, the Markov model makes a preliminary contribution to the literature by integrating big data capabilities with OM capabilities and the resulting improvements in system absorptive capacity.
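    The study above rests on computing a steady-state probability vector. A minimal sketch of that computation for a continuous-time Markov chain follows; the 3-state generator is invented for illustration and is not the paper's transport model.

```python
# Solve pi @ Q = 0 with sum(pi) = 1 for the stationary distribution of a CTMC.
import numpy as np

def steady_state(Q):
    """Replace one (redundant) balance equation with the normalisation
    constraint, then solve the resulting nonsingular linear system."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0               # last equation becomes sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Illustrative 3-state generator (rows sum to zero).
Q = np.array([[-0.4,  0.3,  0.1],
              [ 0.2, -0.5,  0.3],
              [ 0.1,  0.4, -0.5]])
pi = steady_state(Q)
print(pi, pi @ Q)                # pi sums to 1; pi @ Q is ~0
```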

    A Symbolic Out-of-Core Solution Method for Markov Models

    No full text
    Despite considerable effort, the state-space explosion problem remains an issue in the analysis of Markov models. Given structure, symbolic representations can result in very compact encoding of the models. However, a major obstacle for symbolic methods is the need to store the probability vector(s) explicitly in main memory. In this paper, we present a novel algorithm which relaxes these memory limitations by storing the probability vector on disk. The algorithm has been implemented using an MTBDD-based data structure to store the matrix and an array to store the vector. We report on experimental results for two benchmark models, a Kanban manufacturing system and a flexible manufacturing system, with models as large as 133 million states.
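    A minimal sketch of the out-of-core idea under stated assumptions: the iteration vector lives on disk and is streamed through memory block by block. Plain dense blocks stand in for the paper's MTBDD-encoded matrix, the result vector is held in memory for brevity, and the file naming is invented.

```python
# One power-iteration step y = x @ P with the vector read and written blockwise.
import numpy as np

BLOCK = 2                                       # states per vector block (demo size)

def power_step_out_of_core(P_blocks, n_blocks, x_path):
    """P_blocks[(i, j)] is the BLOCK x BLOCK submatrix of row-block i,
    column-block j. The x-blocks are updated in place on disk."""
    y = np.zeros(n_blocks * BLOCK)
    for i in range(n_blocks):                   # load one x-block at a time
        x_i = np.load(f"{x_path}_{i}.npy")
        for j in range(n_blocks):               # scatter its contributions
            y[j * BLOCK:(j + 1) * BLOCK] += x_i @ P_blocks[(i, j)]
    for j in range(n_blocks):                   # write result blocks back to disk
        np.save(f"{x_path}_{j}.npy", y[j * BLOCK:(j + 1) * BLOCK])

# Demo: a 4-state DTMC split into 2x2 blocks; initial distribution on disk.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.2, 0.3, 0.5, 0.0],
              [0.0, 0.0, 0.6, 0.4],
              [0.3, 0.0, 0.0, 0.7]])
P_blocks = {(i, j): P[i*BLOCK:(i+1)*BLOCK, j*BLOCK:(j+1)*BLOCK]
            for i in range(2) for j in range(2)}
x0 = np.array([1.0, 0.0, 0.0, 0.0])
for i in range(2):
    np.save(f"xblock_{i}.npy", x0[i*BLOCK:(i+1)*BLOCK])
power_step_out_of_core(P_blocks, 2, "xblock")
print(np.concatenate([np.load(f"xblock_{j}.npy") for j in range(2)]))  # x0 @ P
```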

    A symbolic out-of-core solution method for Markov models

    No full text
    SIGLE record. Available from the British Library Document Supply Centre (BLDSC), DSC:8092.7029(02-8), United Kingdom.