
    On compact solution vectors in Kronecker-based Markovian analysis

    State-based analysis of stochastic models for performance and dependability often requires computing the stationary distribution of a multidimensional continuous-time Markov chain (CTMC). The infinitesimal generator underlying a multidimensional CTMC with a large reachable state space can be represented compactly as a block matrix in which each nonzero block is expressed as a sum of Kronecker products of smaller matrices. However, the solution vectors used in the analysis of such Kronecker-based Markovian representations require memory proportional to the size of the reachable state space, so the memory allocated to solution vectors becomes a bottleneck as that size increases. Here, it is shown that the hierarchical Tucker decomposition (HTD) can be used with adaptive truncation strategies to store the solution vectors compactly during Kronecker-based Markovian analysis while still carrying out the basic operations, including vector–matrix multiplication in Kronecker form, within the Power, Jacobi, and Generalized Minimal Residual methods. Numerical experiments on multidimensional problems of varying sizes indicate that the memory savings obtained with the HTD approach grow as the number of dimensions increases. © 2017 Elsevier B.V.
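
    The Power, Jacobi, and GMRES solvers named above all hinge on multiplying a solution vector with a generator kept in Kronecker form. The following is a minimal sketch of that kernel and of a uniformized Power iteration, assuming a dense NumPy vector rather than the paper's HTD-compressed one; the function names, the two-factor terms, and the uniformization rate alpha are illustrative, not the authors' code.

    import numpy as np

    def vec_kron_mult(x, A, B):
        """Compute x @ np.kron(A, B) without forming the Kronecker product.

        For A (m x m), B (n x n), and x of length m*n, the identity
        x (A kron B) = vec(A^T X B) holds, with X the row-major (m, n)
        reshape of x.
        """
        m, n = A.shape[0], B.shape[0]
        X = x.reshape(m, n)
        return (A.T @ X @ B).reshape(-1)

    def power_method(terms, x, alpha, iters=500):
        """Power iterations x <- x (I + Q/alpha), where the generator Q is
        given as a sum of Kronecker products, terms = [(A_1, B_1), ...],
        and alpha bounds the largest exit rate (uniformization)."""
        for _ in range(iters):
            y = x.copy()
            for A, B in terms:
                y += vec_kron_mult(x, A, B) / alpha
            x = y / y.sum()  # guard against rounding drift
        return x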

    On vector-Kronecker product multiplication with rectangular factors

    The infinitesimal generator matrix underlying a multidimensional Markov chain can be represented compactly using sums of Kronecker products of small rectangular matrices. For such compact representations, analysis methods based on vector-Kronecker product multiplication need to be employed. When the factors in the Kronecker product terms are relatively dense, vector-Kronecker product multiplication can be performed efficiently by the shuffle algorithm. When the factors are relatively sparse, it may be more efficient to obtain the nonzero elements of the generator matrix in Kronecker form on the fly and multiply them with the corresponding elements of the vector. This work proposes a modification to the shuffle algorithm that multiplies the relevant elements of the vector with submatrices of the factors from which zero rows and columns are omitted. This approach avoids floating-point operations that evaluate to zero during the multiplication and can reduce the amount of memory used. Numerical experiments on a large number of models indicate that in many cases the modified shuffle algorithm performs fewer floating-point operations than the shuffle algorithm and the algorithm that generates nonzeros on the fly, sometimes achieving the minimum number of floating-point operations with as little memory as possible. © 2015 Society for Industrial and Applied Mathematics
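
    As a hedged illustration of the baseline the paper modifies, here is a sketch of shuffle-style vector-Kronecker product multiplication for an arbitrary number of rectangular factors; the paper's modification would additionally restrict each factor to its nonzero rows and columns, which this sketch omits.

    import numpy as np

    def shuffle_multiply(x, factors):
        """Shuffle-style computation of x @ kron(factors[0], ..., factors[-1]).

        Each factor A_k may be rectangular with shape (r_k, c_k); x must
        have length prod(r_k) and the result has length prod(c_k). The
        Kronecker product is never formed: the factors are applied one
        mode at a time, which is what makes the algorithm efficient.
        """
        X = x.reshape([A.shape[0] for A in factors])
        for k, A in enumerate(factors):
            X = np.tensordot(X, A, axes=([k], [0]))  # contract mode k with the rows of A_k
            X = np.moveaxis(X, -1, k)                # restore the mode ordering
        return X.reshape(-1)

    For two square factors the result agrees with x @ np.kron(A1, A2), but only matrices of the factor sizes are ever touched.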

    Entrograms and coarse graining of dynamics on complex networks

    Using an information-theoretic point of view, we investigate how a dynamics acting on a network can be coarse grained through the use of graph partitions. Specifically, we are interested in how aggregating the state space of a Markov process according to a partition affects the resulting lower-dimensional dynamics. We highlight that for a dynamics on a particular graph there may be multiple coarse-grained descriptions that capture different, incomparable features of the original process. For instance, a coarse graining induced by one partition may be commensurate with a time-scale separation in the dynamics, while another may correspond to a different lower-dimensional dynamics that preserves the Markov property of the original process. Taking inspiration from the literature on Computational Mechanics, we find that a convenient tool to summarise and visualise such dynamical properties of a coarse-grained model (partition) is the entrogram. The entrogram gathers certain information-theoretic measures that quantify how information flows across time steps. These quantities include the entropy rate, as well as a measure of the memory contained in the process, i.e., how well the dynamics can be approximated by a first-order Markov process. We use the entrogram to investigate how specific macro-scale connection patterns in the state-space transition graph of the original dynamics result in desirable properties of coarse-grained descriptions. We thereby provide a fresh perspective on the interplay between structure and dynamics in networks, and on the process of partitioning, from an information-theoretic point of view. We focus on networks that may be approximated by either a core-periphery or a clustered organization, and highlight that each of these coarse-grained descriptions can capture different aspects of a Markov process acting on the network. Comment: 17 pages, 6 figures
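
    The entrogram itself gathers multi-step information measures; as a hedged sketch of its simplest ingredients, the following computes the entropy rate of a Markov chain and the coarse-grained transition matrix induced by a partition, with states weighted within each block by their stationary probabilities. Function names are illustrative, not the authors'.

    import numpy as np

    def stationary(P):
        """Stationary distribution of a row-stochastic P (left eigenvector
        for the eigenvalue closest to 1, normalized to sum to 1)."""
        vals, vecs = np.linalg.eig(P.T)
        v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
        return v / v.sum()

    def entropy_rate(P):
        """Entropy rate h = -sum_i pi_i sum_j P_ij log2 P_ij."""
        pi = stationary(P)
        logs = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
        return -np.sum(pi[:, None] * P * logs)

    def lump(P, blocks):
        """Transition matrix of the coarse-grained chain, where blocks[i]
        is the block index of state i."""
        pi = stationary(P)
        n, k = len(P), max(blocks) + 1
        M = np.zeros((n, k))
        M[np.arange(n), blocks] = 1.0       # indicator matrix of the partition
        W = (pi[:, None] * M) / (M.T @ pi)  # P(state i | block a)
        return W.T @ P @ M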

    Exact Analysis of TTL Cache Networks: The Case of Caching Policies driven by Stopping Times

    TTL caching models have recently regained significant research interest, largely due to their ability to fit popular caching policies such as LRU. This paper advances the state-of-the-art analysis of TTL-based cache networks by developing two exact methods with orthogonal generality and computational complexity. The first method generalizes existing results for line networks under renewal requests to the broad class of caching policies in which evictions are driven by stopping times. The obtained results are further generalized, using the second method, to feedforward networks with Markov arrival process (MAP) requests. MAPs are particularly suitable for non-line networks because they are closed not only under superposition and splitting, as is known, but also under input-output caching operations, as proven herein for phase-type TTL distributions. The crucial benefit of the two closure properties is that they jointly enable the first exact analysis of feedforward networks of TTL caches in great generality.
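
    The paper's methods are exact; purely as a hedged illustration of the TTL abstraction itself, the sketch below estimates the hit ratio of a single TTL cache under renewal requests by Monte Carlo. The parameters interarrival, ttl, and reset_on_hit are ours: resetting a deterministic TTL on every hit is the standard way a TTL cache mimics LRU-like behavior.

    import random

    def ttl_hit_ratio(interarrival, ttl, n=100_000, reset_on_hit=True):
        """Monte Carlo hit ratio of one object in a single TTL cache.

        interarrival() samples i.i.d. request gaps (renewal requests);
        ttl() samples the timer set on a miss (and on a hit, when the
        policy resets the timer).
        """
        hits, expiry, t = 0, -1.0, 0.0  # expiry < 0 means the cache is empty
        for _ in range(n):
            t += interarrival()
            if t < expiry:
                hits += 1
                if reset_on_hit:
                    expiry = t + ttl()
            else:            # miss: fetch the object and start a fresh timer
                expiry = t + ttl()
        return hits / n

    For Poisson requests of rate lam and a fixed TTL T with reset on hit, the estimate converges to 1 - exp(-lam * T), the renewal-theoretic hit probability.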

    Tensor Networks for Big Data Analytics and Large-Scale Optimization Problems

    In this paper we review basic and emerging models and associated algorithms for large-scale tensor networks, especially Tensor Train (TT) decompositions, using novel mathematical and graphical representations. We discuss the concept of tensorization (i.e., creating very high-order tensors from lower-order original data) and the super compression of data achieved via quantized tensor train (QTT) networks. The purpose of tensorization and quantization is to achieve, via low-rank tensor approximations, "super" compression and a meaningful, compact representation of structured data. The main objective of this paper is to show how tensor networks can be used to solve a wide class of big-data optimization problems (that are far from tractable by classical numerical methods) by applying tensorization, performing all operations using relatively small matrices and tensors, and iteratively applying optimized and approximate tensor contractions. Keywords: tensor networks, tensor train (TT) decompositions, matrix product states (MPS), matrix product operators (MPO), basic tensor operations, tensorization, distributed representation of data, optimization problems for very large-scale problems: generalized eigenvalue decomposition (GEVD), PCA/SVD, canonical correlation analysis (CCA). Comment: arXiv admin note: text overlap with arXiv:1403.204
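
    The TT format mentioned above factors a d-way tensor into a chain of three-way cores. A minimal sketch of the standard TT-SVD construction follows, assuming a dense NumPy input and a simple relative singular-value cutoff eps rather than a rigorously error-controlled truncation.

    import numpy as np

    def tt_svd(tensor, eps=1e-10):
        """Tensor-train decomposition via sequential truncated SVDs.

        Returns cores G_k of shape (r_{k-1}, n_k, r_k) such that
        tensor[i1, ..., id] ~= G_1[:, i1, :] @ ... @ G_d[:, id, :].
        """
        dims, cores, r = tensor.shape, [], 1
        W = tensor
        for n in dims[:-1]:
            W = W.reshape(r * n, -1)
            U, s, Vt = np.linalg.svd(W, full_matrices=False)
            keep = max(1, int(np.sum(s > eps * s[0])))  # simple relative truncation
            cores.append(U[:, :keep].reshape(r, n, keep))
            W = s[:keep, None] * Vt[:keep]
            r = keep
        cores.append(W.reshape(r, dims[-1], 1))
        return cores

    Quantization (the QTT format) amounts to reshaping each mode of size 2^L into L binary modes before running the same procedure, which is where the "super" compression of structured data comes from.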