
    Numerical Verification of Affine Systems with up to a Billion Dimensions

    Affine systems reachability is the basis of many verification methods. With further computation, methods exist to reason about richer models with inputs, nonlinear differential equations, and hybrid dynamics. As such, the scalability of affine systems verification is a prerequisite to scalable analysis for more complex systems. In this paper, we improve the scalability of affine systems verification in terms of the number of dimensions (variables) in the system. The reachable states of affine systems can be written in terms of the matrix exponential, and safety checking can be performed at specific time steps with linear programming. Unfortunately, for large systems with many state variables, this direct approach requires intractable amounts of both memory and computation time. We overcome these challenges by combining several methods that leverage common problem structure. Memory is reduced by exploiting initial states that are not full-dimensional and safety properties (outputs) defined over a few linear projections of the state variables. Computation time is saved by using numerical simulations to compute only the projections of the matrix exponential relevant to the verification problem. Since large systems often have sparse dynamics, we use Krylov-subspace simulation approaches based on the Arnoldi or Lanczos iterations. Our method produces accurate counter-examples when properties are violated and, in the extreme case with sufficient problem structure, can analyze a system with one billion real-valued state variables.
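
    The core loop described above, advancing a low-dimensional initial set through the matrix exponential and maximizing a linear output with a linear program at each time step, can be illustrated with a short sketch. The code below is our own illustration, not the authors' implementation: it uses SciPy's expm_multiply in place of the paper's Arnoldi/Lanczos simulations, and the names (max_output_at_steps, V, lo, hi, c) are hypothetical.

    # A minimal sketch, assuming a sparse dynamics matrix A, an initial set
    # {V @ a : lo <= a <= hi} spanned by a few basis columns V, and one linear
    # output direction c whose maximum over the reachable set is of interest.
    import numpy as np
    from scipy.sparse.linalg import expm_multiply
    from scipy.optimize import linprog

    def max_output_at_steps(A, V, lo, hi, c, step, num_steps):
        """Maximum of c @ x(t) over the reachable set at each of num_steps time steps."""
        basis = np.asarray(V, dtype=float)           # columns span the initial affine set
        maxima = []
        for _ in range(num_steps):
            basis = expm_multiply(A * step, basis)   # advance every basis column by one step
            coeffs = c @ basis                       # output as a function of initial coefficients a
            res = linprog(-coeffs, bounds=list(zip(lo, hi)))  # maximize coeffs @ a over the box
            maxima.append(-res.fun)
        return np.array(maxima)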

    Bayesian Inference of Log Determinants

    The log-determinant of a kernel matrix appears in a variety of machine learning problems, ranging from determinantal point processes and generalized Markov random fields, through to the training of Gaussian processes. Exact calculation of this term is often intractable when the size of the kernel matrix exceeds a few thousand. In the spirit of probabilistic numerics, we reinterpret the problem of computing the log-determinant as a Bayesian inference problem. In particular, we combine prior knowledge in the form of bounds from matrix theory and evidence derived from stochastic trace estimation to obtain probabilistic estimates for the log-determinant and its associated uncertainty within a given computational budget. Beyond its novelty and theoretical appeal, the performance of our proposal is competitive with state-of-the-art approaches to approximating the log-determinant, while also quantifying the uncertainty due to budget-constrained evidence. Comment: 12 pages, 3 figures
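
    One ingredient mentioned above, stochastic trace estimation, uses the identity log det(K) = tr(log K) and estimates the trace with random probe vectors. The sketch below is only illustrative: it applies log K through a dense eigendecomposition, which the paper's budget-constrained setting specifically avoids, and the function name and defaults are ours.

    # A minimal sketch of Hutchinson-style trace estimation of log det(K),
    # assuming K is a symmetric positive definite NumPy array.
    import numpy as np

    def hutchinson_logdet(K, num_probes=30, rng=np.random.default_rng(0)):
        """Estimate log det(K) = tr(log K) and a standard error for the estimate."""
        n = K.shape[0]
        evals, evecs = np.linalg.eigh(K)           # stand-in for a matrix-free action of log K
        logK = (evecs * np.log(evals)) @ evecs.T
        samples = []
        for _ in range(num_probes):
            z = rng.choice([-1.0, 1.0], size=n)    # Rademacher probe vector
            samples.append(z @ logK @ z)           # unbiased sample of tr(log K)
        return np.mean(samples), np.std(samples) / np.sqrt(num_probes)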

    Fast matrix computations for pair-wise and column-wise commute times and Katz scores

    We first explore methods for approximating the commute time and Katz score between a pair of nodes. These methods are based on the approach of matrices, moments, and quadrature developed in the numerical linear algebra community. They rely on the Lanczos process and provide upper and lower bounds on an estimate of the pair-wise scores. We also explore methods to approximate the commute times and Katz scores from a node to all other nodes in the graph. Here, our approach for the commute times is based on a variation of the conjugate gradient algorithm, and it provides an estimate of all the diagonals of the inverse of a matrix. Our technique for the Katz scores is based on exploiting an empirical localization property of the Katz matrix. We adapt algorithms used for personalized PageRank computation to these Katz scores and theoretically show that this approach is convergent. We evaluate these methods on 17 real-world graphs ranging in size from 1,000 to 1,000,000 nodes. Our results show that our pair-wise commute time method and column-wise Katz algorithm both have attractive theoretical properties and empirical performance. Comment: 35 pages, journal version of http://dx.doi.org/10.1007/978-3-642-18009-5_13 which has been submitted for publication. Please see http://www.cs.purdue.edu/homes/dgleich/publications/2011/codes/fast-katz/ for supplemental code
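
    For context on the column-wise computation: the Katz scores from a node j to all others form column j of (I - alpha A)^{-1} - I, so they can be read off a single symmetric positive definite linear solve when the graph is undirected and alpha is below 1/lambda_max. The sketch below is our own illustration of that system using conjugate gradients; the paper's push-style adaptation of personalized PageRank algorithms exploits localization instead, and all names here are hypothetical.

    # A minimal sketch of column-wise Katz scores, assuming A is a float CSR
    # adjacency matrix of an undirected graph.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import cg, eigsh

    def katz_column(A, j, alpha=None):
        """Katz scores from node j to every node: column j of (I - alpha A)^{-1} - I."""
        n = A.shape[0]
        if alpha is None:
            lam_max = abs(eigsh(A, k=1, return_eigenvectors=False)[0])
            alpha = 0.85 / lam_max                 # keep I - alpha*A positive definite
        M = sp.identity(n, format="csr") - alpha * A
        e_j = np.zeros(n)
        e_j[j] = 1.0
        x, info = cg(M, e_j)                       # conjugate gradients on the SPD system
        assert info == 0, "CG did not converge"
        return x - e_j                             # drop the k = 0 (identity) term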

    Approximating spectral densities of large matrices

    In physics, it is sometimes desirable to compute the so-called Density of States (DOS), also known as the spectral density, of a real symmetric matrix A. The spectral density can be viewed as a probability density that measures the likelihood of finding eigenvalues near a given point on the real line. The most straightforward way to obtain this density is to compute all eigenvalues of A. But this approach is generally costly and wasteful, especially for matrices of large dimension. There exist alternative methods that allow us to estimate the spectral density function at much lower cost. The major computational cost of these methods is in multiplying A with a number of vectors, which makes them appealing for large-scale problems where products of the matrix A with arbitrary vectors are relatively inexpensive. This paper carefully defines the problem of estimating the spectral density and discusses how to measure the accuracy of an approximate spectral density. It then surveys a few known methods for estimating the spectral density and proposes some new variations of existing methods. All methods are discussed from a numerical linear algebra point of view.
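
    One representative method of the kind surveyed, stochastic Lanczos quadrature, estimates the spectral density using only matrix-vector products: each random probe yields a small tridiagonal matrix whose Ritz values and weights define a quadrature rule for the density. The sketch below is our own illustration of that idea; function names and parameters are not taken from the paper.

    # A minimal sketch, assuming matvec(x) returns A @ x for a real symmetric A of size n.
    import numpy as np

    def lanczos(matvec, v0, m):
        """m-step Lanczos with full reorthogonalization; returns tridiagonal coefficients."""
        n = v0.size
        V = np.zeros((m + 1, n))
        alphas, betas = np.zeros(m), np.zeros(m)
        V[0] = v0 / np.linalg.norm(v0)
        for k in range(m):
            w = matvec(V[k])
            alphas[k] = V[k] @ w
            w -= V[:k + 1].T @ (V[:k + 1] @ w)     # orthogonalize against all previous vectors
            beta = np.linalg.norm(w)
            betas[k] = beta
            if beta < 1e-12:                       # invariant subspace found; stop early
                break
            V[k + 1] = w / beta
        return alphas[:k + 1], betas[:k]

    def spectral_density(matvec, n, m=40, num_probes=10, rng=np.random.default_rng(0)):
        """Return (nodes, weights) whose weighted histogram approximates the DOS."""
        nodes, weights = [], []
        for _ in range(num_probes):
            a, b = lanczos(matvec, rng.standard_normal(n), m)
            T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
            theta, U = np.linalg.eigh(T)           # Ritz values and vectors of the small T
            nodes.extend(theta)
            weights.extend(U[0] ** 2 / num_probes) # Gauss quadrature weights, averaged over probes
        return np.array(nodes), np.array(weights)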

    Efficient solution of parabolic equations by Krylov approximation methods

    Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of very small dimension to a known vector, which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
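
    A compact sketch of the projection idea: build an m-dimensional Krylov basis with the Arnoldi process, exponentiate only the small projected (Hessenberg) matrix, and lift the result back. In the sketch below, which is our own illustration and not the paper's code, scipy.linalg.expm stands in for the rational approximation applied to the small matrix, and the names and defaults are ours.

    # A minimal sketch, assuming matvec(x) returns A @ x for the large (possibly sparse) matrix A.
    import numpy as np
    from scipy.linalg import expm

    def krylov_expv(matvec, v, t, m=30):
        """Approximate exp(t*A) @ v using an m-dimensional Arnoldi projection."""
        n = v.size
        beta = np.linalg.norm(v)
        V = np.zeros((n, m))
        H = np.zeros((m, m))
        V[:, 0] = v / beta
        for j in range(m):
            w = matvec(V[:, j])
            for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            h_next = np.linalg.norm(w)
            if j + 1 < m:
                if h_next < 1e-12:
                    m = j + 1                      # happy breakdown: result is exact on this subspace
                    break
                H[j + 1, j] = h_next
                V[:, j + 1] = w / h_next
        Vm, Hm = V[:, :m], H[:m, :m]
        e1 = np.zeros(m)
        e1[0] = 1.0
        return beta * Vm @ (expm(t * Hm) @ e1)     # only the small matrix is exponentiated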