    Computing Least Fixed Points of Probabilistic Systems of Polynomials

    We study systems of equations of the form X1 = f1(X1, ..., Xn), ..., Xn = fn(X1, ..., Xn), where each fi is a polynomial with nonnegative coefficients that add up to 1. The least nonnegative solution, say mu, of such an equation system is central to problems from various areas, such as physics, biology, computational linguistics, and probabilistic program verification. We give a simple and strongly polynomial algorithm to decide whether mu = (1, ..., 1) holds. Furthermore, we present an algorithm that computes reliable sequences of lower and upper bounds on mu, converging linearly to mu. Our algorithm has these features despite using inexact arithmetic for efficiency. We report on experiments that show the performance of our algorithms. (Published in the Proceedings of the 27th International Symposium on Theoretical Aspects of Computer Science (STACS); a technical report is also available via arxiv.org.)
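
    To make the problem concrete, the sketch below applies plain Kleene (fixed-point) iteration to the single equation X = p*X^2 + (1 - p), whose least nonnegative solution is the extinction probability of a simple branching process. This is a minimal illustration of the equation systems studied here, not the paper's algorithm; the function name and tolerance are illustrative assumptions.

```python
# Minimal sketch: Kleene iteration for the least nonnegative solution of
# X = p*X^2 + (1 - p). The iterates x_0 = 0, x_{t+1} = f(x_t) increase
# monotonically to the least fixed point mu. This illustrates the problem
# setting only; it is not the paper's Newton-based algorithm.

def least_fixed_point(p, tol=1e-12, max_iter=10**6):
    x = 0.0
    for _ in range(max_iter):
        x_next = p * x * x + (1.0 - p)
        if x_next - x < tol:  # iterates are monotone, so this is a valid stop
            return x_next
        x = x_next
    return x  # convergence degenerates at the critical point p = 1/2

if __name__ == "__main__":
    # For p <= 1/2 the least solution is mu = 1; for p > 1/2 it is (1-p)/p.
    for p in (0.3, 0.5, 0.7):
        print(p, least_fixed_point(p))
```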

    Computational linear algebra over finite fields

    We present algorithms for the efficient solution of linear algebra problems over finite fields.
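
    Since this abstract gives no specifics, here is a hedged sketch of the most basic building block of exact linear algebra over finite fields: Gaussian elimination over a prime field GF(p). The function name and example are illustrative, not drawn from the work itself.

```python
# Hedged sketch: row reduction over GF(p) for prime p. All arithmetic is
# done modulo p; division by a pivot uses the modular inverse.

def rank_mod_p(matrix, p):
    """Return the rank of an integer matrix viewed over GF(p)."""
    m = [row[:] for row in matrix]  # work on a copy
    rows, cols = len(m), len(m[0])
    rank = 0
    for c in range(cols):
        # find a row with a nonzero entry in column c below the pivot rows
        pivot = next((r for r in range(rank, rows) if m[r][c] % p), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][c], -1, p)  # modular inverse (Python 3.8+)
        m[rank] = [(x * inv) % p for x in m[rank]]
        for r in range(rows):
            if r != rank and m[r][c] % p:
                f = m[r][c]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

if __name__ == "__main__":
    print(rank_mod_p([[1, 2], [2, 4]], 5))  # prints 1: rows are dependent mod 5
```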

    Bit-Vector Model Counting using Statistical Estimation

    Approximate model counting for bit-vector SMT formulas (generalizing #SAT) has many applications, such as probabilistic inference and quantitative information-flow security, but it is computationally difficult. Adding random parity constraints (XOR streamlining) and then checking satisfiability is an effective approximation technique, but it requires a prior hypothesis about the model count to produce useful results. We propose an approach inspired by statistical estimation to continually refine a probabilistic estimate of the model count for a formula, so that each XOR-streamlined query yields as much information as possible. We implement this approach, with an approximate probability model, as a wrapper around an off-the-shelf SMT solver or SAT solver. Experimental results show that the implementation is faster than the most similar previous approaches, which used simpler refinement strategies. The technique also lets us model count formulas over floating-point constraints, which we demonstrate with an application to a vulnerability in differential privacy mechanisms.
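
    As a toy illustration of the XOR-streamlining idea, the sketch below uses brute-force enumeration in place of a SAT/SMT solver and a simple majority-of-trials rule in place of the paper's statistical refinement; every name and parameter here is an illustrative assumption. Each random parity constraint keeps any fixed model with probability 1/2, so if the formula stays satisfiable after k constraints, the model count is plausibly around 2^k.

```python
# Toy sketch of XOR streamlining for approximate model counting.
import itertools
import random

def count_models(formula, n, constraints=()):
    """Brute-force count of n-bit assignments satisfying `formula` and all
    XOR constraints; each constraint is a (variable_mask, parity_bit) pair."""
    count = 0
    for bits in itertools.product([0, 1], repeat=n):
        if formula(bits) and all(
                sum(bits[i] for i in range(n) if mask >> i & 1) % 2 == b
                for mask, b in constraints):
            count += 1
    return count

def xor_streamline(formula, n, trials=50):
    """Return the largest k for which the formula plus k random XOR
    constraints stays satisfiable in most trials; 2**k estimates the count."""
    best = 0
    for k in range(n + 1):
        sat = sum(
            count_models(formula, n,
                         [(random.getrandbits(n), random.getrandbits(1))
                          for _ in range(k)]) > 0
            for _ in range(trials))
        if sat * 2 < trials:
            break
        best = k
    return best

if __name__ == "__main__":
    # Formula over 6 variables with exactly 16 models: two bits are fixed.
    f = lambda bits: bits[0] == 1 and bits[1] == 0
    random.seed(0)
    k = xor_streamline(f, 6)
    print("estimated count ~ 2^%d = %d (true count: 16)" % (k, 2 ** k))
```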

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
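
    A minimal sketch of the basic two-stage scheme the abstract describes: randomly sample the range of the matrix with a Gaussian test matrix, orthonormalize the samples, then factor the small compressed matrix deterministically. The oversampling value and names are illustrative assumptions, and this Gaussian variant costs O(mnk) flops rather than the O(mn log(k)) of the structured sampling mentioned above.

```python
# Sketch of a randomized rank-k SVD via range finding, under the
# assumptions stated in the lead-in above.
import numpy as np

def randomized_svd(A, k, oversample=10):
    """Rank-k approximate SVD of A via a randomized range finder."""
    # Stage A: sample the range of A and orthonormalize the samples.
    Omega = np.random.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)  # orthonormal basis, m x (k + oversample)
    # Stage B: factor the small compressed matrix deterministically.
    B = Q.T @ A                     # (k + oversample) x n
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], s[:k], Vt[:k, :]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 80)) @ rng.standard_normal((80, 300))  # rank 80
    U, s, Vt = randomized_svd(A, k=80)
    err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
    print("relative approximation error:", err)  # near machine precision
```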

    Probabilistic Error Analysis for Inner Products

    Probabilistic models are proposed for bounding the forward error in the numerically computed inner product (dot product, scalar product) of two real n-vectors. We derive probabilistic perturbation bounds, as well as probabilistic roundoff error bounds for the sequential accumulation of the inner product. These bounds are non-asymptotic, explicit, and make minimal assumptions on perturbations and roundoffs. The perturbations are represented as independent, bounded, zero-mean random variables, and the probabilistic perturbation bound is based on Azuma's inequality. The roundoffs are also represented as bounded, zero-mean random variables. The first probabilistic bound assumes that the roundoffs are independent, while the second one does not. For the latter, we construct a martingale that mirrors the sequential order of computations. Numerical experiments confirm that our bounds are more informative, often by several orders of magnitude, than traditional deterministic bounds, even for small vector dimensions n and very stringent success probabilities. In particular, the probabilistic roundoff error bounds are functions of sqrt(n) rather than of n, thus giving a quantitative confirmation of Wilkinson's intuition. The paper concludes with a critical assessment of the probabilistic approach.
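
    As an illustrative experiment (not the paper's code or bounds), one can compare the measured error of single-precision inner products against a deterministic n*u scaling and a probabilistic sqrt(n)*u scaling, with u the unit roundoff; all names and parameters below are our own assumptions.

```python
# Illustrative experiment: float32 inner products versus float64 reference.
import numpy as np

u = np.finfo(np.float32).eps / 2  # unit roundoff of single precision

def relative_error(n, rng):
    """Error of a float32 inner product versus a float64 reference,
    scaled by sum_i |x_i||y_i| as in standard error analyses."""
    x = rng.uniform(-1, 1, n).astype(np.float32)
    y = rng.uniform(-1, 1, n).astype(np.float32)
    xd, yd = x.astype(np.float64), y.astype(np.float64)
    exact = np.dot(xd, yd)
    computed = float(np.dot(x, y))  # accumulated in single precision
    return abs(computed - exact) / np.dot(np.abs(xd), np.abs(yd))

rng = np.random.default_rng(0)
for n in (10**3, 10**5, 10**7):
    err = max(relative_error(n, rng) for _ in range(5))
    print(f"n={n:>9}  measured={err:.1e}  n*u={n*u:.1e}  sqrt(n)*u={n**0.5*u:.1e}")
```

    On typical hardware the measured error sits far below the deterministic n*u growth and tracks the sqrt(n)*u scaling much more closely, which is the behavior the paper's bounds quantify.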