
    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed, either explicitly or implicitly, to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
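    The two-stage scheme the abstract describes (random sampling to find a subspace, then a deterministic factorization of the compressed matrix) can be sketched in a few lines of NumPy. This is a minimal illustration of the general idea, not the paper's implementation; the function name, the Gaussian test matrix, and the oversampling parameter `p` are choices made here for the example.

```python
import numpy as np

def randomized_svd(A, k, p=5, seed=0):
    """Sketch of a two-stage randomized SVD:
    Stage 1: sample the range of A with a Gaussian test matrix and
             orthonormalize, giving a basis Q that captures most of
             A's action (p is an oversampling parameter).
    Stage 2: compress A to that subspace and factor the small matrix
             deterministically."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    # Stage 1: random range finder.
    Omega = rng.standard_normal((n, k + p))
    Y = A @ Omega
    Q, _ = np.linalg.qr(Y)
    # Stage 2: deterministic SVD of the (k+p) x n compressed matrix.
    B = Q.T @ A
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat
    return U[:, :k], s[:k], Vt[:k]

# Usage: a numerically low-rank input is recovered almost exactly.
A = np.outer(np.arange(1.0, 101.0), np.arange(1.0, 51.0))  # rank-1, 100 x 50
U, s, Vt = randomized_svd(A, k=3)
err = np.linalg.norm(A - U @ np.diag(s) @ Vt)
```

    Note that the expensive step is the product `A @ Omega`, which is a single pass over the matrix; this is what makes the approach attractive for sparse or out-of-core inputs.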


    Moments of spectral functions: Monte Carlo evaluation and verification

    The subject of the present study is the Monte Carlo path-integral evaluation of the moments of spectral functions. Such moments can be computed by formal differentiation of certain estimating functionals that are infinitely differentiable with respect to time whenever the potential function is arbitrarily smooth. Here, I demonstrate that the numerical differentiation of the estimating functionals can be implemented more successfully by means of pseudospectral methods (e.g., exact differentiation of a Chebyshev polynomial interpolant), which utilize information from the entire interval (-βℏ/2, βℏ/2). The algorithmic detail that leads to robust numerical approximations is the fact that the path-integral action, and not the actual estimating functional, is interpolated. Although the resulting approximation to the estimating functional is nonlinear, the derivatives can be computed from it in a fast and stable way by contour integration in the complex plane, with the help of the Cauchy integral formula (e.g., by Lyness' method). An interesting aspect of the present development is that Hamburger's conditions for a finite sequence of numbers to be a moment sequence provide the necessary and sufficient criteria for the computed data to be compatible with the existence of an inversion algorithm. Finally, the issue of the appearance of the sign problem in the computation of moments, albeit in a milder form than for other quantities, is addressed.
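    The pseudospectral step named above (exact differentiation of a Chebyshev polynomial interpolant over a symmetric interval) can be illustrated with NumPy's `numpy.polynomial.chebyshev` module. This is a toy stand-in, not the paper's path-integral code: the sampled function, the interval half-width, and the interpolation order are all assumptions made for the example.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sample a smooth function at Chebyshev nodes on a symmetric interval
# (standing in for (-beta*hbar/2, beta*hbar/2)), build a Chebyshev
# interpolant, and differentiate the interpolant exactly instead of
# using finite differences.
half_width = 1.0                  # stands in for beta*hbar/2
n = 32                            # number of interpolation nodes
# Chebyshev points of the first kind, mapped to [-half_width, half_width].
x = half_width * np.cos(np.pi * (np.arange(n) + 0.5) / n)
f = np.exp(-x**2)                 # smooth stand-in for the interpolated quantity

poly = C.Chebyshev.fit(x, f, deg=n - 1, domain=[-half_width, half_width])
dpoly = poly.deriv()              # exact derivative of the interpolant

# Compare with the analytic derivative at an interior point.
t = 0.3
approx = dpoly(t)
exact = -2.0 * t * np.exp(-t**2)
err = abs(approx - exact)
```

    For a smooth integrand, the interpolant's derivative converges spectrally fast, which is the advantage over low-order finite differences that the abstract exploits.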

    Fast Covariance Estimation for High-dimensional Functional Data

    For smoothing covariance functions, we propose two fast algorithms that scale linearly with the number of observations per function. Most available methods and software cannot smooth covariance matrices of dimension J × J with J > 500; the recently introduced sandwich smoother is an exception, but it is not adapted to smooth covariance matrices of large dimensions such as J ≥ 10,000. Covariance matrices of order J = 10,000, and even J = 100,000, are becoming increasingly common, e.g., in 2- and 3-dimensional medical imaging and high-density wearable sensor data. We introduce two new algorithms that can handle very large covariance matrices: 1) FACE: a fast implementation of the sandwich smoother, and 2) SVDS: a two-step procedure that first applies singular value decomposition to the data matrix and then smooths the eigenvectors. Compared to existing techniques, these new algorithms are at least an order of magnitude faster in high dimensions and drastically reduce memory requirements. The new algorithms provide instantaneous (a few seconds) smoothing for matrices of dimension J = 10,000 and very fast (< 10 minutes) smoothing for J = 100,000. Although SVDS is simpler than FACE, we provide ready-to-use, scalable R software for FACE. When incorporated into the R package refund, FACE improves the speed of penalized functional regression by an order of magnitude, even for data of normal size (J < 500). We recommend that FACE be used in practice for the analysis of noisy and high-dimensional functional data.
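    The SVDS idea (factor the n × J data matrix first, then smooth only the leading singular vectors rather than the full J × J covariance) can be sketched as follows. This is a rough NumPy illustration, not the authors' R implementation: the paper smooths the eigenvectors with penalized splines, whereas here a crude moving average stands in, and the function name is invented for the example.

```python
import numpy as np

def svds_style_covariance(Y, k, window=5):
    """Sketch of a two-step SVDS-style covariance smoother:
    1) SVD of the centered n x J data matrix (cost ~ n*J*min(n,J),
       never forming the raw J x J covariance);
    2) smooth only the k leading right singular vectors (here with a
       moving average; the real method uses penalized splines)."""
    n, J = Y.shape
    Yc = Y - Y.mean(axis=0)                  # center each column
    U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
    kernel = np.ones(window) / window
    V_smooth = np.array([np.convolve(v, kernel, mode='same') for v in Vt[:k]])
    # Rank-k smoothed covariance estimate V^T Lambda V.
    lam = s[:k]**2 / (n - 1)
    return (V_smooth.T * lam) @ V_smooth

# Usage on a small synthetic example: smooth signal plus noise.
rng = np.random.default_rng(1)
n, J = 50, 200
t = np.linspace(0.0, 1.0, J)
Y = rng.standard_normal((n, 1)) * np.sin(2 * np.pi * t) \
    + 0.1 * rng.standard_normal((n, J))
cov = svds_style_covariance(Y, k=2)
```

    Because only k vectors of length J are smoothed, the smoothing cost grows linearly in J, which is what lets the approach reach J = 10,000 and beyond.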