    Givens rotations for QR decomposition, SVD and PCA over database joins

    This article introduces FiGaRo, an algorithm for computing the upper-triangular matrix in the QR decomposition of the matrix defined by the natural join over relational data. FiGaRo's main novelty is that it pushes the QR decomposition past the join. This leads to several desirable properties. For acyclic joins, it takes time linear in the database size and independent of the join size. Its execution is equivalent to applying a number of Givens rotations proportional to the join size. The ratio of its rounding errors to those of classical QR decomposition algorithms is on par with the ratio of the database size to the join output size. The QR decomposition lies at the core of many linear algebra computations, including the singular value decomposition (SVD) and principal component analysis (PCA). We show how FiGaRo can be used to compute the orthogonal matrix in the QR decomposition, the SVD and the PCA of the join output without materializing the join output. A suite of experiments validates that FiGaRo can outperform the LAPACK library Intel MKL in both runtime and numerical accuracy, by a factor proportional to the gap between the sizes of the join output and input.
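
    The abstract states its cost claim in terms of Givens rotations, so a concrete reference point may help. Below is a minimal NumPy sketch of classical Givens-rotation QR, in which each rotation zeroes one subdiagonal entry; it is the textbook building block, not FiGaRo itself, whose contribution is to reorder these rotations and push them past the join. The function name is illustrative.

        import numpy as np

        def givens_qr(A):
            """Return Q, R with A = Q @ R, built from 2x2 Givens rotations."""
            m, n = A.shape
            R = A.astype(float).copy()
            Q = np.eye(m)
            for j in range(n):                       # eliminate column j below the diagonal
                for i in range(m - 1, j, -1):
                    a, b = R[i - 1, j], R[i, j]
                    r = np.hypot(a, b)
                    if r == 0.0:
                        continue                     # entry already zero
                    c, s = a / r, b / r
                    G = np.array([[c, s], [-s, c]])  # rotation zeroing R[i, j]
                    R[[i - 1, i], j:] = G @ R[[i - 1, i], j:]
                    Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
            return Q, R

        A = np.random.rand(5, 3)
        Q, R = givens_qr(A)
        assert np.allclose(Q @ R, A)                 # R upper triangular, Q orthogonal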

    Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

    Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
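
    The two-stage framework described above is easy to state concretely. The following is a minimal NumPy sketch, assuming a Gaussian test matrix for the sampling stage; the function name and the oversampling parameter are illustrative choices, not the paper's notation.

        import numpy as np

        def randomized_svd(A, k, oversample=10):
            """Approximate rank-k SVD via random sampling plus a deterministic factorization."""
            # Stage 1: range finder -- sample the column space with a Gaussian test matrix.
            n = A.shape[1]
            Omega = np.random.standard_normal((n, k + oversample))
            Y = A @ Omega
            Q, _ = np.linalg.qr(Y)               # orthonormal basis capturing most of A's action
            # Stage 2: compress A to the subspace and factor the small matrix deterministically.
            B = Q.T @ A                          # (k + oversample) x n
            U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
            U = Q @ U_hat                        # lift the left factor back to the original space
            return U[:, :k], s[:k], Vt[:k]

        A = np.random.rand(2000, 300)
        U, s, Vt = randomized_svd(A, k=10)       # approximates the 10 dominant SVD components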

    The Acceleration of Polynomial Methods for Blind Image Deconvolution Using Graphical Processing Units (GPUs)

    Image processing has become an integral part of many areas of study. Unfortunately, the process of capturing images often introduces undesirable blurring and noise, which can make processing the resulting images problematic. Methods are therefore required that attempt to remove the blurring. The main body of work in this field is in Bayesian methods for image deblurring, with many algorithms aimed at solving this problem relying on the Fourier transform. The Fourier transform amplifies noise in the image, which can lead to many of the same problems as blurring. Winkler presented a method of blind image deconvolution (BID) without the Fourier transform, which treated the rows and columns of the blurred image as the coefficients of univariate polynomials. By treating the rows and columns of the image in this way, the problem of computing the blurring function becomes one of computing the greatest common divisor (GCD) of these polynomials. The computation of the GCD of two polynomials is ill-posed, as any noise in the polynomials causes them to be coprime, so an approximate GCD (AGCD) must be computed instead. The computation of an AGCD is computationally expensive, which makes the BID algorithm expensive. The research presented in this thesis investigates the fundamental mathematical processes underpinning such an algorithm and presents multiple methods through which it can be accelerated using a GPU. This acceleration results in an implementation that is 30 times faster than a parallel CPU approach. Accelerating the BID algorithm in this way required a first-of-its-kind GPU-accelerated algorithm for the computation of an AGCD, with multiple novel techniques utilised to achieve this acceleration.
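
    The key observation behind this approach is that blurring a row (or column) of the image amounts to multiplying its polynomial by the blur polynomial, so the blur appears as a (near) common divisor of the blurred rows. The sketch below, a naive Euclidean algorithm with a tolerance, is only a toy stand-in for the structured AGCD methods the thesis actually accelerates; the function name and tolerance are illustrative.

        import numpy as np

        def approx_gcd(p, q, tol=1e-8):
            """Euclidean algorithm on coefficient vectors, stopping once the remainder is tiny."""
            p = np.asarray(p, dtype=float)
            q = np.asarray(q, dtype=float)
            while q.size and np.max(np.abs(q)) > tol:
                _, r = np.polydiv(p, q)          # NumPy trims near-zero leading remainder terms
                p, q = q, r
            return p / p[0]                      # monic normalisation of the (near) common divisor

        kernel = np.array([1.0, 0.5, 0.25])              # the unknown blur, as polynomial coefficients
        row1 = np.convolve(kernel, [1.0, -2.0, 3.0])     # two blurred rows: products with the kernel
        row2 = np.convolve(kernel, [2.0, 1.0])
        print(approx_gcd(row1, row2))                    # recovers approximately [1.0, 0.5, 0.25]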