    Superfast Refinement of Low Rank Approximation of a Matrix

    Low rank approximation (LRA) of a matrix is a hot subject of modern computations. In applications to Big Data mining and analysis the input matrices are usually so immense that one must apply superfast algorithms, which access only a tiny fraction of the input entries and involve far fewer memory cells and flops than the input matrix has entries. Recently we devised and analyzed some superfast LRA algorithms. In this paper we extend a classical algorithm for iterative refinement of the solution of a linear system of equations to superfast refinement of a crude but reasonably close LRA; we support our superfast refinement algorithm with superfast heuristic recipes for a posteriori estimation of the errors of LRA and with superfast back-and-forth transition between any LRA of a matrix and its SVD. Our algorithm of iterative refinement of LRA is the first attempt of this kind and should motivate further effort in that direction, but already our initial tests are in good accordance with our formal study.
    Comment: 12.5 pages, 1 table and 1 figure
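
    The following is a minimal illustrative sketch, not the paper's superfast algorithm, of what an iterative-refinement loop for LRA can look like in Python/NumPy: it repeatedly forms the residual of the current low-rank factorization, compresses that residual, and folds the correction back into the factors. The dense SVD calls read every matrix entry, so this deliberately ignores the sublinear-cost aspect that is the point of the paper; the function names, the rank parameter r, and the iteration count are our own assumptions.

```python
import numpy as np

def truncated_svd(M, r):
    """Rank-r truncation of M via a dense SVD (not sublinear cost)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]

def refine_lra(M, A, B, r, num_iters=3):
    """Illustrative refinement loop: approximate the residual of the
    current LRA A @ B and fold the low-rank correction back in."""
    for _ in range(num_iters):
        E = M - A @ B                       # residual of the current LRA
        dA, dB = truncated_svd(E, r)        # low-rank correction of the residual
        A, B = truncated_svd(A @ B + dA @ dB, r)   # re-compress to rank r
    return A, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r, m, n = 5, 200, 150
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exact rank-r target
    noise = 0.1 * rng.standard_normal((m, n))
    A, B = truncated_svd(M + noise, r)      # crude but reasonably close initial LRA
    A, B = refine_lra(M, A, B, r)
    print("relative error after refinement:",
          np.linalg.norm(M - A @ B) / np.linalg.norm(M))
```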

    CUR Low Rank Approximation of a Matrix at Sublinear Cost

    Low rank approximation of a matrix (hereafter LRA) is a highly important area of Numerical Linear and Multilinear Algebra and of Data Mining and Analysis. One can operate with LRA at sublinear cost, that is, by using far fewer memory cells and flops than an input matrix has entries, but no sublinear cost algorithm can compute accurate LRA of worst case input matrices, or even of the matrices of the small families specified in our Appendix. Nevertheless we prove that the celebrated Cross-Approximation (C-A) algorithms, and even more primitive sublinear cost algorithms, output quite accurate LRA for a large subclass of the class of all matrices that admit LRA, and in a sense for most such matrices. Moreover, we accentuate the power of sublinear cost LRA by means of multiplicative pre-processing of an input matrix, which also reveals a link between C-A algorithms and Randomized and Sketching LRA algorithms. Our tests are in good accordance with our formal study.
    Comment: 29 pages, 5 figures, 5 tables. arXiv admin note: text overlap with arXiv:1906.0492
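
    As a concrete illustration of the CUR format discussed above, here is a primitive sublinear-cost sketch in Python/NumPy; it is our own minimal example, not the paper's C-A algorithm. It samples a column subset C and a row subset R uniformly at random and takes the connector U to be the pseudoinverse of their intersection, so only the sampled rows, the sampled columns, and the small intersection block of the input are ever read. The function name and the sampling parameter k are assumptions.

```python
import numpy as np

def cur_lra(M, k, seed=None):
    """Primitive CUR sketch: pick k random columns C and k random rows R
    and set U to the pseudoinverse of their intersection, so M ~ C @ U @ R."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    cols = rng.choice(n, size=k, replace=False)
    rows = rng.choice(m, size=k, replace=False)
    C = M[:, cols]                               # column subset of M
    R = M[rows, :]                               # row subset of M
    U = np.linalg.pinv(M[np.ix_(rows, cols)])    # connector: pseudoinverse of the intersection
    return C, U, R

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    r, m, n = 4, 300, 250
    M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # matrix admitting LRA
    C, U, R = cur_lra(M, k=8, seed=2)
    print("relative error:", np.linalg.norm(M - C @ U @ R) / np.linalg.norm(M))
```

    Uniform sampling of this kind can fail badly on worst-case inputs, as the abstract notes, but it tends to produce accurate LRA when the input admits LRA and is not adversarially structured.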

    Novel Fast Algorithms For Low Rank Matrix Approximation

    Recent advances in matrix approximation have emphasized randomization techniques whose goal is to create a sketch of an input matrix. This sketch, a random submatrix of the input matrix with far fewer rows or columns, still preserves its relevant features. In one such technique, random projections approximate the range of an input matrix. Dimension reduction transforms are obtained by multiplying an input matrix by one or more matrices which can be orthogonal, random, and allow fast multiplication by a vector. The Subsampled Randomized Hadamard Transform (SRHT) is the most popular of these transforms. An m x n matrix can be multiplied by an n x l SRHT matrix in O(mn log l) arithmetic operations, where typically l << min(m, n). This dissertation introduces an alternative, which we call the Subsampled Randomized Approximate Hadamard Transform (SRAHT), for which the complexity of multiplication by an input matrix decreases to O((2n + l log n) m) operations. We also prove that our sublinear cost variants of a popular subspace sampling algorithm output accurate low rank approximation (hereafter LRA) of a large class of inputs. Finally, we introduce new sublinear cost algorithms for CUR LRA, a matrix factorization which consists of a column subset C and a row subset R of an input matrix together with a connector matrix U. We prove that these CUR algorithms provide close LRA with high probability on a random input matrix admitting LRA.
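
    For reference, here is a minimal Python/NumPy sketch of the standard SRHT sketching step that the abstract compares against, not the proposed SRAHT, whose construction is given only in the dissertation. The product M @ Omega with Omega = sqrt(n/l) * D * H * S (D a random +/-1 diagonal, H the n x n Walsh-Hadamard matrix, S a random column selector) is applied via a fast Walsh-Hadamard transform, and an orthonormal basis of the sketch then approximates the range of M. All function names and parameters below are our own illustrative choices.

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform along the last axis
    (its length must be a power of two)."""
    x = np.array(x, dtype=float, copy=True)
    n = x.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[..., i:i + h].copy()
            b = x[..., i + h:i + 2 * h].copy()
            x[..., i:i + h] = a + b
            x[..., i + h:i + 2 * h] = a - b
        h *= 2
    return x

def srht_sketch(M, l, seed=None):
    """Compute M @ Omega for an n x l SRHT matrix Omega = sqrt(n/l) * D * H/sqrt(n) * S."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    assert n & (n - 1) == 0, "n must be a power of two (pad the matrix otherwise)"
    signs = rng.choice([-1.0, 1.0], size=n)        # diagonal of D
    cols = rng.choice(n, size=l, replace=False)    # column selector S
    Y = fwht(M * signs)                            # (M D) H via the fast transform
    return np.sqrt(1.0 / l) * Y[:, cols]           # subsample l columns and rescale

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    m, n, l = 100, 256, 32
    M = rng.standard_normal((m, 5)) @ rng.standard_normal((5, n))  # rank-5 input
    Y = srht_sketch(M, l, seed=4)
    Q, _ = np.linalg.qr(Y)                         # orthonormal basis of the sketched range
    print("range-approximation error:",
          np.linalg.norm(M - Q @ (Q.T @ M)) / np.linalg.norm(M))
```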