Approximate Multiplication of Sparse Matrices with Limited Space
Approximate matrix multiplication with limited space has received
ever-increasing attention due to the emergence of large-scale applications.
Recently, based on a popular matrix sketching algorithm---frequent
directions---previous work has introduced co-occurring directions (COD) to
reduce the approximation error for this problem. Although it enjoys a space
complexity of O((m_x+m_y)\ell) for two input matrices
X \in \mathbb{R}^{m_x \times n} and Y \in \mathbb{R}^{m_y \times n}, where
\ell is the sketch size, its time complexity is O(n(m_x+m_y)\ell), which is
still very high for large input matrices. In this paper, we propose to reduce the time complexity
large input matrices. In this paper, we propose to reduce the time complexity
by exploiting the sparsity of the input matrices. The key idea is to employ an
approximate singular value decomposition (SVD) method which can utilize the
sparsity, to reduce the number of QR decompositions required by COD. In this
way, we develop sparse co-occurring directions, which reduces the time
complexity to \widetilde{O}\left((\nnz(X)+\nnz(Y))\ell+n\ell^2\right) in
expectation while keeping the same space complexity as COD, where
\nnz(X) denotes the number of non-zero entries in X. Theoretical analysis
reveals that the approximation error of our algorithm is almost the same as
that of COD. Furthermore, we empirically verify the efficiency and
effectiveness of our algorithm.
Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices
We study the streaming model for approximate matrix multiplication (AMM). We
are interested in the scenario where the algorithm can take only one pass over
the data with limited memory. The state-of-the-art deterministic sketching
algorithm for streaming AMM is the co-occurring directions (COD), which has
much smaller approximation errors than randomized algorithms and outperforms
other deterministic sketching methods empirically. In this paper, we provide a
tighter error bound for COD whose leading term considers the potential
approximate low-rank structure and the correlation of input matrices. We prove
COD is space optimal with respect to our improved error bound. We also propose
a variant of COD for sparse matrices with theoretical guarantees. The
experiments on real-world sparse datasets show that the proposed algorithm is
more efficient than baseline methods.
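As a concrete illustration of the algorithm both abstracts build on, here is a minimal, simplified sketch of co-occurring directions in Python with NumPy. It assumes dense inputs with at least \ell rows each, and it shrinks by the smallest singular value so that each QR/SVD step frees only one column; the published algorithm frees more columns per step to achieve its stated time complexity.

```python
import numpy as np

def cod(X, Y, ell):
    """Simplified co-occurring directions: one pass over the columns of
    X (m_x x n) and Y (m_y x n), maintaining sketches Bx (m_x x ell) and
    By (m_y x ell) such that Bx @ By.T approximates X @ Y.T.

    Assumes m_x >= ell and m_y >= ell so the reduced QR has ell columns.
    """
    mx, n = X.shape
    my, _ = Y.shape
    Bx = np.zeros((mx, ell))
    By = np.zeros((my, ell))
    for i in range(n):
        # find a column slot that is empty in both sketches
        free = np.where(~Bx.any(axis=0) & ~By.any(axis=0))[0]
        if len(free) == 0:
            # sketches are full: orthogonalize, then shrink jointly
            Qx, Rx = np.linalg.qr(Bx)
            Qy, Ry = np.linalg.qr(By)
            U, s, Vt = np.linalg.svd(Rx @ Ry.T)
            # shrink all singular values by the smallest one,
            # zeroing out the last column of both sketches
            s_shrunk = np.maximum(s - s[-1], 0.0)
            Bx = Qx @ (U * np.sqrt(s_shrunk))
            By = Qy @ (Vt.T * np.sqrt(s_shrunk))
            free = np.where(~Bx.any(axis=0) & ~By.any(axis=0))[0]
        Bx[:, free[0]] = X[:, i]
        By[:, free[0]] = Y[:, i]
    return Bx, By
```

This variant still satisfies the deterministic bound ||X Y^T - Bx By^T||_2 <= ||X||_F ||Y||_F / \ell, since each shrink step removes \ell copies of the shrink amount from the nuclear norm of the sketch product.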
Doctor of Philosophy
dissertation
Matrices are essential data representations for many large-scale problems in data analytics; for example, in text analysis under the bag-of-words model, a large corpus of documents is often represented as a matrix. Many data analytic tasks rely on obtaining a summary (a.k.a. sketch) of the data matrix. Using this summary in place of the original data matrix saves on the space usage and run-time of machine learning algorithms. Therefore, sketching a matrix is often a necessary first step in data reduction, and sometimes has direct relationships to core techniques including PCA, LDA, and clustering. In this dissertation, we study the problem of matrix sketching over data streams. We first describe a deterministic matrix sketching algorithm called FrequentDirections. The algorithm is presented an arbitrary input matrix A \in \mathbb{R}^{n \times d} one row at a time. It performs O(d\ell) operations per row and maintains a sketch matrix B \in \mathbb{R}^{\ell \times d} such that for any k < \ell, ||A^T A - B^T B||_2 \le ||A - A_k||_F^2 / (\ell - k) and ||A - \pi_{B_k}(A)||_F^2 \le (1 + k/(\ell - k)) ||A - A_k||_F^2. Here, A_k stands for the minimizer of ||A - A_k||_F over all rank-k matrices (similarly B_k), and \pi_{B_k}(A) is the rank-k matrix resulting from projecting A on the row span of B_k. We show both of these bounds are the best possible for the space allowed, and that the sketch is mergeable and hence trivially parallelizable. We propose several variants of FrequentDirections that improve its error-size tradeoff and nearly match the simple heuristic iterative SVD method in practice. We then describe SparseFrequentDirections for sketching sparse matrices. It resembles the original algorithm in many ways, including having the same optimal asymptotic guarantees with respect to the space-accuracy tradeoff in the streaming setting, but unlike FrequentDirections, which runs in O(nd\ell) time, SparseFrequentDirections runs in \widetilde{O}(\nnz(A)\ell + n\ell^2) time.
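A minimal dense version of the FrequentDirections loop described above might look like the following sketch in Python with NumPy. It is a simplified variant that shrinks by the smallest squared singular value, freeing one sketch row per SVD; the dissertation's variants batch the shrinking for speed.

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A (n x d) into a sketch B (ell x d).

    Whenever the sketch fills up, take an SVD of B and shrink every
    squared singular value by the smallest one, which zeroes out at
    least one row and makes room for the next input row.
    """
    n, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        zero_rows = np.where(~B.any(axis=1))[0]
        if len(zero_rows) == 0:
            # sketch is full: shrink via SVD
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[-1] ** 2
            s_shrunk = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s_shrunk[:, None] * Vt
            zero_rows = np.where(~B.any(axis=1))[0]
        B[zero_rows[0]] = row
    return B
```

With k = 0, the first bound above specializes to ||A^T A - B^T B||_2 <= ||A||_F^2 / \ell, which this variant satisfies deterministically: each shrink removes exactly \ell * delta from ||B||_F^2, so the shrink amounts sum to at most ||A||_F^2 / \ell.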
We then extend our methods to the distributed streaming model, where m distributed sites each observe a distinct stream of data and share a communication channel with a coordinator. The goal is to track an \epsilon-approximation (for \epsilon \in (0,1)) to the norm of the matrix along any direction. We present novel algorithms to address this problem. All our methods satisfy an additive error bound: for any unit vector x, | ||Ax||^2 - ||Bx||^2 | \le \epsilon ||A||_F^2 holds.
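The per-direction guarantee above is equivalent to a spectral-norm condition, since the supremum of | ||Ax||^2 - ||Bx||^2 | over unit vectors x equals ||A^T A - B^T B||_2. A small NumPy check illustrates this; here a truncated SVD is a hypothetical stand-in for the streamed sketch B, not the dissertation's distributed protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 20))

# Stand-in sketch: keep the top-k right singular directions of A.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 10
B = s[:k, None] * Vt[:k]

# Worst-case directional gap, attained at the top eigenvector
# of A^T A - B^T B; the achieved epsilon is the gap relative
# to ||A||_F^2.
gap = np.linalg.norm(A.T @ A - B.T @ B, 2)
eps = gap / np.linalg.norm(A, 'fro') ** 2

# Any single unit direction x respects the additive bound.
x = rng.standard_normal(20)
x /= np.linalg.norm(x)
assert abs(np.linalg.norm(A @ x) ** 2 - np.linalg.norm(B @ x) ** 2) <= gap + 1e-8
```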