Revisiting Co-Occurring Directions: Sharper Analysis and Efficient Algorithm for Sparse Matrices
We study approximate matrix multiplication (AMM) in the streaming model, where the algorithm may take only one pass over the data with limited memory. The state-of-the-art deterministic sketching algorithm for streaming AMM is co-occurring directions (COD), which achieves much smaller approximation error than randomized algorithms and empirically outperforms other deterministic sketching methods. In this paper, we provide a tighter error bound for COD whose leading term accounts for the potential approximate low-rank structure and the correlation of the input matrices. We prove that COD is space optimal with respect to our improved error bound. We also propose a variant of COD for sparse matrices with theoretical guarantees. Experiments on real-world sparse datasets show that the proposed algorithm is more efficient than baseline methods.
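For reference, below is a minimal NumPy sketch of the standard COD update (buffer a column pair per step; when the buffers fill, take QR factorizations, an SVD of the small cross product, and shrink the singular values by the ℓ/2-th one). The function name is illustrative, the code assumes both row dimensions are at least ℓ, and the paper's sparse variant is not reproduced here.

```python
import numpy as np

def cod_sketch(X, Y, ell):
    """Co-occurring directions sketch (illustrative, unoptimized).

    X: (d_x, n) and Y: (d_y, n), streamed column by column; ell: sketch size.
    Assumes d_x >= ell and d_y >= ell. Returns B_x (d_x, ell), B_y (d_y, ell)
    with B_x @ B_y.T approximating X @ Y.T in one pass.
    """
    d_x, n = X.shape
    d_y, _ = Y.shape
    B_x = np.zeros((d_x, ell))
    B_y = np.zeros((d_y, ell))
    filled = 0
    for t in range(n):
        B_x[:, filled] = X[:, t]                 # insert next column pair
        B_y[:, filled] = Y[:, t]
        filled += 1
        if filled == ell:                        # buffers full: shrink step
            Q_x, R_x = np.linalg.qr(B_x)         # reduced QR of each buffer
            Q_y, R_y = np.linalg.qr(B_y)
            U, s, Vt = np.linalg.svd(R_x @ R_y.T)
            delta = s[ell // 2]                  # shrink by the (ell/2)-th value
            s_shrunk = np.sqrt(np.maximum(s - delta, 0.0))
            B_x = Q_x @ (U * s_shrunk)           # rescale left directions
            B_y = Q_y @ (Vt.T * s_shrunk)        # rescale right directions
            filled = ell // 2                    # columns beyond ell/2 are now zero
    return B_x, B_y
```

As a sanity check, `np.linalg.norm(X @ Y.T - B_x @ B_y.T, 2)` measures the spectral-norm AMM error that the bounds above control.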
Revisiting the Nyström Method for Improved Large-Scale Machine Learning
We reconsider randomized algorithms for the low-rank approximation of
symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel
matrices that arise in data analysis and machine learning applications. Our
main results consist of an empirical evaluation of the performance quality and
running time of sampling and projection methods on a diverse suite of SPSD
matrices. Our results highlight complementary aspects of sampling versus
projection methods; they characterize the effects of common data preprocessing
steps on the performance of these algorithms; and they point to important
differences between uniform sampling and nonuniform sampling methods based on
leverage scores. In addition, our empirical results illustrate that existing
theory is so weak that it does not provide even a qualitative guide to
practice. Thus, we complement our empirical results with a suite of worst-case
theoretical bounds for both random sampling and random projection methods.
These bounds are qualitatively superior to existing bounds (e.g., improved additive-error bounds for spectral and Frobenius norm error, and relative-error bounds for trace norm error), and they point to future directions for making these algorithms useful in even larger-scale machine learning applications.
Comment: 60 pages, 15 color figures; updated proof of Frobenius norm bounds, added comparison to projection-based low-rank approximations, and an analysis of the power method applied to SPSD sketches
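For reference, here is a minimal NumPy sketch of the basic uniform-sampling Nyström construction evaluated in work like this: sample c columns of an SPSD matrix A, form the sampled block C and intersection block W, and approximate A by C W⁺ Cᵀ. The leverage-score methods mentioned above would replace the uniform index choice with a nonuniform one; the function name and the toy RBF kernel are illustrative.

```python
import numpy as np

def nystrom(A, c, seed=None):
    """Rank-c Nystrom approximation of an SPSD matrix A via uniform column sampling.

    Returns C (n, c) and W_pinv (c, c) with A ~= C @ W_pinv @ C.T.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    idx = rng.choice(n, size=c, replace=False)  # uniform sampling; leverage-score
                                                # sampling would weight indices instead
    C = A[:, idx]                               # sampled columns
    W = A[np.ix_(idx, idx)]                     # intersection block
    W_pinv = np.linalg.pinv(W)                  # pseudoinverse handles rank deficiency
    return C, W_pinv

# Usage on a toy RBF kernel matrix (illustrative):
rng = np.random.default_rng(0)
pts = rng.standard_normal((500, 10))
A = np.exp(-0.5 * ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
C, W_pinv = nystrom(A, c=50, seed=0)
rel_err = np.linalg.norm(A - C @ W_pinv @ C.T, 'fro') / np.linalg.norm(A, 'fro')
```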