
    Sublinear Time Numerical Linear Algebra for Structured Matrices

    We show how to solve a number of problems in numerical linear algebra, such as least squares regression, $\ell_p$-regression for any $p \geq 1$, low rank approximation, and kernel regression, in time $T(A) \cdot \mathrm{poly}(\log(nd))$, where for a given input matrix $A \in \mathbb{R}^{n \times d}$, $T(A)$ is the time needed to compute $A \cdot y$ for an arbitrary vector $y \in \mathbb{R}^d$. Since $T(A) \leq O(\mathrm{nnz}(A))$, where $\mathrm{nnz}(A)$ denotes the number of non-zero entries of $A$, the time is no worse, up to polylogarithmic factors, than that of all of the recent advances for such problems that run in input-sparsity time. However, for many applications, $T(A)$ can be much smaller than $\mathrm{nnz}(A)$, yielding significantly sublinear time algorithms. For example, in the overconstrained $(1+\epsilon)$-approximate polynomial interpolation problem, $A$ is a Vandermonde matrix and $T(A) = O(n \log n)$; in this case our running time is $n \cdot \mathrm{poly}(\log n) + \mathrm{poly}(d/\epsilon)$, and we recover the results of \cite{avron2013sketching} as a special case. For overconstrained autoregression, a common problem arising in dynamical systems, $T(A) = O(n \log n)$ as well, and we immediately obtain $n \cdot \mathrm{poly}(\log n) + \mathrm{poly}(d/\epsilon)$ time. For kernel autoregression, we significantly improve the running time of prior algorithms for general kernels. For the important case of autoregression with the polynomial kernel and an arbitrary target vector $b \in \mathbb{R}^n$, we obtain even faster algorithms. Our algorithms show that, perhaps surprisingly, most of these optimization problems do not require much more time than a polylogarithmic number of matrix-vector multiplications.
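
    As a concrete illustration of the $T(A) \ll \mathrm{nnz}(A)$ phenomenon (not an example drawn from the paper itself), the sketch below uses a circulant matrix, a standard structured family whose matrix-vector product costs $O(n \log n)$ via the FFT even though the dense matrix has $n^2$ nonzero entries. The helper `circulant_matvec` is illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import circulant

# A circulant matrix A is fully described by its first column c:
# A[i, j] = c[(i - j) mod n]. Although A has n^2 nonzero entries,
# A @ y is a circular convolution of c with y, computable in
# O(n log n) with the FFT, since the discrete Fourier basis
# diagonalizes every circulant matrix.

def circulant_matvec(c, y):
    """Illustrative helper: compute circulant(c) @ y in O(n log n)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(y)))

rng = np.random.default_rng(0)
n = 1 << 12
c = rng.standard_normal(n)
y = rng.standard_normal(n)

# Dense reference multiply: O(n^2) time and storage.
A = circulant(c)
assert np.allclose(A @ y, circulant_matvec(c, y))
```

    Here $T(A) = O(n \log n)$ while $\mathrm{nnz}(A) = n^2$, so an algorithm whose cost is dominated by matrix-vector products runs far below input-sparsity time.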

    Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra

    We create classical (non-quantum) dynamic data structures supporting queries for recommender systems and least-squares regression that are comparable to their quantum analogues. De-quantizing such algorithms has received a flurry of attention in recent years; we obtain sharper bounds for these problems. More significantly, we achieve these improvements by arguing that the previous quantum-inspired algorithms for these problems are doing leverage or ridge-leverage score sampling in disguise; these are powerful and standard techniques in randomized numerical linear algebra. With this recognition, we are able to employ the large body of work in numerical linear algebra to obtain algorithms for these problems that are simpler or faster (or both) than existing approaches.
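
    A minimal sketch of the leverage score sampling idea referenced above, assuming the standard QR-based definition of leverage scores. Note the paper's contribution concerns dynamic data structures that produce such samples implicitly; the explicit $O(nd^2)$ factorization below is for illustration only.

```python
import numpy as np

# Leverage score sampling for least squares. Row i's leverage score
# is the squared norm of row i of Q, where A = QR is a thin QR
# factorization; it measures how influential row i is for the
# regression. Sampling rows with these probabilities (and reweighting)
# yields a much smaller problem whose solution approximates
# argmin_x ||Ax - b||_2.

rng = np.random.default_rng(1)
n, d = 20_000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

Q, _ = np.linalg.qr(A)              # thin QR, O(nd^2): illustration only
lev = np.sum(Q**2, axis=1)          # leverage scores; they sum to d
p = lev / lev.sum()

m = 2_000                           # number of sampled rows
idx = rng.choice(n, size=m, p=p)
w = 1.0 / np.sqrt(m * p[idx])       # reweight so the sketch is unbiased
As, bs = w[:, None] * A[idx], w * b[idx]

x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_samp = np.linalg.lstsq(As, bs, rcond=None)[0]
print(np.linalg.norm(x_full - x_samp) / np.linalg.norm(x_full))
```

    Sampling roughly $O(d \log d / \epsilon)$ rows this way gives a $(1+\epsilon)$-approximate least-squares solution with high probability, which is the standard guarantee the paper leverages.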
