Sublinear Time Numerical Linear Algebra for Structured Matrices
We show how to solve a number of problems in numerical linear algebra, such
as least squares regression, $\ell_p$-regression for any $p \geq 1$, low rank
approximation, and kernel regression, in time $T(A) \cdot \poly(\log(nd))$, where for
a given input matrix $A \in \mathbb{R}^{n \times d}$, $T(A)$ is the time needed
to compute $A \cdot y$ for an arbitrary vector $y \in \mathbb{R}^d$. Since $T(A)
\leq O(\nnz(A))$, where $\nnz(A)$ denotes the number of non-zero entries of
$A$, the time is no worse, up to polylogarithmic factors, than all of the recent
advances for such problems that run in input-sparsity time. However, for many
applications, $T(A)$ can be much smaller than $\nnz(A)$, yielding significantly
sublinear time algorithms. For example, in the overconstrained
$(1+\epsilon)$-approximate polynomial interpolation problem, $A$ is a
Vandermonde matrix and $T(A) = O(n \log n)$; in this case our running time is
$n \cdot \poly(\log n) + \poly(d/\epsilon)$ and we recover the results of
\cite{avron2013sketching} as a special case. For overconstrained
autoregression, which is a common problem arising in dynamical systems,
$T(A) = O(n \log n)$, and we immediately obtain $n \cdot \poly(\log n) +
\poly(d/\epsilon)$ time. For kernel autoregression, we significantly improve
the running time of prior algorithms for general kernels. For the important
case of autoregression with the polynomial kernel and arbitrary target vector
$b$, we obtain even faster algorithms. Our algorithms show that,
perhaps surprisingly, most of these optimization problems do not require much
more time than that of a polylogarithmic number of matrix-vector
multiplications.
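As a minimal illustration (mine, not the paper's algorithm) of why $T(A)$ can be far below $\nnz(A)$: a circulant matrix admits an $O(n \log n)$ matrix-vector product via the FFT, even though it has $n^2$ nonzero entries. Toeplitz matrices, such as the design matrices arising in autoregression, embed into circulant matrices of twice the size, which is one source of the fast matrix-vector multiplication the abstract relies on.

```python
import numpy as np

def circulant_matvec(c, y):
    """Multiply the circulant matrix with first column c by y in
    O(n log n) time via the FFT (circular convolution theorem),
    instead of forming the O(n^2) dense product."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(y)))

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)
y = rng.standard_normal(n)

# Dense circulant matrix for comparison: column j is c shifted down by j.
C = np.column_stack([np.roll(c, j) for j in range(n)])

fast = circulant_matvec(c, y)
dense = C @ y
assert np.allclose(fast, dense)
```

Here the FFT-based product agrees with the dense one while touching only $O(n \log n)$ operations, so for such structured $A$ the sketch-and-solve framework above runs in sublinear time relative to $\nnz(A)$.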
Quantum-Inspired Algorithms from Randomized Numerical Linear Algebra
We create classical (non-quantum) dynamic data structures supporting queries
for recommender systems and least-squares regression that are comparable to
their quantum analogues. De-quantizing such algorithms has received a flurry of
attention in recent years; we obtain sharper bounds for these problems. More
significantly, we achieve these improvements by arguing that the previous
quantum-inspired algorithms for these problems are doing leverage or
ridge-leverage score sampling in disguise; these are powerful and standard
techniques in randomized numerical linear algebra. With this recognition, we
are able to employ the large body of work in numerical linear algebra to obtain
algorithms for these problems that are simpler or faster (or both) than
existing approaches.
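A minimal sketch (my illustration under standard assumptions, not the paper's dynamic data structure) of the leverage score sampling the abstract refers to: each row of $A$ is sampled with probability proportional to its leverage score, the squared row norm of the left singular factor $U$, and the subsampled, rescaled problem is solved in place of the full least-squares problem.

```python
import numpy as np

def leverage_scores(A):
    """Leverage score of row i is the squared norm of row i of U,
    where A = U S V^T is a thin SVD; the scores sum to rank(A)."""
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

def sampled_least_squares(A, b, m, rng):
    """Sample m rows with probability proportional to leverage scores,
    rescale rows to keep the sketch unbiased, and solve the small problem."""
    n = A.shape[0]
    p = leverage_scores(A)
    p = p / p.sum()
    idx = rng.choice(n, size=m, replace=True, p=p)
    w = 1.0 / np.sqrt(m * p[idx])  # standard importance-sampling rescaling
    SA, Sb = w[:, None] * A[idx], w * b[idx]
    return np.linalg.lstsq(SA, Sb, rcond=None)[0]

rng = np.random.default_rng(1)
n, d = 2000, 5
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
x_approx = sampled_least_squares(A, b, m=400, rng=rng)
# The sketched solution should land close to the exact least-squares solution.
assert np.linalg.norm(x_approx - x_exact) < 0.1 * np.linalg.norm(x_exact)
```

Computing scores via a full SVD is of course only for exposition; the point of the randomized numerical linear algebra literature the authors invoke is that (ridge-)leverage scores can be approximated much faster, which is what yields the sharper bounds claimed above.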