Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension
We construct an efficient classical analogue of the quantum matrix inversion
algorithm (HHL) for low-rank matrices. Inspired by recent work of Tang,
assuming length-square sampling access to input data, we implement the
pseudoinverse of a low-rank matrix and sample from the solution to the problem
$Ax = b$ using fast sampling techniques. We implement the pseudoinverse by
finding an approximate singular value decomposition of $A$ via subsampling,
then inverting the singular values. In principle, the approach can also be used
to apply any desired "smooth" function to the singular values. Since many
quantum algorithms can be expressed as a singular value transformation problem,
our result suggests that more low-rank quantum algorithms can be effectively
"dequantised" into classical length-square sampling algorithms.
Comment: 10 pages
A CS guide to the quantum singular value transformation
We present a simplified exposition of some pieces of [Gilyén, Su, Low, and
Wiebe, STOC'19, arXiv:1806.01838], which introduced a quantum singular value
transformation (QSVT) framework for applying polynomial functions to
block-encoded matrices. The QSVT framework has garnered substantial recent
interest from the quantum algorithms community, as it was demonstrated by
[GSLW19] to encapsulate many existing algorithms naturally phrased as an
application of a matrix function. First, we posit that the lifting of quantum
signal processing (QSP) to QSVT is better viewed not through Jordan's lemma
(as was suggested by [GSLW19]) but as an application of the cosine-sine
decomposition, which can be thought of as a more explicit and stronger version
of Jordan's lemma. Second, we demonstrate that the constructions of bounded
polynomial approximations given in [GSLW19], which use a variety of ad hoc
approaches drawing from Fourier analysis, Chebyshev series, and Taylor series,
can be unified under the framework of truncation of Chebyshev series, and
indeed, can in large part be matched via a bounded variant of a standard
meta-theorem from [Trefethen, 2013]. We hope this work finds use in the
community as a companion guide for understanding and applying the powerful
framework of [GSLW19].
Comment: 32 pages; v2: QSVT proofs more self-contained, additional result separating bounded and unbounded polynomial approximation
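To make the Chebyshev-truncation viewpoint concrete: for a smooth function on $[-1, 1]$, truncating its Chebyshev expansion yields a rapidly converging bounded polynomial approximation. A minimal NumPy sketch of this standard technique (an illustration, not code from the paper), approximating the truncated series by interpolation at Chebyshev nodes:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def truncated_chebyshev(f, degree, n_nodes=512):
    # Interpolate f at Chebyshev nodes with high degree, then truncate
    # the coefficient vector; for n_nodes >> degree this is numerically
    # indistinguishable from truncating the exact Chebyshev series.
    nodes = np.cos(np.pi * (np.arange(n_nodes) + 0.5) / n_nodes)
    coeffs = C.chebfit(nodes, f(nodes), deg=n_nodes - 1)
    return C.Chebyshev(coeffs[: degree + 1])

# Smooth functions converge geometrically: degree 30 already recovers
# exp(x) on [-1, 1] to near machine precision.
p = truncated_chebyshev(np.exp, degree=30)
xs = np.linspace(-1, 1, 1001)
print(np.max(np.abs(p(xs) - np.exp(xs))))
```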
An Improved Classical Singular Value Transformation for Quantum Machine Learning
We study quantum speedups in quantum machine learning (QML) by analyzing the
quantum singular value transformation (QSVT) framework. QSVT, introduced by
[GSLW, STOC'19, arXiv:1806.01838], unifies all major types of quantum speedup;
in particular, a wide variety of QML proposals are applications of QSVT on
low-rank classical data. We challenge these proposals by providing a classical
algorithm that matches the performance of QSVT in this regime up to a small
polynomial overhead.
We show that, given a matrix $A \in \mathbb{C}^{m \times n}$, a vector $b \in
\mathbb{C}^n$, a bounded degree-$d$ polynomial $p$, and linear-time
pre-processing, we can output a description of a vector $v$ such that
$\|v - p(A)b\| \leq \varepsilon\|b\|$ in
$\widetilde{O}(d^{11} \|A\|_{\mathrm{F}}^4 / (\varepsilon^2 \|A\|^4))$ time.
This improves upon the best known classical algorithm [CGLLTW, STOC'20,
arXiv:1910.06151], which requires
$\widetilde{O}(d^{22} \|A\|_{\mathrm{F}}^6 / (\varepsilon^6 \|A\|^6))$ time, and
narrows the gap with QSVT, which, after linear-time pre-processing to load
input into a quantum-accessible memory, can estimate the magnitude of an entry
of $p(A)b$ to $\varepsilon\|b\|$ error in
$\widetilde{O}(d \|A\|_{\mathrm{F}} / (\varepsilon\|A\|))$ time.
Our key insight is to combine the Clenshaw recurrence, an iterative method
for computing matrix polynomials, with sketching techniques to simulate QSVT
classically. We introduce several new classical techniques in this work,
including (a) a non-oblivious matrix sketch for approximately preserving
bilinear forms, (b) a new stability analysis for the Clenshaw recurrence, and
(c) a new technique to bound arithmetic progressions of the coefficients
appearing in the Chebyshev series expansion of bounded functions, each of which
may be of independent interest.
Comment: 62 pages; v3: fixed bug, runtime exponent now 11 instead of 9; v2: revised abstract to clarify result
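A dense reference implementation of the Clenshaw recurrence conveys the core iteration (this sketches only the standard recurrence; per the abstract, the paper's contribution is running it stably on sketched, sampled matrix-vector products rather than the exact ones used here):

```python
import numpy as np

def clenshaw(coeffs, A, b):
    # Evaluate p(A) @ b, with p given in the Chebyshev basis
    # (p = sum_k coeffs[k] * T_k), using only matrix-vector products.
    y1 = np.zeros_like(b)
    y2 = np.zeros_like(b)
    for a_k in coeffs[:0:-1]:  # k = d, d-1, ..., 1
        y1, y2 = 2 * (A @ y1) - y2 + a_k * b, y1
    return A @ y1 - y2 + coeffs[0] * b

# Check against exact evaluation via eigendecomposition for a symmetric
# A with spectrum in [-1, 1], using exp truncated at Chebyshev degree 30.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = Q @ np.diag(rng.uniform(-1, 1, 100)) @ Q.T
b = rng.standard_normal(100)
coeffs = np.polynomial.chebyshev.chebinterpolate(np.exp, 30)
w, V = np.linalg.eigh(A)
exact = V @ (np.exp(w) * (V.T @ b))
print(np.linalg.norm(clenshaw(coeffs, A, b) - exact))  # ~ 1e-13
```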
An improved quantum-inspired algorithm for linear regression
We give a classical algorithm for linear regression analogous to the quantum
matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review
Letters'09] for low-rank matrices [Wossnig et al., Physical Review Letters'18],
when the input matrix $A$ is stored in a data structure applicable for
QRAM-based state preparation.
Namely, given an $A \in \mathbb{C}^{m \times n}$ with minimum non-zero singular
value $\sigma$ and which supports certain efficient $\ell_2$-norm importance
sampling queries, along with a $b \in \mathbb{C}^m$, we can output a
description of an $x \in \mathbb{C}^n$ such that
$\|x - A^+b\| \leq \varepsilon\|A^+b\|$ in
$\widetilde{O}\big(\|A\|_{\mathrm{F}}^6\|A\|^2/(\sigma^8\varepsilon^4)\big)$
time, improving on previous "quantum-inspired" algorithms in this line of
research by a factor of $\|A\|^{14}/(\sigma^{14}\varepsilon^2)$ [Chia et
al., STOC'20]. The algorithm is stochastic gradient descent, and the analysis
bears similarities to those of optimization algorithms for regression in the
usual setting [Gupta and Sidford, NeurIPS'18]. Unlike earlier works, this is a
promising avenue that could lead to feasible implementations of classical
regression in a quantum-inspired setting, for comparison against future quantum
computers.
Comment: 16 pages; bug fixes
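As a toy illustration of regression by stochastic gradient descent under $\ell_2$-norm row sampling, consider randomized Kaczmarz, a simple SGD-style relative that samples rows with probability proportional to their squared norms, i.e., exactly the sampling access assumed above. This is a hedged simplification, not the paper's algorithm, which must additionally avoid ever reading $A$ in full:

```python
import numpy as np

rng = np.random.default_rng(2)

def kaczmarz(A, b, iters=20000):
    # Randomized Kaczmarz: sample row i with probability
    # ||A_i||^2 / ||A||_F^2 (length-square sampling), then project the
    # iterate onto the hyperplane {x : A_i . x = b_i}.
    p = np.linalg.norm(A, axis=1) ** 2
    p /= p.sum()
    x = np.zeros(A.shape[1])
    for i in rng.choice(A.shape[0], size=iters, p=p):
        x += (b[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
    return x

# Consistent low-rank system: starting from 0, the iterates stay in the
# row space of A, so x converges to the minimum-norm solution A^+ b.
A = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 40))
b = A @ rng.standard_normal(40)
x = kaczmarz(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```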
Optimal learning of quantum Hamiltonians from high-temperature Gibbs states
We study the problem of learning a Hamiltonian $H$ to precision
$\varepsilon$, supposing we are given copies of its Gibbs state
$\rho = e^{-\beta H}/\operatorname{Tr}(e^{-\beta H})$ at a known inverse
temperature $\beta$. Anshu, Arunachalam, Kuwahara, and Soleimanifar (Nature
Physics, 2021, arXiv:2004.07266) recently studied the sample complexity (number
of copies of $\rho$ needed) of this problem for geometrically local $N$-qubit
Hamiltonians. In the high-temperature (low $\beta$) regime, their algorithm has
sample complexity $\mathrm{poly}(N, 1/\beta, 1/\varepsilon)$ and can be
implemented with polynomial, but suboptimal, time complexity.
In this paper, we study the same question for a more general class of
Hamiltonians. We show how to learn the coefficients of a Hamiltonian to error
$\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^2)$ and
time complexity linear in the sample size, $O(SN)$. Furthermore, we prove a
matching lower bound showing that our algorithm's sample complexity is optimal,
and hence our time complexity is also optimal.
In the appendix, we show that virtually the same algorithm can be used to
learn $H$ from a real-time evolution unitary $e^{-itH}$ in a small-$t$ regime
with similar sample and time complexity.
Comment: 59 pages; v2: incorporated reviewer comments, improved exposition of appendices
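A toy one-qubit instance makes the setup concrete (purely illustrative, not the paper's estimator for general Hamiltonians): for $H = \lambda Z$ the Gibbs state satisfies $\langle Z \rangle = -\tanh(\beta\lambda)$, so measuring $Z$ on each copy and inverting recovers $\lambda$, with error decaying like $1/(\beta\sqrt{S})$, matching the $1/(\beta\varepsilon)^2$ sample scaling:

```python
import numpy as np

rng = np.random.default_rng(3)

def learn_coefficient(lam, beta, samples):
    # One-qubit Hamiltonian H = lam * Z at inverse temperature beta:
    # the Gibbs state gives Pr[Z = +1] = (1 - tanh(beta * lam)) / 2,
    # i.e. <Z> = -tanh(beta * lam). Measure Z on each copy, average,
    # and invert the tanh to estimate lam.
    p_plus = (1 - np.tanh(beta * lam)) / 2
    z = rng.choice([1.0, -1.0], size=samples, p=[p_plus, 1 - p_plus])
    return -np.arctanh(np.clip(z.mean(), -0.999, 0.999)) / beta

beta, lam = 0.1, 0.7
for S in [10**3, 10**5, 10**7]:
    # Error shrinks like 1/(beta * sqrt(S)), i.e. S ~ 1/(beta * eps)^2.
    print(S, abs(learn_coefficient(lam, beta, S) - lam))
```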
Quantum-Inspired Algorithms for Solving Low-Rank Linear Equation Systems with Logarithmic Dependence on the Dimension
We present two efficient classical analogues of the quantum matrix inversion algorithm [16] for low-rank matrices. Inspired by recent work of Tang [27], assuming length-square sampling access to input data, we implement the pseudoinverse of a low-rank matrix, allowing us to sample from the solution to the problem Ax = b using fast sampling techniques. We construct implicit descriptions of the pseudoinverse by finding an approximate singular value decomposition of A via subsampling, then inverting the singular values. In principle, our approaches can also be used to apply any desired “smooth” function to the singular values. Since many quantum algorithms can be expressed as a singular value transformation problem [15], our results indicate that more low-rank quantum algorithms can be effectively “dequantised” into classical length-square sampling algorithms.
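For intuition about applying a "smooth" function to the singular values, a dense (decidedly non-sublinear) reference implementation of the singular value transformation takes only a few lines of NumPy; the quantum-inspired algorithms approximate this map through sampling rather than a full SVD. The threshold below is an illustrative choice:

```python
import numpy as np

def svt(A, f):
    # Dense singular value transformation:
    # A = sum_i s_i u_i v_i^T  ->  sum_i f(s_i) u_i v_i^T.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ (f(s)[:, None] * Vt)

# The pseudoinverse is the transform f(s) = 1/s on the non-zero
# singular values, transposed: svt(A, inv).T == A^+.
rng = np.random.default_rng(4)
A = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))  # rank 4
inv = lambda s: np.where(s > 1e-10 * s.max(), 1 / np.maximum(s, 1e-300), 0.0)
P = svt(A, inv).T
print(np.linalg.norm(A @ P @ A - A))  # Moore-Penrose identity, ~ 0
```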