21 research outputs found

    Quantum-inspired low-rank stochastic regression with logarithmic dependence on the dimension

    We construct an efficient classical analogue of the quantum matrix inversion algorithm (HHL) for low-rank matrices. Inspired by recent work of Tang, assuming length-square sampling access to input data, we implement the pseudo-inverse of a low-rank matrix and sample from the solution to the problem $Ax = b$ using fast sampling techniques. We implement the pseudo-inverse by finding an approximate singular value decomposition of $A$ via subsampling, then inverting the singular values. In principle, the approach can also be used to apply any desired "smooth" function to the singular values. Since many quantum algorithms can be expressed as a singular value transformation problem, our result suggests that more low-rank quantum algorithms can be effectively "dequantised" into classical length-square sampling algorithms.
    Comment: 10 pages
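
    To make the length-square sampling idea concrete, here is a minimal numpy sketch of the pipeline the abstract describes: subsample rows of $A$ with probability proportional to their squared norms, recover approximate singular pairs from the small sketch, then invert the singular values to apply the pseudo-inverse. The function name is hypothetical, and the use of exact matrix-vector products (rather than sampling throughout, as the paper's access model requires) is an illustrative simplification, not the paper's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def length_square_pseudoinverse_apply(A, b, r, threshold=1e-6):
        """Approximate A^+ b via a length-square row sketch (illustrative)."""
        m, n = A.shape
        row_norms2 = np.einsum("ij,ij->i", A, A)       # ||A_i||^2 for each row
        fro2 = row_norms2.sum()                        # ||A||_F^2
        p = row_norms2 / fro2                          # length-square distribution
        idx = rng.choice(m, size=r, p=p)               # sample r row indices
        # Rescale rows so that S^T S is an unbiased estimator of A^T A.
        S = A[idx] / np.sqrt(r * p[idx])[:, None]
        # SVD of the small r x n sketch gives approximate right singular
        # vectors V and singular values sigma of A.
        _, sigma, Vt = np.linalg.svd(S, full_matrices=False)
        keep = sigma > threshold
        # A^+ b ~= V diag(1/sigma^2) V^T A^T b, since V, sigma come from the
        # square root of A^T A and A^+ = V Sigma^{-1} U^T with U = A V Sigma^{-1}.
        z = Vt[keep] @ (A.T @ b)
        return Vt[keep].T @ (z / sigma[keep] ** 2)

    # Tiny demo on a synthetic low-rank consistent system.
    A = rng.standard_normal((500, 4)) @ rng.standard_normal((4, 50))
    b = A @ rng.standard_normal(50)
    x = length_square_pseudoinverse_apply(A, b, r=100)
    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # shrinks as r grows
    ```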

    A CS guide to the quantum singular value transformation

    We present a simplified exposition of some pieces of [Gily\'en, Su, Low, and Wiebe, STOC'19, arXiv:1806.01838], which introduced a quantum singular value transformation (QSVT) framework for applying polynomial functions to block-encoded matrices. The QSVT framework has garnered substantial recent interest from the quantum algorithms community, as it was demonstrated by [GSLW19] to encapsulate many existing algorithms naturally phrased as an application of a matrix function. First, we posit that the lifting of quantum signal processing (QSP) to QSVT is better viewed not through Jordan's lemma (as was suggested by [GSLW19]) but as an application of the cosine-sine decomposition, which can be thought of as a more explicit and stronger version of Jordan's lemma. Second, we demonstrate that the constructions of bounded polynomial approximations given in [GSLW19], which use a variety of ad hoc approaches drawing from Fourier analysis, Chebyshev series, and Taylor series, can be unified under the framework of truncation of Chebyshev series, and indeed can in large part be matched via a bounded variant of a standard meta-theorem from [Trefethen, 2013]. We hope this work finds use in the community as a companion guide for understanding and applying the powerful framework of [GSLW19].
    Comment: 32 pages; v2: QSVT proofs more self-contained, additional result separating bounded and unbounded polynomial approximation
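
    The Chebyshev-truncation viewpoint is easy to see numerically. The snippet below is an illustration of the general meta-theorem (not the paper's bounded variant): interpolating a function that is smooth on $[-1, 1]$ in Chebyshev points yields a polynomial whose sup-norm error decays rapidly with the degree, which is the behavior [Trefethen, 2013] formalizes.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    f = lambda x: np.exp(x)             # any function analytic on [-1, 1]

    xs = np.cos(np.pi * np.arange(2000) / 1999)  # dense test grid in [-1, 1]
    for d in (2, 4, 8, 16):
        # Interpolation in Chebyshev points is nearly as good as the truncated
        # Chebyshev series itself (a standard fact from approximation theory).
        coeffs = C.chebinterpolate(f, d)
        err = np.max(np.abs(f(xs) - C.chebval(xs, coeffs)))
        print(f"degree {d:2d}: sup-norm error ~ {err:.2e}")
    ```

    For analytic functions the printed errors decay geometrically in the degree, which is why low-degree bounded polynomial approximations suffice for so many QSVT applications.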

    An Improved Classical Singular Value Transformation for Quantum Machine Learning

    We study quantum speedups in quantum machine learning (QML) by analyzing the quantum singular value transformation (QSVT) framework. QSVT, introduced by [GSLW, STOC'19, arXiv:1806.01838], unifies all major types of quantum speedup; in particular, a wide variety of QML proposals are applications of QSVT on low-rank classical data. We challenge these proposals by providing a classical algorithm that matches the performance of QSVT in this regime up to a small polynomial overhead. We show that, given a matrix $A \in \mathbb{C}^{m\times n}$, a vector $b \in \mathbb{C}^{n}$, a bounded degree-$d$ polynomial $p$, and linear-time pre-processing, we can output a description of a vector $v$ such that $\|v - p(A)b\| \leq \varepsilon\|b\|$ in $\widetilde{\mathcal{O}}(d^{11} \|A\|_{\mathrm{F}}^4 / (\varepsilon^2 \|A\|^4))$ time. This improves upon the best known classical algorithm [CGLLTW, STOC'20, arXiv:1910.06151], which requires $\widetilde{\mathcal{O}}(d^{22} \|A\|_{\mathrm{F}}^6 / (\varepsilon^6 \|A\|^6))$ time, and narrows the gap with QSVT, which, after linear-time pre-processing to load input into a quantum-accessible memory, can estimate the magnitude of an entry of $p(A)b$ to $\varepsilon\|b\|$ error in $\widetilde{\mathcal{O}}(d \|A\|_{\mathrm{F}} / (\varepsilon \|A\|))$ time. Our key insight is to combine the Clenshaw recurrence, an iterative method for computing matrix polynomials, with sketching techniques to simulate QSVT classically. We introduce several new classical techniques in this work, including (a) a non-oblivious matrix sketch for approximately preserving bilinear forms, (b) a new stability analysis for the Clenshaw recurrence, and (c) a new technique to bound arithmetic progressions of the coefficients appearing in the Chebyshev series expansion of bounded functions, each of which may be of independent interest.
    Comment: 62 pages; v3: fixed bug, runtime exponent now 11 instead of 9; v2: revised abstract to clarify result
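
    For readers unfamiliar with the Clenshaw recurrence at the heart of this result, the sketch below evaluates $p(A)b$ from Chebyshev coefficients using only matrix-vector products with $A$. It is the plain textbook recurrence in exact arithmetic, without the paper's sketching or stability machinery.

    ```python
    import numpy as np

    def clenshaw_matrix(c, A, b):
        """Evaluate sum_k c[k] T_k(A) b via the Clenshaw recurrence.

        Assumes ||A|| <= 1 so the Chebyshev polynomials T_k stay bounded.
        """
        u_next = np.zeros_like(b)   # plays the role of u_{k+2}
        u_curr = np.zeros_like(b)   # plays the role of u_{k+1}
        for k in range(len(c) - 1, 0, -1):
            u_curr, u_next = c[k] * b + 2.0 * (A @ u_curr) - u_next, u_curr
        # Final step: p(A) b = c_0 b + A u_1 - u_2, using T_0 = 1.
        return c[0] * b + A @ u_curr - u_next

    # Quick check against direct evaluation on a small symmetric contraction.
    rng = np.random.default_rng(1)
    M = rng.standard_normal((6, 6)); M = (M + M.T) / 2
    A = M / (np.linalg.norm(M, 2) * 1.1)          # ensure ||A|| < 1
    b = rng.standard_normal(6)
    c = np.array([0.5, 0.2, -0.1, 0.05])
    lam, V = np.linalg.eigh(A)
    direct = V @ (np.polynomial.chebyshev.chebval(lam, c) * (V.T @ b))
    print(np.allclose(clenshaw_matrix(c, A, b), direct))   # True
    ```

    The paper's contribution is, in part, showing that this recurrence remains stable when each matrix-vector product is replaced by a cheap sketched estimate.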

    An improved quantum-inspired algorithm for linear regression

    We give a classical algorithm for linear regression analogous to the quantum matrix inversion algorithm [Harrow, Hassidim, and Lloyd, Physical Review Letters'09] for low-rank matrices [Wossnig et al., Physical Review Letters'18], when the input matrix $A$ is stored in a data structure applicable for QRAM-based state preparation. Namely, given an $A \in \mathbb{C}^{m\times n}$ with minimum singular value $\sigma$ that supports certain efficient $\ell_2$-norm importance sampling queries, along with a $b \in \mathbb{C}^m$, we can output a description of an $x \in \mathbb{C}^n$ such that $\|x - A^+b\| \leq \varepsilon\|A^+b\|$ in $\tilde{\mathcal{O}}\big(\frac{\|A\|_{\mathrm{F}}^6\|A\|^2}{\sigma^8\varepsilon^4}\big)$ time, improving on previous "quantum-inspired" algorithms in this line of research by a factor of $\frac{\|A\|^{14}}{\sigma^{14}\varepsilon^2}$ [Chia et al., STOC'20]. The algorithm is stochastic gradient descent, and the analysis bears similarities to those of optimization algorithms for regression in the usual setting [Gupta and Sidford, NeurIPS'18]. Unlike earlier works, this is a promising avenue that could lead to feasible implementations of classical regression in a quantum-inspired setting, for comparison against future quantum computers.
    Comment: 16 pages, bug fixes
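
    As a rough illustration of the algorithmic template (row-sampled stochastic gradient descent), the sketch below minimizes $\|Ax - b\|^2$ with $\ell_2$-norm importance sampling. It operates on explicit numpy arrays rather than the paper's sampling data structure, and the step size and update are generic choices rather than the paper's tuned scheme; with this particular step size the update coincides with randomized Kaczmarz.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def importance_sampled_sgd(A, b, steps=20000, eta=None):
        """SGD for least squares: sample row i with prob ||A_i||^2 / ||A||_F^2,
        reweight by 1/p_i so the gradient estimate is unbiased for A^T (Ax - b)."""
        m, n = A.shape
        row_norms2 = np.einsum("ij,ij->i", A, A)
        fro2 = row_norms2.sum()
        p = row_norms2 / fro2
        eta = eta if eta is not None else 1.0 / fro2    # conservative step size
        x = np.zeros(n)
        for _ in range(steps):
            i = rng.choice(m, p=p)
            g = A[i] * (A[i] @ x - b[i]) / p[i]         # unbiased stochastic gradient
            x -= eta * g
        return x

    # Demo on a consistent low-rank system; starting from 0, the iterates
    # converge toward the minimum-norm solution A^+ b.
    A = rng.standard_normal((300, 6)) @ rng.standard_normal((6, 30))
    b = A @ rng.standard_normal(30)
    x = importance_sampled_sgd(A, b)
    print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))  # near 0
    ```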

    Optimal learning of quantum Hamiltonians from high-temperature Gibbs states

    We study the problem of learning a Hamiltonian $H$ to precision $\varepsilon$, supposing we are given copies of its Gibbs state $\rho = \exp(-\beta H)/\operatorname{Tr}(\exp(-\beta H))$ at a known inverse temperature $\beta$. Anshu, Arunachalam, Kuwahara, and Soleimanifar (Nature Physics, 2021, arXiv:2004.07266) recently studied the sample complexity (number of copies of $\rho$ needed) of this problem for geometrically local $N$-qubit Hamiltonians. In the high-temperature (low $\beta$) regime, their algorithm has sample complexity $\operatorname{poly}(N, 1/\beta, 1/\varepsilon)$ and can be implemented with polynomial, but suboptimal, time complexity. In this paper, we study the same question for a more general class of Hamiltonians. We show how to learn the coefficients of a Hamiltonian to error $\varepsilon$ with sample complexity $S = O(\log N/(\beta\varepsilon)^{2})$ and time complexity linear in the sample size, $O(SN)$. Furthermore, we prove a matching lower bound showing that our algorithm's sample complexity is optimal, and hence our time complexity is also optimal. In the appendix, we show that virtually the same algorithm can be used to learn $H$ from a real-time evolution unitary $e^{-itH}$ in a small-$t$ regime with similar sample and time complexity.
    Comment: 59 pages; v2: incorporated reviewer comments, improved exposition of appendix
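
    The high-temperature intuition behind such results can be seen in a toy example: to first order in $\beta$, $\rho = (I - \beta H)/2^N + O(\beta^2)$, so for $H = \sum_a \lambda_a P_a$ with pairwise-distinct Pauli terms, $\operatorname{Tr}(\rho P_a) = -\beta\lambda_a + O(\beta^2)$ and each coefficient can be read off from one expectation value. The snippet below demonstrates this first-order effect with exact density matrices; it is not the paper's estimator, which works from finitely many copies and controls the higher-order terms.

    ```python
    import numpy as np

    I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

    def kron_all(ops):
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # 3-qubit toy Hamiltonian: H = 0.7 Z1 Z2 - 0.4 Z2 Z3 + 0.3 X2
    terms = [kron_all([Z, Z, I2]), kron_all([I2, Z, Z]), kron_all([I2, X, I2])]
    lam = np.array([0.7, -0.4, 0.3])
    H = sum(l * P for l, P in zip(lam, terms))

    beta = 0.05                                   # high temperature = small beta
    evals, evecs = np.linalg.eigh(H)
    rho = (evecs * np.exp(-beta * evals)) @ evecs.T
    rho /= np.trace(rho)                          # Gibbs state exp(-beta H)/Z

    est = np.array([-np.trace(rho @ P).real / beta for P in terms])
    print(lam, est)   # estimates match lam up to higher-order corrections in beta
    ```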

    Quantum-Inspired Algorithms for Solving Low-Rank Linear Equation Systems with Logarithmic Dependence on the Dimension

    We present two efficient classical analogues of the quantum matrix inversion algorithm [16] for low-rank matrices. Inspired by recent work of Tang [27], assuming length-square sampling access to input data, we implement the pseudo-inverse of a low-rank matrix, allowing us to sample from the solution to the problem $Ax = b$ using fast sampling techniques. We construct implicit descriptions of the pseudo-inverse by finding an approximate singular value decomposition of $A$ via subsampling, then inverting the singular values. In principle, our approaches can also be used to apply any desired “smooth” function to the singular values. Since many quantum algorithms can be expressed as a singular value transformation problem [15], our results indicate that more low-rank quantum algorithms can be effectively “dequantised” into classical length-square sampling algorithms.
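
    To illustrate the “smooth function of the singular values” claim, the sketch below (with the same simplifications as the pseudo-inverse sketch earlier: explicit arrays and exact matrix-vector products, which the paper's access model avoids) applies a generic $f$ to the approximate singular values found by row subsampling, using the identity $U f(\Sigma) V^{\top} b = A V \operatorname{diag}(f(\sigma)/\sigma) V^{\top} b$ for $A = U \Sigma V^{\top}$. The function name and the particular smooth step $f$ are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def sampled_svt_apply(A, b, f, r, threshold=1e-6):
        """Approximate U f(Sigma) V^T b using a length-square row sketch of A."""
        m, _ = A.shape
        p = np.einsum("ij,ij->i", A, A)
        p /= p.sum()                                  # length-square distribution
        idx = rng.choice(m, size=r, p=p)
        S = A[idx] / np.sqrt(r * p[idx])[:, None]     # rescaled sketch, S^T S ~ A^T A
        _, sigma, Vt = np.linalg.svd(S, full_matrices=False)
        keep = sigma > threshold
        sigma, Vt = sigma[keep], Vt[keep]
        # f applied to singular values: U f(Sigma) V^T b = A V diag(f(sigma)/sigma) V^T b,
        # since U = A V Sigma^{-1}.
        return A @ (Vt.T @ ((f(sigma) / sigma) * (Vt @ b)))

    # Example: a smooth step that suppresses small singular values.
    A = rng.standard_normal((400, 5)) @ rng.standard_normal((5, 40))
    b = rng.standard_normal(40)
    smooth_step = lambda s: 1.0 / (1.0 + np.exp(-4.0 * (s - 1.0)))
    y = sampled_svt_apply(A, b, smooth_step, r=120)
    ```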