39 research outputs found

    The quantum version of the shifted power method and its application in quadratic binary optimization

    In this paper, we present a direct quantum adaptation of the classical shifted power method. The method is very similar to iterative phase estimation; however, it does not require any initial estimate of an eigenvector, and, as in the classical case, its convergence and required number of iterations are directly related to the eigengap. If the gap is of order $1/poly(n)$, the algorithm converges to the dominant eigenvalue in $O(poly(n))$ time. The method can potentially be used to solve any eigenvalue-related problem and to find the minimum/maximum of a quantum state in lieu of Grover's search algorithm. In addition, if the solution space of an optimization problem with $n$ parameters can be encoded as the eigenspace of a $2^n$-dimensional unitary operator in $O(poly(n))$ time and the eigengap is not too small, then the solution of such a problem can be found in $O(poly(n))$ time. As an example, we show how to generate the solution space of quadratic unconstrained binary optimization as the eigenvectors of a diagonal unitary matrix using quantum gates, and how to find the solution to the problem.
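    For intuition, the classical shifted power method that the paper quantizes can be sketched in a few lines of NumPy. The function name, shift choice, and convergence test below are illustrative; this is the classical iteration, not the quantum circuit:

```python
import numpy as np

def shifted_power_method(A, shift, tol=1e-10, max_iter=10_000):
    """Classical shifted power method: iterate with B = A + shift*I so the
    target eigenvalue of A becomes the dominant eigenvalue of B. As noted
    above, the iteration count is governed by the eigengap of B."""
    n = A.shape[0]
    B = A + shift * np.eye(n)
    v = np.random.default_rng(0).normal(size=n)
    v /= np.linalg.norm(v)
    for _ in range(max_iter):
        w = B @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:   # converged to the dominant eigenvector
            break
        v = w
    # Rayleigh quotient recovers the corresponding eigenvalue of A itself
    return v @ A @ v, v
```

    Choosing the shift large enough makes the dominant eigenvalue of B positive, which avoids sign oscillation in the iterates.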

    Improvements in Quantum SDP-Solving with Applications

    Following the first paper on quantum algorithms for SDP-solving by Brandão and Svore [Brandão and Svore, 2017] in 2016, rapid developments have been made in quantum optimization algorithms. In this paper we improve and generalize all prior quantum algorithms for SDP-solving and give a simpler, unified framework. We take a new perspective on quantum SDP-solvers and introduce several new techniques. One of these is the quantum operator input model, which generalizes the different input models used in previous work, and essentially any other reasonable input model. This new model assumes that the input matrices are embedded in a block of a unitary operator. In this model we give an $\tilde{O}((\sqrt{m}+\sqrt{n}\gamma)\alpha\gamma^4)$ algorithm, where $n$ is the size of the matrices, $m$ is the number of constraints, $\gamma$ is the reciprocal of the scale-invariant relative precision parameter, and $\alpha$ is a normalization factor of the input matrices. In particular, for standard sparse-matrix access, the above result gives a quantum algorithm with $\alpha = s$. We also improve on recent results of Brandão et al. [Fernando G. S. L. Brandão et al., 2018], who consider the special case w
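    The defining assumption of the quantum operator input model, that the (normalized) input matrix sits in a block of a unitary, has a simple linear-algebra analogue. The dilation below, for a real symmetric A with spectral norm at most alpha, is an illustrative construction, not the paper's circuit:

```python
import numpy as np

def block_encode(A, alpha):
    """Return a unitary whose top-left block is A / alpha, for symmetric A
    with ||A|| <= alpha -- the structural assumption of the quantum
    operator input model (illustrative dilation only)."""
    n = A.shape[0]
    B = A / alpha
    w, V = np.linalg.eigh(np.eye(n) - B @ B)               # I - B^2 is PSD
    C = V @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ V.T  # C = sqrt(I - B^2)
    # [[B, C], [C, -B]] is unitary: B and C commute and B^2 + C^2 = I
    return np.block([[B, C], [C, -B]])
```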

    Variational Quantum Singular Value Decomposition

    Singular value decomposition is central to many problems in engineering and the sciences. Several quantum algorithms have been proposed to determine the singular values and their associated singular vectors of a given matrix. Although these algorithms are promising, the required quantum subroutines and resources are too costly on near-term quantum devices. In this work, we propose a variational quantum algorithm for singular value decomposition (VQSVD). By exploiting the variational principles for singular values and the Ky Fan theorem, we design a novel loss function such that two quantum neural networks (or parameterized quantum circuits) can be trained to learn the singular vectors and output the corresponding singular values. Furthermore, we conduct numerical simulations of VQSVD on random matrices as well as on its application to image compression of handwritten digits. Finally, we discuss applications of our algorithm in recommendation systems and polar decomposition. Our work explores new avenues for quantum information processing beyond the conventional protocols that only work for Hermitian data, and reveals the capability of matrix decomposition on near-term quantum devices.
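    The variational principle behind the VQSVD loss is the Ky Fan theorem: over orthonormal k-frames U and V, the objective $\sum_i u_i^T M v_i$ attains its maximum, the sum of the k largest singular values, exactly at the singular vectors. A classical sketch of this principle (the quantum algorithm instead parameterizes U and V by circuits and adds weights):

```python
import numpy as np

def ky_fan_objective(M, U, V):
    """Unweighted objective underlying the VQSVD loss: sum_i u_i^T M v_i
    for orthonormal columns of U and V (classical sketch only)."""
    return np.trace(U.T @ M @ V)

# Demonstration: the singular vectors maximize the objective.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
Usvd, s, Vt = np.linalg.svd(M)
k = 2
opt = ky_fan_objective(M, Usvd[:, :k], Vt[:k, :].T)
assert np.isclose(opt, s[:k].sum())   # Ky Fan: maximum = sum of top-k singular values
```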

    Fast quantum subroutines for the simplex method

    We propose quantum subroutines for the simplex method that avoid classical computation of the basis inverse. For an $m \times n$ constraint matrix with at most $d_c$ nonzero elements per column, at most $d$ nonzero elements per column or row of the basis, basis condition number $\kappa$, and optimality tolerance $\epsilon$, we show that pricing can be performed in $\tilde{O}(\frac{1}{\epsilon}\kappa d \sqrt{n}(d_c n + d m))$ time, where the $\tilde{O}$ notation hides polylogarithmic factors. If the ratio $n/m$ is larger than a certain threshold, the running time of the quantum subroutine can be reduced to $\tilde{O}(\frac{1}{\epsilon}\kappa d^{1.5} \sqrt{d_c} n \sqrt{m})$. The steepest edge pivoting rule also admits a quantum implementation, increasing the running time by a factor of $\kappa^2$. Classically, pricing requires $O(d_c^{0.7} m^{1.9} + m^{2+o(1)} + d_c n)$ time in the worst case using the fastest known algorithm for sparse matrix multiplication, and $O(d_c^{0.7} m^{1.9} + m^{2+o(1)} + m^2 n)$ with steepest edge. Furthermore, we show that the ratio test can be performed in $\tilde{O}(\frac{t}{\delta} \kappa d^2 m^{1.5})$ time, where $t, \delta$ determine a feasibility tolerance; classically, this requires $O(m^2)$ time in the worst case. For well-conditioned sparse problems the quantum subroutines scale better in $m$ and $n$, and may therefore have a worst-case asymptotic advantage. An important feature of our paper is that this asymptotic speedup does not depend on the data being available in some "quantum form": the input of our quantum subroutines is the natural classical description of the problem, and the output is the index of the variables that should leave or enter the basis.
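    The pricing step whose classical cost is quoted above computes reduced costs and selects an entering variable. A minimal dense classical version using the Dantzig rule (the quantum subroutine produces the same kind of output, the index of the entering variable; names below are illustrative):

```python
import numpy as np

def price(A, c, basis):
    """Classical simplex pricing: compute reduced costs c_j - y^T A_j with
    simplex multipliers y = B^{-T} c_B, and return an entering column with
    negative reduced cost, or None if the current basis is optimal."""
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])   # simplex multipliers
    reduced = c - A.T @ y                # reduced costs for all columns
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    j = min(nonbasic, key=lambda j: reduced[j])   # Dantzig rule
    return j if reduced[j] < -1e-9 else None
```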

    A Quantum Interior Point Method for LPs and SDPs

    We present a quantum interior point method with worst-case running time $\widetilde{O}(\frac{n^{2.5}}{\xi^2} \mu \kappa^3 \log(1/\epsilon))$ for SDPs and $\widetilde{O}(\frac{n^{1.5}}{\xi^2} \mu \kappa^3 \log(1/\epsilon))$ for LPs, where the output of our algorithm is a pair of matrices $(S, Y)$ that are $\epsilon$-optimal $\xi$-approximate SDP solutions. The factor $\mu$ is at most $\sqrt{2}\,n$ for SDPs and $\sqrt{2n}$ for LPs, and $\kappa$ is an upper bound on the condition number of the intermediate solution matrices. For the case where the intermediate matrices of the interior point method are well conditioned, our method provides a polynomial speedup over the best known classical SDP solvers and interior-point-based LP solvers, which have worst-case running times of $O(n^6)$ and $O(n^{3.5})$ respectively. Our results build upon recently developed techniques for quantum linear algebra and pave the way for the development of quantum algorithms for a variety of applications in optimization and machine learning.
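    At the core of each interior point iteration is a Newton step on the perturbed KKT conditions; the quantum speedup comes from solving this linear system with quantum linear algebra. A dense classical sketch of one step for an LP in standard form (variable names and the centering parameter sigma are illustrative, not the paper's notation):

```python
import numpy as np

def ipm_newton_step(A, b, c, x, y, s, sigma=0.5):
    """One Newton step of a primal-dual interior point method for the LP
    min c^T x  s.t.  Ax = b, x >= 0 -- the linear system whose solution
    dominates the per-iteration cost (classical sketch only)."""
    m, n = A.shape
    mu = x @ s / n                           # duality measure
    # KKT Newton system for the perturbed central-path conditions
    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    r = np.concatenate([
        -(A.T @ y + s - c),                  # dual residual
        -(A @ x - b),                        # primal residual
        -(x * s - sigma * mu),               # centering residual
    ])
    d = np.linalg.solve(K, r)
    return d[:n], d[n:n + m], d[n + m:]      # (dx, dy, ds)
```

    Because the feasibility conditions are linear, a single Newton step restores (or preserves) primal and dual feasibility exactly.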