
    A Quantum Interior Point Method for LPs and SDPs

    We present a quantum interior point method with worst-case running time $\widetilde{O}(\frac{n^{2.5}}{\xi^{2}} \mu \kappa^3 \log(1/\epsilon))$ for SDPs and $\widetilde{O}(\frac{n^{1.5}}{\xi^{2}} \mu \kappa^3 \log(1/\epsilon))$ for LPs, where the output of our algorithm is a pair of matrices $(S,Y)$ that are $\epsilon$-optimal $\xi$-approximate SDP solutions. The factor $\mu$ is at most $\sqrt{2}n$ for SDPs and $\sqrt{2n}$ for LPs, and $\kappa$ is an upper bound on the condition number of the intermediate solution matrices. For the case where the intermediate matrices for the interior point method are well conditioned, our method provides a polynomial speedup over the best known classical SDP solvers and interior point based LP solvers, which have worst-case running times of $O(n^{6})$ and $O(n^{3.5})$ respectively. Our results build upon recently developed techniques for quantum linear algebra and pave the way for the development of quantum algorithms for a variety of applications in optimization and machine learning. Comment: 32 pages
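
    To make the accelerated object concrete, below is a minimal classical sketch of the primal-dual Newton iteration that interior point methods repeat along the central path; the quantum algorithm speeds up the linear-system solve at its core using quantum linear algebra. The function name, damping constant, and dense KKT assembly are illustrative assumptions, not details from the paper.

        import numpy as np

        def ipm_lp(A, b, c, x, y, s, sigma=0.5, iters=30):
            # Toy primal-dual path-following method for min c@x s.t. A@x = b, x >= 0.
            # (x, s) must start strictly positive; A is assumed to have full row rank.
            m, n = A.shape
            for _ in range(iters):
                mu = x @ s / n                   # duality measure along the central path
                r_p = b - A @ x                  # primal residual
                r_d = c - A.T @ y - s            # dual residual
                r_c = sigma * mu - x * s         # centering / complementarity residual
                # Assemble and solve the full Newton (KKT) system directly; this
                # linear solve is the bottleneck the quantum method replaces.
                K = np.block([
                    [A,                np.zeros((m, m)), np.zeros((m, n))],
                    [np.zeros((n, n)), A.T,              np.eye(n)],
                    [np.diag(s),       np.zeros((n, m)), np.diag(x)],
                ])
                d = np.linalg.solve(K, np.concatenate([r_p, r_d, r_c]))
                dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
                # Damped step that keeps (x, s) strictly positive.
                alpha = 0.9 * min(1.0, *(-x[dx < 0] / dx[dx < 0]), *(-s[ds < 0] / ds[ds < 0]))
                x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
            return x, y, s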

    Randomized Row and Column Iterative Methods with a Quantum Computer

    We consider quantum implementations of two classical iterative solvers for a system of linear equations: the Kaczmarz method, which uses a row of the coefficient matrix in each iteration step, and the coordinate descent method, which uses a column instead. These two methods are widely applied in big data science due to their very simple iteration schemes. In this paper we use the block-encoding technique and propose fast quantum implementations of these two approaches, under the assumption that the quantum state of each row or each column can be prepared efficiently. The quantum algorithms achieve an exponential speedup in the problem size over the classical versions, while their complexity is nearly linear in the number of steps.
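
    Since the paper quantizes these two classical solvers, a short classical reference implementation may help. Both sample rows or columns with probability proportional to their squared norms, the standard randomized variants (Strohmer-Vershynin Kaczmarz and randomized coordinate descent for least squares); function names and step counts are illustrative.

        import numpy as np

        def randomized_kaczmarz(A, b, steps=1000, seed=0):
            # Each step projects the iterate onto the hyperplane A[i] @ x = b[i]
            # for a row i drawn with probability ||A_i||^2 / ||A||_F^2.
            rng = np.random.default_rng(seed)
            row_norms2 = np.sum(A**2, axis=1)
            p = row_norms2 / row_norms2.sum()
            x = np.zeros(A.shape[1])
            for _ in range(steps):
                i = rng.choice(A.shape[0], p=p)
                x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
            return x

        def randomized_coordinate_descent(A, b, steps=1000, seed=0):
            # Each step exactly minimizes ||A x - b||^2 over one coordinate j,
            # touching only column j of A; the residual r = b - A x is maintained.
            rng = np.random.default_rng(seed)
            col_norms2 = np.sum(A**2, axis=0)
            p = col_norms2 / col_norms2.sum()
            x, r = np.zeros(A.shape[1]), b.astype(float).copy()
            for _ in range(steps):
                j = rng.choice(A.shape[1], p=p)
                delta = A[:, j] @ r / col_norms2[j]
                x[j] += delta
                r -= delta * A[:, j]
            return x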

    Quantum Spectral Clustering

    Spectral clustering is a powerful unsupervised machine learning algorithm for clustering data with non-convex or nested structures. With roots in graph theory, it uses the spectral properties of the Laplacian matrix to project the data into a low-dimensional space where clustering is more efficient. Despite its success in clustering tasks, spectral clustering suffers in practice from a fast-growing running time of $O(n^3)$, where $n$ is the number of points in the dataset. In this work we propose an end-to-end quantum algorithm performing spectral clustering, extending a number of works in quantum machine learning. The quantum algorithm is composed of two parts: the first is the efficient creation of the quantum state corresponding to the projected Laplacian matrix, and the second consists of applying the existing quantum analogue of the $k$-means algorithm. Both steps depend polynomially on the number of clusters, as well as on precision and data parameters arising from quantum procedures, and polylogarithmically on the dimension of the input vectors. Our numerical simulations show an asymptotic linear growth with $n$ when all terms are taken into account, significantly better than the classical cubic growth. This work opens the path to other graph-based quantum machine learning algorithms, as it provides routines for efficient computation and quantum access to the Incidence, Adjacency, and projected Laplacian matrices of a graph.
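
    As a classical baseline, here is a compact sketch of the same pipeline the abstract describes (similarity graph, Laplacian eigenvector embedding, k-means); the Gaussian kernel width and the row normalization of the embedding are common defaults, assumed here rather than taken from the paper.

        import numpy as np
        from sklearn.cluster import KMeans

        def spectral_clustering(X, k, sigma=1.0):
            # Gaussian (RBF) affinities from pairwise squared distances.
            sq = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
            W = np.exp(-sq / (2 * sigma**2))
            np.fill_diagonal(W, 0.0)
            # Symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2};
            # forming and diagonalizing it is where the classical O(n^3) arises.
            d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
            L = np.eye(len(X)) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
            # Embed with the k eigenvectors of the smallest eigenvalues, then cluster.
            _, vecs = np.linalg.eigh(L)
            U = vecs[:, :k]
            U /= np.linalg.norm(U, axis=1, keepdims=True)
            return KMeans(n_clusters=k, n_init=10).fit_predict(U)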

    Improvements in Quantum SDP-Solving with Applications

    Following the first paper on quantum algorithms for SDP-solving by Brandão and Svore [Brandão and Svore, 2017] in 2016, rapid developments have been made on quantum optimization algorithms. In this paper we improve and generalize all prior quantum algorithms for SDP-solving and give a simpler and unified framework. We take a new perspective on quantum SDP-solvers and introduce several new techniques. One of these is the quantum operator input model, which generalizes the different input models used in previous work, and essentially any other reasonable input model. This new model assumes that the input matrices are embedded in a block of a unitary operator. In this model we give an $\widetilde{O}((\sqrt{m}+\sqrt{n}\gamma)\alpha\gamma^4)$ algorithm, where $n$ is the size of the matrices, $m$ is the number of constraints, $\gamma$ is the reciprocal of the scale-invariant relative precision parameter, and $\alpha$ is a normalization factor of the input matrices. In particular, for the standard sparse-matrix access, the above result gives a quantum algorithm with $\alpha=s$. We also improve on recent results of Brandão et al. [Fernando G. S. L. Brandão et al., 2018], who consider the special case w…
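
    The quantum operator input model can be illustrated classically: a matrix $A$ with spectral norm at most $\alpha$ sits in the top-left block of a unitary. The construction below for a Hermitian $A$ is a standard sketch of that idea, not code from the paper; the normalization $\alpha$ plays the role of the factor in the running time.

        import numpy as np

        def block_encode(A):
            # U = [[B, sqrt(I - B^2)], [sqrt(I - B^2), -B]] with B = A / alpha is
            # unitary because B is Hermitian and commutes with sqrt(I - B^2).
            alpha = np.linalg.norm(A, 2)          # spectral-norm normalization factor
            B = A / alpha
            vals, vecs = np.linalg.eigh(B)
            S = (vecs * np.sqrt(np.clip(1 - vals**2, 0, None))) @ vecs.conj().T
            return np.block([[B, S], [S, -B]]), alpha

        # Sanity check: U is unitary and its top-left block recovers A / alpha.
        A = np.array([[2.0, 1.0], [1.0, -1.0]])
        U, alpha = block_encode(A)
        assert np.allclose(U @ U.conj().T, np.eye(4))
        assert np.allclose(alpha * U[:2, :2], A)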

    q-means: A quantum algorithm for unsupervised machine learning

    Quantum machine learning is one of the most promising applications of a full-scale quantum computer. Over the past few years, many quantum machine learning algorithms have been proposed that can potentially offer considerable speedups over the corresponding classical algorithms. In this paper, we introduce q-means, a new quantum algorithm for clustering, which is a canonical problem in unsupervised machine learning. The q-means algorithm has convergence and precision guarantees similar to $k$-means, and it outputs with high probability a good approximation of the $k$ cluster centroids, like the classical algorithm. Given a dataset of $N$ $d$-dimensional vectors $v_i$ (seen as a matrix $V \in \mathbb{R}^{N \times d}$) stored in QRAM, the running time of q-means is $\widetilde{O}\left( k d \frac{\eta}{\delta^2}\kappa(V)\left(\mu(V) + k \frac{\eta}{\delta}\right) + k^2 \frac{\eta^{1.5}}{\delta^2} \kappa(V)\mu(V) \right)$ per iteration, where $\kappa(V)$ is the condition number, $\mu(V)$ is a parameter that appears in quantum linear algebra procedures, and $\eta = \max_{i} \|v_{i}\|^{2}$. For a natural notion of well-clusterable datasets, the running time becomes $\widetilde{O}\left( k^2 d \frac{\eta^{2.5}}{\delta^3} + k^{2.5} \frac{\eta^2}{\delta^3} \right)$ per iteration, which is linear in the number of features $d$, and polynomial in the rank $k$, the maximum square norm $\eta$, and the error parameter $\delta$. Both running times are only polylogarithmic in the number of datapoints $N$. Our algorithm provides substantial savings compared to the classical $k$-means algorithm, which runs in time $O(kdN)$ per iteration, particularly for the case of large datasets.
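
    For reference, one iteration of the classical k-means (Lloyd) update that q-means approximates is sketched below; q-means carries out both the distance estimation and the centroid update quantumly, so assignments and centroids are recovered only $\delta$-approximately.

        import numpy as np

        def kmeans_step(V, centroids):
            # Assignment: squared distances from all N points to all k centroids.
            d2 = np.sum((V[:, None, :] - centroids[None, :, :])**2, axis=-1)
            labels = d2.argmin(axis=1)
            # Update: each centroid becomes the mean of its assigned points
            # (kept unchanged if the cluster is empty).
            new_centroids = np.array([
                V[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                for j in range(len(centroids))
            ])
            return labels, new_centroids

    The classical $O(kdN)$ cost per iteration is visible in the $N \times k$ distance computation; the quantum running time above trades the dependence on $N$ for polylogarithmic factors.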

    Quantum-Inspired Sublinear Algorithm for Solving Low-Rank Semidefinite Programming

    Semidefinite programming (SDP) is a central topic in mathematical optimization with extensive studies on its efficient solvers. In this paper, we present a proof-of-principle sublinear-time algorithm for solving SDPs with low-rank constraints; specifically, given an SDP with $m$ constraint matrices, each of dimension $n$ and rank $r$, our algorithm can compute any entry and efficient descriptions of the spectral decomposition of the solution matrix. The algorithm runs in time $O(m\cdot\mathrm{poly}(\log n, r, 1/\varepsilon))$ given access to a sampling-based low-overhead data structure for the constraint matrices, where $\varepsilon$ is the precision of the solution. In addition, we apply our algorithm to a quantum state learning task as an application. Technically, our approach aligns with 1) SDP solvers based on the matrix multiplicative weight (MMW) framework by Arora and Kale [TOC '12]; 2) the sampling-based dequantizing framework pioneered by Tang [STOC '19]. In order to compute the matrix exponential required in the MMW framework, we introduce two new techniques that may be of independent interest:
    • Weighted sampling: assuming sampling access to each individual constraint matrix $A_{1},\ldots,A_{\tau}$, we propose a procedure that gives a good approximation of $A = A_{1} + \cdots + A_{\tau}$.
    • Symmetric approximation: we propose a sampling procedure that gives the spectral decomposition of a low-rank Hermitian matrix $A$. To the best of our knowledge, this is the first sampling-based algorithm for spectral decomposition, as previous works only give singular values and vectors.
    Comment: 37 pages, 1 figure. To appear in the Proceedings of the 45th International Symposium on Mathematical Foundations of Computer Science (MFCS 2020).
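
    The "sampling-based low-overhead data structure" grants length-squared ($\ell^2$) sampling access. The sketch below shows that access pattern and the basic rescaled-row estimator on which such dequantized algorithms build; it is a generic illustration of the framework, with names and the sketch size s chosen for the example.

        import numpy as np

        def ell2_sample_row(A, rng):
            # Draw a row index with probability ||A_i||^2 / ||A||_F^2.
            row_norms2 = np.sum(np.abs(A)**2, axis=1)
            return rng.choice(A.shape[0], p=row_norms2 / row_norms2.sum())

        def sampled_sketch(A, s, seed=0):
            # Stack s ell^2-sampled, rescaled rows into S; then E[S^H S] = A^H A,
            # the unbiased estimator behind sampling-based low-rank approximation.
            rng = np.random.default_rng(seed)
            row_norms2 = np.sum(np.abs(A)**2, axis=1)
            p = row_norms2 / row_norms2.sum()
            idx = rng.choice(A.shape[0], size=s, p=p)
            return A[idx] / np.sqrt(s * p[idx])[:, None]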

    Fast quantum subroutines for the simplex method

    We propose quantum subroutines for the simplex method that avoid classical computation of the basis inverse. For an $m \times n$ constraint matrix with at most $d_c$ nonzero elements per column, at most $d$ nonzero elements per column or row of the basis, basis condition number $\kappa$, and optimality tolerance $\epsilon$, we show that pricing can be performed in $\tilde{O}(\frac{1}{\epsilon}\kappa d \sqrt{n}(d_c n + d m))$ time, where the $\tilde{O}$ notation hides polylogarithmic factors. If the ratio $n/m$ is larger than a certain threshold, the running time of the quantum subroutine can be reduced to $\tilde{O}(\frac{1}{\epsilon}\kappa d^{1.5} \sqrt{d_c} n \sqrt{m})$. The steepest edge pivoting rule also admits a quantum implementation, increasing the running time by a factor of $\kappa^2$. Classically, pricing requires $O(d_c^{0.7} m^{1.9} + m^{2 + o(1)} + d_c n)$ time in the worst case using the fastest known algorithm for sparse matrix multiplication, and $O(d_c^{0.7} m^{1.9} + m^{2 + o(1)} + m^2 n)$ with steepest edge. Furthermore, we show that the ratio test can be performed in $\tilde{O}(\frac{t}{\delta} \kappa d^2 m^{1.5})$ time, where $t$ and $\delta$ determine a feasibility tolerance; classically, this requires $O(m^2)$ time in the worst case. For well-conditioned sparse problems the quantum subroutines scale better in $m$ and $n$, and may therefore have a worst-case asymptotic advantage. An important feature of our paper is that this asymptotic speedup does not depend on the data being available in some "quantum form": the input of our quantum subroutines is the natural classical description of the problem, and the output is the index of the variables that should leave or enter the basis. Comment: Added discussion on condition number and infeasibilities
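
    To fix terminology, the two simplex steps the paper accelerates are sketched classically below; this textbook implementation applies $B^{-1}$ implicitly via linear solves, exactly the computation the quantum subroutines avoid. The tolerances and the Dantzig entering rule are illustrative choices, not taken from the paper.

        import numpy as np

        def pricing(A, c, basis):
            # Reduced costs c_j - y @ A_j with simplex multipliers y = B^{-T} c_B;
            # returns the entering variable, or None if the basis is optimal.
            y = np.linalg.solve(A[:, basis].T, c[basis])
            nonbasic = [j for j in range(A.shape[1]) if j not in basis]
            reduced = {j: c[j] - y @ A[:, j] for j in nonbasic}
            entering = min(reduced, key=reduced.get)
            return entering if reduced[entering] < -1e-9 else None

        def ratio_test(A, b, basis, entering):
            # Largest step along the entering column that keeps the basic solution
            # feasible; returns the leaving variable, or None if unbounded.
            B = A[:, basis]
            x_B = np.linalg.solve(B, b)
            u = np.linalg.solve(B, A[:, entering])
            ratios = [(x_B[i] / u[i], i) for i in range(len(basis)) if u[i] > 1e-9]
            return basis[min(ratios)[1]] if ratios else None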