A Quantum Interior Point Method for LPs and SDPs
We present a quantum interior point method with worst case running time
for
SDPs and for LPs, where the output of our algorithm is a pair of matrices
that are -optimal -approximate SDP solutions. The factor
is at most for SDPs and for LPs, and is
an upper bound on the condition number of the intermediate solution matrices.
For the case where the intermediate matrices for the interior point method are
well conditioned, our method provides a polynomial speedup over the best known
classical SDP solvers and interior point based LP solvers, which have a worst
case running time of and respectively. Our results
build upon recently developed techniques for quantum linear algebra and pave
the way for the development of quantum algorithms for a variety of applications
in optimization and machine learning.
Comment: 32 pages
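For readers who want a picture of the classical baseline, below is a minimal sketch of one primal-dual path-following (interior point) iteration for an LP in standard form, in plain NumPy. It is an illustrative toy under stated assumptions, not the paper's quantum method: the function name ipm_lp, the centering parameter sigma, the damping factor, and the dense normal-equations solve are all choices made for brevity, and A is assumed to have full row rank.

```python
import numpy as np

def ipm_lp(A, b, c, iters=50, sigma=0.1):
    """Sketch of a primal-dual path-following method for
    min c^T x  s.t.  Ax = b, x >= 0.  Assumes A has full row rank."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(iters):
        mu = x @ s / n                      # duality measure
        r_p = b - A @ x                     # primal residual
        r_d = c - A.T @ y - s               # dual residual
        r_c = sigma * mu - x * s            # centering residual
        # Newton step via the normal equations: (A diag(x/s) A^T) dy = rhs
        D = x / s
        rhs = r_p - A @ ((r_c - x * r_d) / s)
        dy = np.linalg.solve(A @ (D[:, None] * A.T), rhs)
        ds = r_d - A.T @ dy
        dx = (r_c - x * ds) / s
        # damped step length keeping x and s strictly positive
        alpha = 0.9 * min(1.0,
                          *(-x[dx < 0] / dx[dx < 0]),
                          *(-s[ds < 0] / ds[ds < 0]))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s
```

The per-iteration bottleneck is the linear-system solve for dy; it is this kind of linear algebra that quantum linear algebra techniques aim to accelerate, at the price of a dependence on the condition number of the intermediate matrices, as the abstract notes.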
Randomized Row and Column Iterative Methods with a Quantum Computer
We consider quantum implementations of two classical iterative solvers for
systems of linear equations: the Kaczmarz method, which uses a row of the
coefficient matrix in each iteration step, and the coordinate descent method,
which uses a column instead. These two methods are widely applied in big data
science due to their very simple iteration schemes. In this paper we use the
block-encoding technique and propose fast quantum implementations of these two
approaches, under the assumption that the quantum states of each row or each
column can be prepared efficiently. The quantum algorithms achieve an
exponential speedup in the problem size over the classical versions, while
their complexity is nearly linear in the number of iteration steps.
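As a point of reference, here is a hedged sketch of the two classical iterations the abstract refers to: randomized Kaczmarz (a row action per step) and randomized coordinate descent on the least-squares objective (a column action per step). Sampling rows and columns proportionally to their squared norms follows the standard randomized variants; the function names, iteration counts, and seeds are illustrative assumptions.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    """Row-action method: project onto one randomly chosen row constraint per step."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    row_norms = (A ** 2).sum(axis=1)
    probs = row_norms / row_norms.sum()       # rows sampled proportional to squared norm
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

def randomized_coordinate_descent(A, b, iters=1000, seed=0):
    """Column-action method: minimize ||Ax - b||^2 one randomly chosen coordinate at a time."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()                # residual b - Ax
    col_norms = (A ** 2).sum(axis=0)
    probs = col_norms / col_norms.sum()
    for _ in range(iters):
        j = rng.choice(n, p=probs)
        delta = A[:, j] @ r / col_norms[j]
        x[j] += delta
        r -= delta * A[:, j]
    return x
```

Each step touches only a single row or column of the matrix, which is what makes the block-encoded quantum versions natural: the quantum routine only needs to prepare the state of the sampled row or column.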
Quantum Spectral Clustering
Spectral clustering is a powerful unsupervised machine learning algorithm for
clustering data with non-convex or nested structures. With roots in graph
theory, it uses the spectral properties of the Laplacian matrix to project the
data into a low-dimensional space where clustering is more efficient. Despite
its success in clustering tasks, spectral clustering suffers in practice from a
fast-growing running time of $O(n^3)$, where $n$ is the number of points in the
dataset. In this work we propose an end-to-end quantum algorithm performing
spectral clustering, extending a number of works in quantum machine learning.
The quantum algorithm is composed of two parts: the first is the efficient
creation of the quantum state corresponding to the projected Laplacian matrix,
and the second consists of applying the existing quantum analogue of the
k-means algorithm. Both steps depend polynomially on the number of clusters,
as well as on precision and data parameters arising from the quantum
procedures, and polylogarithmically on the dimension of the input vectors. Our
numerical simulations show an asymptotic linear growth with $n$ when all terms
are taken into account, significantly better than the classical cubic growth.
This work opens the path to other graph-based quantum machine learning
algorithms, as it provides routines for efficient computation of and quantum
access to the incidence, adjacency, and projected Laplacian matrices of a graph.
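For orientation, a compact classical spectral-clustering pipeline is sketched below: build a similarity graph, form the normalized Laplacian, embed the points using its bottom eigenvectors, and cluster the embedding with k-means. The Gaussian similarity, the normalization choices, and the tiny Lloyd's loop are assumptions made for a self-contained example; the dense n-by-n similarity matrix and the eigendecomposition are the steps responsible for the cubic classical running time the abstract mentions.

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, iters=50, seed=0):
    """Classical spectral clustering: similarity graph -> normalized Laplacian ->
    bottom eigenvectors -> k-means on the embedded points."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Gaussian (RBF) similarity matrix -- the O(n^2) memory / O(n^3) time bottleneck
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L = np.eye(n) - (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]  # normalized Laplacian
    # the k smallest eigenvectors give the low-dimensional embedding
    _, vecs = np.linalg.eigh(L)
    Y = vecs[:, :k]
    Y /= np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12
    # plain Lloyd's k-means on the embedded points
    centroids = Y[rng.choice(n, k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((Y[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        centroids = np.array([Y[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels
```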
Improvements in Quantum SDP-Solving with Applications
Following the first paper on quantum algorithms for SDP-solving by Brandão and Svore [Brandão and Svore, 2017] in 2016, rapid developments have been made on quantum optimization algorithms. In this paper we improve and generalize all prior quantum algorithms for SDP-solving and give a simpler and unified framework. We take a new perspective on quantum SDP-solvers and introduce several new techniques. One of these is the quantum operator input model, which generalizes the different input models used in previous work, and essentially any other reasonable input model. This new model assumes that the input matrices are embedded in a block of a unitary operator. In this model we give an $\widetilde{O}((\sqrt{m}+\sqrt{n}\gamma)\alpha\gamma^4)$ algorithm, where $n$ is the size of the matrices, $m$ is the number of constraints, $\gamma$ is the reciprocal of the scale-invariant relative precision parameter, and $\alpha$ is a normalization factor of the input matrices. In particular, for the standard sparse-matrix access, the above result gives a quantum algorithm where $\alpha = s$. We also improve on recent results of Brandão et al. [Fernando G. S. L. Brandão et al., 2018], who consider the special case w
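The quantum operator input model assumes each input matrix sits in a block of a unitary. The small NumPy check below illustrates the idea with a generic, textbook-style construction of a block-encoding for a Hermitian matrix; it is meant to build intuition and is not the specific oracle construction used in the paper.

```python
import numpy as np

def block_encode(A):
    """Embed a Hermitian matrix A into the top-left block of a unitary U,
    i.e. U = [[A/alpha, *], [*, *]] with alpha = ||A|| (spectral norm)."""
    alpha = np.linalg.norm(A, 2)
    B = A / alpha
    w, V = np.linalg.eigh(B)
    # C = sqrt(I - B^2); clip guards against tiny negative values from rounding
    C = V @ np.diag(np.sqrt(np.clip(1 - w ** 2, 0, None))) @ V.conj().T
    U = np.block([[B, C], [C, -B]])
    return U, alpha

# sanity check: U is unitary and its top-left block equals A / alpha
A = np.array([[2.0, 1.0], [1.0, 3.0]])
U, alpha = block_encode(A)
assert np.allclose(U @ U.conj().T, np.eye(4))
assert np.allclose(U[:2, :2], A / alpha)
```

The normalization alpha here plays the same role as the normalization factor of the input matrices in the stated running time.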
q-means: A quantum algorithm for unsupervised machine learning
Quantum machine learning is one of the most promising applications of a
full-scale quantum computer. Over the past few years, many quantum machine
learning algorithms have been proposed that can potentially offer considerable
speedups over the corresponding classical algorithms. In this paper, we
introduce q-means, a new quantum algorithm for clustering, which is a canonical
problem in unsupervised machine learning. The q-means algorithm has
convergence and precision guarantees similar to k-means, and it outputs with
high probability a good approximation of the cluster centroids, like the
classical algorithm. Given a dataset of -dimensional vectors (seen
as a matrix) stored in QRAM, the running time
of q-means is per iteration, where is the condition number, is
a parameter that appears in quantum linear algebra procedures and . For a natural notion of well-clusterable datasets, the
running time becomes per iteration, which is linear in the
number of features , and polynomial in the rank , the maximum square norm
and the error parameter . Both running times are only
polylogarithmic in the number of datapoints . Our algorithm provides
substantial savings compared to the classical k-means algorithm, which runs in
time per iteration, particularly for the case of large datasets.
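For comparison with the classical baseline, here is a bare-bones Lloyd's k-means iteration in NumPy. Its per-iteration cost is dominated by computing all point-to-centroid distances over the entire dataset, which is the dependence on the number of datapoints that q-means reduces to polylogarithmic. The initialization and parameter names are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Classical Lloyd's iteration: each pass recomputes every point-to-centroid
    distance over the whole dataset, then updates the centroids."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    N, d = X.shape
    centroids = X[rng.choice(N, k, replace=False)]
    for _ in range(iters):
        # distance-estimation step: every point against every centroid
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # centroid-update step
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```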
Quantum-Inspired Sublinear Algorithm for Solving Low-Rank Semidefinite Programming
Semidefinite programming (SDP) is a central topic in mathematical
optimization with extensive studies on its efficient solvers. In this paper, we
present a proof-of-principle sublinear-time algorithm for solving SDPs with
low-rank constraints; specifically, given an SDP with constraint matrices,
each of dimension and rank , our algorithm can compute any entry and
efficient descriptions of the spectral decomposition of the solution matrix.
The algorithm runs in time
given access to a sampling-based low-overhead data structure for the constraint
matrices, where is the precision of the solution. In addition, we
apply our algorithm to a quantum state learning task.
Technically, our approach aligns with 1) SDP solvers based on the matrix
multiplicative weights (MMW) framework by Arora and Kale [TOC '12], and 2) the
sampling-based dequantization framework pioneered by Tang [STOC '19]. In order
to compute the matrix exponential required in the MMW framework, we introduce
two new techniques that may be of independent interest:
Weighted sampling: assuming sampling access to each individual constraint
matrix , we propose a procedure that gives a good approximation of .
Symmetric approximation: we propose a sampling procedure that gives the
spectral decomposition of a low-rank Hermitian matrix . To the best of our
knowledge, this is the first sampling-based algorithm for spectral
decomposition, as previous works only give singular values and vectors.
Comment: 37 pages, 1 figure. To appear in the Proceedings of the 45th
International Symposium on Mathematical Foundations of Computer Science (MFCS 2020).
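Since the abstract builds on the Arora-Kale matrix multiplicative weights (MMW) framework, a dense classical reference version of the MMW update may help fix ideas: maintain a density matrix proportional to the matrix exponential of the negated, scaled sum of the past loss matrices. The exact eigendecomposition below is the expensive step that the paper's sampling-based techniques approximate; the learning rate eta and the function name are assumptions, and the loss matrices are assumed Hermitian.

```python
import numpy as np

def mmw_density(loss_matrices, eta=0.1):
    """Matrix multiplicative weights: at step t, play the density matrix
    rho_t proportional to exp(-eta * sum of loss matrices seen so far)."""
    n = loss_matrices[0].shape[0]
    running_sum = np.zeros((n, n))
    states = []
    for M in loss_matrices:
        # exact matrix exponential via eigendecomposition of a Hermitian matrix
        w, V = np.linalg.eigh(-eta * running_sum)
        rho = (V * np.exp(w)) @ V.conj().T
        rho /= np.trace(rho)
        states.append(rho)
        running_sum = running_sum + M
    return states
```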
Fast quantum subroutines for the simplex method
We propose quantum subroutines for the simplex method that avoid classical
computation of the basis inverse. For an constraint matrix with at
most nonzero elements per column, at most nonzero elements per column
or row of the basis, basis condition number , and optimality tolerance
, we show that pricing can be performed in
time, where the
notation hides polylogarithmic factors. If the ratio is
larger than a certain threshold, the running time of the quantum subroutine can
be reduced to . The steepest edge pivoting rule also admits a quantum
implementation, increasing the running time by a factor .
Classically, pricing requires
time in the worst case using the fastest known algorithm for sparse matrix
multiplication, and with steepest
edge. Furthermore, we show that the ratio test can be performed in
time, where
determine a feasibility tolerance; classically, this requires time in
the worst case. For well-conditioned sparse problems the quantum subroutines
scale better in and , and may therefore have a worst-case asymptotic
advantage. An important feature of our paper is that this asymptotic speedup
does not depend on the data being available in some "quantum form": the input
of our quantum subroutines is the natural classical description of the problem,
and the output is the indices of the variables that should leave or enter the basis.
Comment: Added discussion on condition number and infeasibilities
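To make the two subroutines concrete, here is a hedged classical sketch of pricing (with the Dantzig rule) and of the ratio test for the revised simplex method; both go through the basis via linear solves, which is exactly the classical use of the basis inverse that the quantum subroutines avoid. Tolerances, the pivoting rule, and function names are illustrative choices, not the paper's.

```python
import numpy as np

def pricing(A, c, basis):
    """Classical pricing: compute reduced costs and return an entering column
    with negative reduced cost (Dantzig rule), or None if none exists."""
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])          # simplex multipliers (basis solve)
    nonbasic = [j for j in range(A.shape[1]) if j not in basis]
    reduced = {j: c[j] - y @ A[:, j] for j in nonbasic}
    j_star = min(reduced, key=reduced.get)
    return j_star if reduced[j_star] < -1e-9 else None

def ratio_test(A, b, basis, entering):
    """Classical ratio test: pick the basic variable that limits the step length."""
    B = A[:, basis]
    x_B = np.linalg.solve(B, b)                 # current basic solution
    d = np.linalg.solve(B, A[:, entering])      # change in basic variables per unit step
    ratios = [(x_B[i] / d[i], i) for i in range(len(basis)) if d[i] > 1e-9]
    if not ratios:
        return None                             # direction is unbounded
    return min(ratios)[1]                       # position (within basis) of the leaving variable
```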