479 research outputs found

    Quantum-inspired algorithm for general minimum conical hull problems


    Sampling-based sublinear low-rank matrix arithmetic framework for dequantizing quantum machine learning

    We present an algorithmic framework for quantum-inspired classical algorithms on close-to-low-rank matrices, generalizing the series of results started by Tang’s breakthrough quantum-inspired algorithm for recommendation systems [STOC’19]. Motivated by quantum linear algebra algorithms and the quantum singular value transformation (SVT) framework of Gilyén et al. [STOC’19], we develop classical algorithms for SVT that run in time independent of input dimension, under suitable quantum-inspired sampling assumptions. Our results give compelling evidence that in the corresponding QRAM data structure input model, quantum SVT does not yield exponential quantum speedups. Since the quantum SVT framework generalizes essentially all known techniques for quantum linear algebra, our results, combined with sampling lemmas from previous work, suffice to generalize all recent results on dequantizing quantum machine learning algorithms. In particular, our classical SVT framework recovers and often improves the dequantization results on recommendation systems, principal component analysis, supervised clustering, support vector machines, low-rank regression, and semidefinite program solving. We also give additional dequantization results on low-rank Hamiltonian simulation and discriminant analysis. Our improvements come from identifying the key feature of the quantum-inspired input model that is at the core of all prior quantum-inspired results: ℓ²-norm sampling can approximate matrix products in time independent of their dimension. We reduce all our main results to this fact, making our exposition concise, self-contained, and intuitive.
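    The ℓ²-norm sampling primitive this abstract identifies — approximating a matrix product in time independent of the shared dimension — can be sketched in a few lines. The following is a toy illustration of the standard length-squared estimator under the stated assumptions, not the paper's full framework; the function name `approx_matmul` is our own:

```python
import numpy as np

def approx_matmul(A, B, s, rng):
    """Approximate A @ B with s length-squared (l2-norm) samples.

    Index i is drawn with probability p_i = ||A[:, i]||^2 / ||A||_F^2,
    and each sample contributes the rescaled outer product
    A[:, i] B[i, :] / (s * p_i), an unbiased estimator of A @ B whose
    cost does not depend on the shared (inner) dimension of A and B.
    """
    p = np.sum(A**2, axis=0)
    p = p / p.sum()
    idx = rng.choice(len(p), size=s, p=p)
    est = np.zeros((A.shape[0], B.shape[1]))
    for i in idx:
        est += np.outer(A[:, i], B[i, :]) / (s * p[i])
    return est
```

    The Frobenius-norm error of this estimator scales like ‖A‖_F ‖B‖_F / √s, which is the sampling lemma the abstract says all the main results reduce to.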

    On the foundations and extremal structure of the holographic entropy cone

    The holographic entropy cone (HEC) is a polyhedral cone first introduced in the study of a class of quantum entropy inequalities. It admits a graph-theoretic description in terms of minimum cuts in weighted graphs, a characterization which naturally generalizes the cut function for complete graphs. Unfortunately, no complete facet or extreme-ray representation of the HEC is known. In this work, starting from a purely graph-theoretic perspective, we develop a theoretical and computational foundation for the HEC. The paper is self-contained, giving new proofs of known results and proving several new results as well. These are also used to develop two systematic approaches for finding the facets and extreme rays of the HEC, which we illustrate by recomputing the HEC on 5 terminals and improving its graph description. Some interesting open problems are stated throughout.
    Comment: 32 pages, 2 figures, 3 tables. Revised to expand the description of the connection to quantum information theory, including additional references.
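    The min-cut graph description mentioned above can be made concrete on a toy scale: the "entropy" of a subset of terminals is the smallest total weight of edges separating those terminals from the rest. The brute-force routine below (our own illustration, not the paper's algorithm) enumerates cuts in a tiny weighted graph:

```python
from itertools import combinations

def min_cut_entropy(n_vertices, edges, terminals, subset):
    """Brute-force min-cut 'entropy' of a terminal subset in a weighted graph:
    the smallest total weight of edges separating the terminals indexed by
    `subset` from the remaining terminals.  `edges` is a list of
    (u, v, weight) triples.  Toy version of the HEC graph model; only
    feasible for very small graphs."""
    inside = {terminals[i] for i in subset}
    bulk = [v for v in range(n_vertices) if v not in terminals]
    best = float("inf")
    # Try every assignment of bulk (non-terminal) vertices to the cut's inside.
    for r in range(len(bulk) + 1):
        for extra in combinations(bulk, r):
            side = inside | set(extra)
            weight = sum(w for u, v, w in edges if (u in side) != (v in side))
            best = min(best, weight)
    return best
```

    On a star graph with one bulk vertex, for example, the resulting entropies obey subadditivity S(A) + S(B) ≥ S(AB), one of the inequalities that carve out the cone.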

    Quantum-Inspired Algorithms for Solving Low-Rank Linear Equation Systems with Logarithmic Dependence on the Dimension

    We present two efficient classical analogues of the quantum matrix inversion algorithm [16] for low-rank matrices. Inspired by recent work of Tang [27], assuming length-square sampling access to the input data, we implement the pseudoinverse of a low-rank matrix, allowing us to sample from the solution to the problem Ax = b using fast sampling techniques. We construct implicit descriptions of the pseudoinverse by finding an approximate singular value decomposition of A via subsampling, then inverting the singular values. In principle, our approaches can also be used to apply any desired “smooth” function to the singular values. Since many quantum algorithms can be expressed as a singular value transformation problem [15], our results indicate that more low-rank quantum algorithms can be effectively “dequantised” into classical length-square sampling algorithms.
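    The "SVD, then transform the singular values" structure described here has a simple dense reference version. The sketch below (our own illustration; the paper builds the same object implicitly from length-square samples rather than a full SVD) applies an arbitrary function f to the top-k singular values, with f(s) = 1/s recovering the rank-k pseudoinverse:

```python
import numpy as np

def apply_svt(A, b, f, k):
    """Return V_k f(Sigma_k) U_k^T b, i.e. apply f to the top-k singular
    values of A and act on b.  Dense reference implementation; assumes
    k <= rank(A) when f is singular at 0 (as with f(s) = 1/s)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ (f(s[:k]) * (U[:, :k].T @ b))

def pinv_apply(A, b, k):
    """Rank-k pseudoinverse applied to b, as a singular value transform."""
    return apply_svt(A, b, lambda s: 1.0 / s, k)
```

    Swapping in a different smooth f (e.g. a regularized inverse s/(s² + λ)) changes the transform without changing the surrounding machinery, which is what makes the framework flexible.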

    Quantum Algorithm for Finding the Negative Curvature Direction

    Non-convex optimization is an essential problem in the field of machine learning. Optimization methods for non-convex problems can be roughly divided into first-order methods and second-order methods, depending on the order of the derivative of the objective function they use. Generally, to find local minima, second-order methods are applied to find an effective direction to escape a saddle point. Specifically, finding a negative curvature direction is the standard subroutine for analyzing the character of a saddle point. However, computing the negative curvature is expensive, which prevents the practical use of second-order algorithms. In this thesis, we present an efficient quantum algorithm for finding the negative curvature direction to escape a saddle point, a critical subroutine for many second-order non-convex optimization algorithms. We prove that our algorithm produces the target state corresponding to the negative curvature direction with query complexity Õ(polylog(d) ε^(-1)), where d is the dimension of the optimization function. The quantum negative curvature finding algorithm is exponentially faster than any known classical method, which takes time at least O(d ε^(-1/2)). Moreover, we propose an efficient quantum algorithm for the classical read-out of the target state. Our classical read-out algorithm runs exponentially faster in its dependence on d than existing counterparts.
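    For reference, the classical object the quantum algorithm targets is easy to state: a negative curvature direction at a saddle point is a unit vector v with vᵀHv < 0, i.e. an eigenvector of the Hessian H with a negative eigenvalue. A dense classical baseline (our own sketch, with the O(d³) eigendecomposition cost that motivates the quantum speedup) looks like:

```python
import numpy as np

def negative_curvature_direction(H, tol=1e-10):
    """Return a unit vector v with v.T @ H @ v < 0 (a negative curvature
    direction of the symmetric Hessian H), or None if H is positive
    semidefinite up to tol.  Dense classical baseline: the O(d^3)
    eigendecomposition is the cost the quantum algorithm aims to beat."""
    w, V = np.linalg.eigh(H)  # eigenvalues in ascending order
    if w[0] < -tol:
        return V[:, 0]        # eigenvector of the most negative eigenvalue
    return None
```

    At a strict saddle, following this direction decreases the objective to second order, which is why it serves as the escape subroutine in second-order methods.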

    Quantum differentially private sparse regression learning

    Differentially private (DP) learning, which aims to accurately extract patterns from a given dataset without exposing individual information, is an important subfield of machine learning and has been extensively explored. However, quantum algorithms that preserve privacy while outperforming their classical counterparts are still lacking. The difficulty arises from the distinct priorities of DP and quantum machine learning: the former concerns a low utility bound while the latter pursues a low runtime cost. These differing goals require that a proposed quantum DP algorithm achieve a runtime speedup over the best known classical results while preserving the optimal utility bound. The Lasso estimator is broadly employed to tackle high-dimensional sparse linear regression tasks. The main contribution of this paper is a quantum DP Lasso estimator that achieves a runtime speedup with privacy preservation: the runtime complexity is Õ(N^(3/2) √d) with a nearly optimal utility bound Õ(1/N^(2/3)), where N is the sample size and d is the data dimension with N ≪ d. Since the optimal classical (private) Lasso takes Ω(N + d) runtime, our proposal achieves quantum speedups when N < O(d^(1/3)). There are two key components in our algorithm. First, we extend the Frank-Wolfe algorithm from the classical Lasso to the quantum scenario, where the proposed quantum non-private Lasso achieves a quadratic runtime speedup over the optimal classical Lasso. Second, we develop an adaptive privacy mechanism to ensure the privacy guarantee of the non-private Lasso. Our proposal opens an avenue to designing various learning tasks with both proven runtime speedups and privacy preservation.
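    The classical Frank-Wolfe skeleton that the paper quantizes (and then wraps in a privacy mechanism) is compact: for the constrained Lasso min ‖Ax − b‖² over the ℓ₁-ball of radius t, each iteration moves toward the vertex ±t·e_j that minimizes the linearized loss. The sketch below is the standard non-private classical version, not the paper's quantum or DP algorithm:

```python
import numpy as np

def frank_wolfe_lasso(A, b, t, iters=500):
    """Frank-Wolfe for the constrained Lasso: min ||Ax - b||^2 s.t. ||x||_1 <= t.
    Each step linearizes the loss at x, picks the l1-ball vertex +/- t e_j
    with the most negative linearized value (largest-magnitude gradient
    coordinate), and takes the standard diminishing step gamma = 2/(k+2).
    Iterates stay feasible since they are convex combinations of vertices."""
    x = np.zeros(A.shape[1])
    for k in range(iters):
        grad = 2.0 * A.T @ (A @ x - b)
        j = int(np.argmax(np.abs(grad)))
        vertex = np.zeros_like(x)
        vertex[j] = -t * np.sign(grad[j])
        gamma = 2.0 / (k + 2.0)
        x = (1.0 - gamma) * x + gamma * vertex
    return x
```

    Because each update touches a single coordinate of the gradient's argmax, the per-iteration work is dominated by the matrix-vector products, which is the structure the quantum subroutine accelerates.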