105 research outputs found

    Threshold-Based Quantum Optimization

    We propose and study Th-QAOA (pronounced Threshold QAOA), a variation of the Quantum Alternating Operator Ansatz (QAOA) that replaces the standard phase separator operator, which encodes the objective function, with a threshold function that returns a value of 1 for solutions with an objective value above the threshold and 0 otherwise. We vary the threshold value to arrive at a quantum optimization algorithm. We focus on a combination with the Grover Mixer operator; the resulting GM-Th-QAOA can be viewed as a generalization of Grover's quantum search algorithm and its minimum/maximum finding cousin to approximate optimization. Our main findings include: (i) we show semi-formally that the optimum parameter values of GM-Th-QAOA (angles and threshold value) can be found with $O(\log(p) \times \log M)$ iterations of the classical outer loop, where $p$ is the number of QAOA rounds and $M$ is an upper bound on the solution value (often the number of vertices or edges in an input graph), thus eliminating the notorious outer-loop parameter finding issue of other QAOA algorithms; (ii) GM-Th-QAOA can be simulated classically with little effort up to 100 qubits through a set of tricks that cut down memory requirements; (iii) somewhat surprisingly, GM-Th-QAOA outperforms its non-thresholded counterparts in terms of approximation ratios achieved. This third result holds across a range of optimization problems (MaxCut, Max k-VertexCover, Max k-DensestSubgraph, MaxBisection) and various experimental design parameters, such as different input edge densities and constraint sizes.
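    As a rough illustration of the mechanics described above, the sketch below simulates one GM-Th-QAOA round on a toy MaxCut instance with plain NumPy: the threshold phase separator applies a phase only to basis states whose cut value exceeds the threshold, and the Grover mixer is the rank-one rotation $e^{-i\beta|s\rangle\langle s|}$ about the uniform superposition $|s\rangle$. The graph, angles, threshold, and helper names (cut_value, th_qaoa_round) are illustrative assumptions, not the paper's implementation.

```python
# Minimal statevector sketch of one GM-Th-QAOA round on a toy MaxCut instance.
# The graph, parameters, and helper names are illustrative only.
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy 4-vertex graph
n = 4
dim = 2 ** n

def cut_value(bits):
    """Objective f(x): number of edges cut by the bipartition encoded in bits."""
    return sum(1 for (u, v) in edges if bits[u] != bits[v])

# Objective value for every computational basis state.
f = np.array([cut_value([(x >> i) & 1 for i in range(n)]) for x in range(dim)])

def th_qaoa_round(state, gamma, beta, threshold):
    """Apply the threshold phase separator, then the Grover mixer exp(-i*beta*|s><s|)."""
    # Threshold phase separator: phase e^{-i*gamma} only where f(x) > threshold.
    state = np.exp(-1j * gamma * (f > threshold)) * state
    # Grover mixer acts nontrivially only on the |s> component (|s><s| is a projector).
    s = np.full(dim, 1 / np.sqrt(dim))
    overlap = s @ state
    return state + (np.exp(-1j * beta) - 1) * overlap * s

# One round starting from |s>, with an illustrative threshold of 3 cut edges.
psi = np.full(dim, 1 / np.sqrt(dim))
psi = th_qaoa_round(psi, gamma=np.pi, beta=np.pi, threshold=3)
probs = np.abs(psi) ** 2
print("probability mass on cuts of value > 3:", round(probs[f > 3].sum(), 3))
```

    For gamma = beta = pi, this round is (up to a global phase) one Grover iteration on the set of above-threshold solutions, which is the sense in which GM-Th-QAOA generalizes Grover search; a classical outer loop would then search over the threshold and angles, which is where the $O(\log(p) \times \log M)$ bound applies.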

    A Birthday Repetition Theorem and Complexity of Approximating Dense CSPs

    A $(k \times l)$-birthday repetition $\mathcal{G}^{k \times l}$ of a two-prover game $\mathcal{G}$ is a game in which the two provers are sent random sets of questions from $\mathcal{G}$ of sizes $k$ and $l$ respectively. These two sets are sampled independently and uniformly among all sets of questions of those particular sizes. We prove the following birthday repetition theorem: when $\mathcal{G}$ satisfies some mild conditions, $val(\mathcal{G}^{k \times l})$ decreases exponentially in $\Omega(kl/n)$, where $n$ is the total number of questions. Our result positively resolves an open question posed by Aaronson, Impagliazzo and Moshkovitz (CCC 2014). As an application of our birthday repetition theorem, we obtain new fine-grained hardness of approximation results for dense CSPs. Specifically, we establish a tight trade-off between running time and approximation ratio for dense CSPs by showing conditional lower bounds, integrality gaps and approximation algorithms. In particular, for any sufficiently large $i$ and for every $k \geq 2$, we show the following results:
    - We exhibit an $O(q^{1/i})$-approximation algorithm for dense Max $k$-CSPs with alphabet size $q$ via $O_k(i)$ levels of the Sherali-Adams relaxation.
    - Through our birthday repetition theorem, we obtain an integrality gap of $q^{1/i}$ for the $\tilde\Omega_k(i)$-level Lasserre relaxation for fully-dense Max $k$-CSP.
    - Assuming that there is a constant $\epsilon > 0$ such that Max 3SAT cannot be approximated to within $(1-\epsilon)$ of the optimum in sub-exponential time, our birthday repetition theorem implies that any algorithm that approximates fully-dense Max $k$-CSP to within a $q^{1/i}$ factor takes $(nq)^{\tilde\Omega_k(i)}$ time, almost tightly matching the algorithmic result based on the Sherali-Adams relaxation.
    Comment: 45 pages.
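    To make the "birthday" intuition behind the $\Omega(kl/n)$ bound concrete, the small simulation below (not from the paper; the toy game, densities, and trial counts are made up) estimates how often question sets of sizes $k$ and $l$ drawn for the two provers jointly cover at least one question pair of a game with $n$ questions. The hit rate grows with $kl/n$, which is the collision effect behind the theorem's name.

```python
# Hedged illustration of the birthday-paradox intuition behind (k x l)-birthday
# repetition; the toy game and all parameters here are invented for illustration.
import itertools
import random

rng = random.Random(0)
n = 30                                    # total number of questions
# Toy question-pair set of the original game G: a sparse set of ordered pairs.
game_pairs = {(x, y) for x in range(n) for y in range(n) if rng.random() < 0.1}

def hit_rate(k, l, trials=2000):
    """Fraction of trials in which the sampled question sets cover some pair of G."""
    hits = 0
    for _ in range(trials):
        A = rng.sample(range(n), k)       # prover 1's question set
        B = rng.sample(range(n), l)       # prover 2's question set
        if any((x, y) in game_pairs for x, y in itertools.product(A, B)):
            hits += 1
    return hits / trials

for k, l in [(2, 2), (4, 4), (6, 6)]:
    print(f"k={k}, l={l}, kl/n={k * l / n:.2f}, hit rate={hit_rate(k, l):.2f}")
```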

    Quantum-inspired classical algorithm for graph problems by Gaussian boson sampling

    We present a quantum-inspired classical algorithm that can be used for graph-theoretical problems, such as finding the densest $k$-subgraph and finding the maximum weight clique, which are proposed as applications of a Gaussian boson sampler. The main observation about Gaussian boson samplers is that the adjacency matrix of a graph encoded in a Gaussian boson sampler is nonnegative, which does not necessitate quantum interference. We first show how to program a given graph problem into our efficient classical algorithm. We then numerically compare the performance of ideal and lossy Gaussian boson samplers, our quantum-inspired classical sampler, and the uniform sampler for finding the densest $k$-subgraph and the maximum weight clique, and show that the advantage from Gaussian boson samplers is not significant in general. We finally discuss the potential advantage of a Gaussian boson sampler over the proposed sampler. Comment: 11 pages, 5 figures.
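    For reference, the uniform sampler used as a baseline in the comparison can be written in a few lines: draw $k$-vertex subsets uniformly at random and keep the densest induced subgraph seen. The sketch below illustrates only that baseline (the graph, $k$, and sample budget are arbitrary); it is not the quantum-inspired sampler itself.

```python
# Minimal sketch of the uniform-sampler baseline for densest k-subgraph:
# sample k-vertex subsets uniformly and keep the densest one found.
# Graph, k, and sample count are illustrative, not taken from the paper.
import random

random.seed(1)
n, k = 20, 5
# Random toy graph stored as a set of undirected edges (i < j).
edges = {(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.3}

def induced_edge_count(vertices):
    """Number of edges of the subgraph induced by the given vertex set."""
    vs = set(vertices)
    return sum(1 for (i, j) in edges if i in vs and j in vs)

best_set, best_density = None, -1.0
for _ in range(5000):
    sample = random.sample(range(n), k)
    density = induced_edge_count(sample) / (k * (k - 1) / 2)  # fraction of possible edges
    if density > best_density:
        best_set, best_density = sorted(sample), density

print("best k-subset found:", best_set, "with density", round(best_density, 3))
```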

    The power of sum-of-squares for detecting hidden structures

    We study planted problems---finding hidden structures in random noisy inputs---through the lens of the sum-of-squares semidefinite programming hierarchy (SoS). This family of powerful semidefinite programs has recently yielded many new algorithms for planted problems, often achieving the best known polynomial-time guarantees in terms of accuracy of recovered solutions and robustness to noise. One theme in recent work is the design of spectral algorithms which match the guarantees of SoS algorithms for planted problems. Classical spectral algorithms are often unable to accomplish this: the twist in these new spectral algorithms is the use of spectral structure of matrices whose entries are low-degree polynomials of the input variables. We prove that for a wide class of planted problems, including refuting random constraint satisfaction problems, tensor and sparse PCA, densest-k-subgraph, community detection in stochastic block models, planted clique, and others, eigenvalues of degree-d matrix polynomials are as powerful as SoS semidefinite programs of roughly degree d. For such problems it is therefore always possible to match the guarantees of SoS without solving a large semidefinite program. Using related ideas on SoS algorithms and low-degree matrix polynomials (and inspired by recent work on SoS and the planted clique problem by Barak et al.), we prove new nearly-tight SoS lower bounds for the tensor and sparse principal component analysis problems. Our lower bounds for sparse principal component analysis are the first to suggest that going beyond existing algorithms for this problem may require sub-exponential time.
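    The simplest instance of the "eigenvalues of low-degree matrix polynomials" theme is the classical spectral approach to planted clique, where the matrix polynomial is just the signed adjacency matrix itself, a degree-1 polynomial of the input. The sketch below (with illustrative sizes) shows the resulting eigenvalue separation; it is only meant to ground the terminology, since the paper's results concern much more general degree-d matrix polynomials.

```python
# Hedged sketch of a degree-1 "matrix polynomial" detector for planted clique:
# the top eigenvalue of the +/-1 adjacency matrix separates G(n, 1/2) from
# G(n, 1/2) with a planted clique of size on the order of sqrt(n).
# Sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, clique_size = 400, 60

def signed_adjacency(planted):
    """Symmetric +/-1 matrix of a random graph, optionally with a planted clique."""
    A = np.where(rng.random((n, n)) < 0.5, 1.0, -1.0)
    A = np.triu(A, 1)
    A = A + A.T                                   # symmetric, zero diagonal
    if planted:
        idx = rng.choice(n, clique_size, replace=False)
        A[np.ix_(idx, idx)] = 1.0                 # clique block is all +1
        A[idx, idx] = 0.0
    return A

for planted in (False, True):
    top_eig = np.linalg.eigvalsh(signed_adjacency(planted))[-1]
    print(f"planted clique: {planted},  top eigenvalue: {top_eig:.1f}")
```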

    A Comparison of Quantum Algorithms for the Maximum Clique Problem

    Two of the most promising computational models for quantum computing are the qubit-based model and the continuous-variable model, which result in two different computational approaches, namely the qubit gate model and boson sampling. The qubit gate model is a universal form of quantum computation that relies heavily on the principles of superposition and entanglement to solve problems using qubits based on technologies ranging from magnetic fields created by superconducting materials to the spins of valence electrons in atoms. Boson sampling is a non-universal form of quantum computation that uses bosons as continuous variables for its computation. Both models show promising prospects for useful quantum advantages over classical computers, but they are fundamentally different, not only in their technologies but also in their applications. Each model excels in different sets of applications. Directly comparing how the qubit gate model and boson sampling solve the same problem helps one understand not only the individual technologies but also how to decide which model is better suited to a given problem and how to start developing a solution. This thesis uses the maximum clique problem to examine the application development process in the qubit gate model and in boson sampling, and compares these approaches with other known algorithms for the maximum clique problem. The maximum clique problem is an NP-hard problem concerned with finding the largest fully connected subgraph of a given graph. The qubit gate model algorithm for the maximum clique problem presented here is a novel algorithm.
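    For context, a classical brute-force baseline for the maximum clique problem fits in a few lines and is a useful correctness reference when comparing algorithms, though it scales exponentially with graph size. The toy graph and exhaustive search below are illustrative; they are not the thesis's qubit gate model or boson sampling algorithms.

```python
# Brute-force reference solver for the maximum clique problem on a toy graph.
# Practical only for tiny graphs; shown here purely as a classical baseline.
from itertools import combinations

# Toy undirected graph on 5 vertices, stored as a set of edges.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (2, 4), (1, 4)}
n = 5
adj = {frozenset(e) for e in edges}

def is_clique(vertices):
    """True if every pair of the given vertices is connected by an edge."""
    return all(frozenset((u, v)) in adj for u, v in combinations(vertices, 2))

best = ()
for size in range(n, 0, -1):                  # try the largest subsets first
    found = [c for c in combinations(range(n), size) if is_clique(c)]
    if found:
        best = found[0]
        break

print("a maximum clique:", best, "of size", len(best))
```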

    Rounding Sum-of-Squares Relaxations

    We present a general approach to rounding semidefinite programming relaxations obtained by the Sum-of-Squares method (Lasserre hierarchy). Our approach is based on using the connection between these relaxations and the Sum-of-Squares proof system to transform a *combining algorithm* -- an algorithm that maps a distribution over solutions into a (possibly weaker) solution -- into a *rounding algorithm* that maps a solution of the relaxation to a solution of the original problem. Using this approach, we obtain algorithms that yield improved results for natural variants of three well-known problems:
    1) We give a quasipolynomial-time algorithm that approximates the maximum of a low degree multivariate polynomial with non-negative coefficients over the Euclidean unit sphere. Beyond being of interest in its own right, this is related to an open question in quantum information theory, and our techniques have already led to improved results in this area (Brandão and Harrow, STOC '13).
    2) We give a polynomial-time algorithm that, given a $d$-dimensional subspace of $\mathbb{R}^n$ that (almost) contains the characteristic function of a set of size $n/k$, finds a vector $v$ in the subspace satisfying $\|v\|_4^4 > c(k/d^{1/3}) \|v\|_2^4$, where $\|v\|_p = (\mathbb{E}_i v_i^p)^{1/p}$. Aside from being a natural relaxation, this is also motivated by a connection to the Small Set Expansion problem shown by Barak et al. (STOC 2012), and our results yield a certain improvement for that problem.
    3) We use this notion of $L_4$ vs. $L_2$ sparsity to obtain a polynomial-time algorithm with substantially improved guarantees for recovering a planted $\mu$-sparse vector $v$ in a random $d$-dimensional subspace of $\mathbb{R}^n$. If $v$ has $\mu n$ nonzero coordinates, we can recover it with high probability whenever $\mu < O(\min(1, n/d^2))$, improving, for $d < n^{2/3}$, on prior methods which intrinsically required $\mu < O(1/\sqrt{d})$.
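    The $L_4$ vs. $L_2$ sparsity notion in items 2) and 3) can be checked numerically: with the expectation norms defined above, any $\mu$-sparse vector satisfies $\|v\|_4^4 \geq (1/\mu)\|v\|_2^4$, while a typical dense Gaussian direction has $\|v\|_4^4 \approx 3\|v\|_2^4$. The sketch below compares these ratios for a planted sparse vector and for a random direction in a random subspace; the dimensions and the ratio check are illustrative and are not the paper's recovery algorithm.

```python
# Numerical illustration of the L4-vs-L2 sparsity gap (not the paper's algorithm).
# With |v|_p = (E_i v_i^p)^(1/p), a mu-sparse vector has |v|_4^4 >= (1/mu) |v|_2^4,
# while a dense Gaussian direction has |v|_4^4 around 3 |v|_2^4.
import numpy as np

rng = np.random.default_rng(0)
n, d, mu = 4000, 40, 0.01                      # illustrative dimensions and sparsity

def l4_over_l2(v):
    """Scale-invariant ratio |v|_4^4 / |v|_2^4 using expectation (averaged) norms."""
    return np.mean(v ** 4) / np.mean(v ** 2) ** 2

# Planted mu-sparse vector with mu*n nonzero Gaussian coordinates.
support = rng.choice(n, int(mu * n), replace=False)
v_sparse = np.zeros(n)
v_sparse[support] = rng.standard_normal(len(support))

# A typical dense direction from a random d-dimensional subspace of R^n.
basis = rng.standard_normal((n, d))
v_dense = basis @ rng.standard_normal(d)

print("sparse vector   |v|_4^4 / |v|_2^4 :", round(l4_over_l2(v_sparse), 1))
print("dense direction |v|_4^4 / |v|_2^4 :", round(l4_over_l2(v_dense), 1))
```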

    Efficient Distribution of Quantum Circuits
