
    Parameter Setting in Quantum Approximate Optimization of Weighted Problems

    The Quantum Approximate Optimization Algorithm (QAOA) is a leading candidate algorithm for solving combinatorial optimization problems on quantum computers. However, in many cases QAOA requires computationally intensive parameter optimization. The challenge of parameter optimization is particularly acute in the case of weighted problems, for which the eigenvalues of the phase operator are non-integer and the QAOA energy landscape is not periodic. In this work, we develop parameter setting heuristics for QAOA applied to a general class of weighted problems. First, we derive optimal parameters for QAOA with depth p=1 applied to the weighted MaxCut problem under different assumptions on the weights. In particular, we rigorously prove the conventional wisdom that in the average case the first local optimum near zero gives globally optimal QAOA parameters. Second, for p ≥ 1 we prove that the QAOA energy landscape for weighted MaxCut approaches that of the unweighted case under a simple rescaling of parameters. Therefore, parameters previously obtained for unweighted MaxCut can be reused for weighted problems. Finally, we prove that for p=1 the QAOA objective sharply concentrates around its expectation, which means that our parameter setting rules hold with high probability for a random weighted instance. We numerically validate this approach on general weighted graphs and show that, on average, the QAOA energy with the proposed fixed parameters is only 1.1 percentage points away from that with optimized parameters. Third, we propose a general heuristic rescaling scheme inspired by the analytical results for weighted MaxCut and demonstrate its effectiveness using QAOA with the XY Hamming-weight-preserving mixer applied to the portfolio optimization problem. Our heuristic improves the convergence of local optimizers, reducing the number of iterations by 7.4x on average.
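    The parameter-transfer idea above can be illustrated in a few lines. A minimal sketch, assuming a root-mean-square normalization of the edge weights — one plausible choice, not necessarily the exact rescaling rule derived in the paper; the function name and example weights are hypothetical:

```python
import numpy as np

def rescale_gamma(gammas, weights):
    """Transfer unweighted-MaxCut gamma parameters to a weighted instance.

    Hypothetical rescaling: divide by the root-mean-square edge weight,
    so the weighted energy landscape approximately matches the unweighted
    one (one of several plausible normalizations)."""
    w = np.asarray(weights, dtype=float)
    scale = np.sqrt(np.mean(w ** 2))
    return np.asarray(gammas, dtype=float) / scale

# Unweighted p=1 parameters transferred to a weighted 4-edge instance:
gammas = rescale_gamma([0.616], [0.5, 2.0, 1.0, 1.5])
```

    With unit weights the rescaling is the identity, so unweighted parameters pass through unchanged.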

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Polynomial Identity Testing and the Ideal Proof System: PIT is in NP if and only if IPS can be p-simulated by a Cook-Reckhow proof system

    The Ideal Proof System (IPS) of Grochow & Pitassi (FOCS 2014, J. ACM, 2018) is an algebraic proof system that uses algebraic circuits to refute the solvability of unsatisfiable systems of polynomial equations. One potential drawback of IPS is that verifying an IPS proof is only known to be doable using Polynomial Identity Testing (PIT), which is solvable by a randomized algorithm, but whose derandomization, even into NSUBEXP, is equivalent to strong lower bounds. However, the circuits that are used in IPS proofs are not arbitrary, and it is conceivable that one could get around general PIT by leveraging some structure in these circuits. This proposal may be even more tempting when IPS is used as a proof system for Boolean Unsatisfiability, where the equations themselves have additional structure. Our main result is that, on the contrary, one cannot get around PIT as above: we show that IPS, even as a proof system for Boolean Unsatisfiability, can be p-simulated by a deterministically verifiable (Cook-Reckhow) proof system if and only if PIT is in NP. We use our main result to propose a potentially new approach to derandomizing PIT into NP.
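    The standard randomized algorithm for PIT referred to above is the Schwartz-Zippel test: evaluate the circuit at a uniformly random point of a large field and check for zero. A minimal sketch, modeling the circuit as a black-box evaluator (an assumption for illustration only):

```python
import random

def pit_randomized(poly, nvars, prime=2**61 - 1, trials=20):
    """Schwartz-Zippel identity test. A nonzero polynomial of degree d
    vanishes at a uniform random point of F_p^n with probability <= d/p,
    so if every random evaluation is zero we declare the polynomial
    identically zero, with tiny error probability.

    `poly(point, prime)` evaluates the circuit mod `prime`."""
    for _ in range(trials):
        point = [random.randrange(prime) for _ in range(nvars)]
        if poly(point, prime) != 0:
            return False          # witness of nonzeroness found
    return True                   # declared identically zero

# (x+y)^2 - (x^2 + 2xy + y^2) is identically zero; xy - 1 is not.
zero = lambda v, p: ((v[0] + v[1])**2 - (v[0]**2 + 2*v[0]*v[1] + v[1]**2)) % p
nonzero = lambda v, p: (v[0]*v[1] - 1) % p
```

    Derandomizing exactly this coin-flipping step, even into NP as the paper proposes, is what is equivalent to deterministic verification of IPS proofs.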

    From a causal representation of multiloop scattering amplitudes to quantum computing in the Loop-Tree Duality

    The perturbative approach to Quantum Field Theories has successfully provided incredibly accurate theoretical predictions in high-energy physics. Despite the development of several techniques to boost the efficiency of these calculations, some ingredients remain a hard bottleneck. This is the case of multiloop scattering amplitudes, which describe the quantum fluctuations in high-energy scattering processes. The Loop-Tree Duality (LTD) is a novel method aimed at overcoming these difficulties by opening the loop amplitudes into connected tree-level diagrams. In this thesis we present three core achievements: the reformulation of the Loop-Tree Duality to all orders in the perturbative expansion, a general methodology to obtain LTD expressions that are manifestly causal, and the first flagship application of a quantum algorithm to Feynman loop integrals. The proposed strategy for implementing the LTD framework consists of the iterated application of Cauchy's residue theorem to a series of multiloop topologies with arbitrary internal configurations. We derive an LTD representation exhibiting a factorized cascade form in terms of simpler subtopologies characterized by a well-known causal behaviour. Moreover, through a clever approach we extract analytic dual representations that are explicitly free of noncausal singularities.
    These properties enable opening any scattering amplitude of up to five loops in a factorized form, with better numerical stability than other representations due to the absence of noncausal singularities. Last but not least, we establish the connection between Feynman loop integrals and quantum computing by encoding the two on-shell states of a Feynman propagator in the two states of a qubit. We propose a modified Grover's quantum algorithm to unfold the causal singular configurations of multiloop Feynman diagrams, which are used to bootstrap the causal LTD representation of multiloop topologies.
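    The Grover-based search step can be mimicked classically with a statevector simulation. The sketch below is generic textbook Grover, not the modified oracle of the thesis; the marked basis states stand in for the causal singular configurations and are chosen arbitrarily for illustration:

```python
import numpy as np

def grover_search(n_qubits, marked, iterations=None):
    """Plain statevector simulation of Grover's search over 2**n states.
    `marked` is the set of marked basis states (here a stand-in for the
    causal singular configurations of a multiloop diagram)."""
    N = 2 ** n_qubits
    if iterations is None:
        # optimal iteration count ~ (pi/4) * sqrt(N / |marked|)
        iterations = int(round(np.pi / 4 * np.sqrt(N / len(marked))))
    state = np.full(N, 1.0 / np.sqrt(N))       # uniform superposition
    for _ in range(iterations):
        state[list(marked)] *= -1.0            # oracle: flip marked amplitudes
        state = 2 * state.mean() - state       # diffusion: invert about the mean
    return state

# 4 qubits, two marked configurations: amplitude concentrates on them.
amps = grover_search(4, {3, 12})
```

    After the standard number of iterations, most of the probability mass sits on the marked states, which can then be read out to bootstrap the causal representation.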

    Structured Semidefinite Programming for Recovering Structured Preconditioners

    We develop a general framework for finding approximately optimal preconditioners for solving linear systems. Leveraging this framework we obtain improved runtimes for fundamental preconditioning and linear system solving problems, including the following. We give an algorithm which, given positive definite K ∈ R^{d×d} with nnz(K) nonzero entries, computes an ε-optimal diagonal preconditioner in time Õ(nnz(K) · poly(κ*, ε^{-1})), where κ* is the optimal condition number of the rescaled matrix. We give an algorithm which, given M ∈ R^{d×d} that is either the pseudoinverse of a graph Laplacian matrix or a constant spectral approximation of one, solves linear systems in M in Õ(d^2) time. Our diagonal preconditioning results improve state-of-the-art runtimes of Ω(d^{3.5}) attained by general-purpose semidefinite programming, and our solvers improve state-of-the-art runtimes of Ω(d^ω), where ω > 2.3 is the current matrix multiplication constant. We attain our results via new algorithms for a class of semidefinite programs (SDPs) we call matrix-dictionary approximation SDPs, which we leverage to solve an associated problem we call matrix-dictionary recovery. Comment: Merge of arXiv:1812.06295 and arXiv:2008.0172
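    For contrast with the ε-optimal diagonal preconditioner above, the classical Jacobi baseline simply rescales by the inverse square roots of the diagonal. A small numpy sketch (the example matrix is made up, and this baseline is not the paper's algorithm):

```python
import numpy as np

def jacobi_precondition(K):
    """Symmetric Jacobi preconditioning of a positive definite K:
    form D^{-1/2} K D^{-1/2} with D = diag(K). Classical baseline,
    not the eps-optimal diagonal preconditioner of the paper."""
    d = 1.0 / np.sqrt(np.diag(K))
    return d[:, None] * K * d[None, :]

def cond(K):
    """Condition number of a symmetric positive definite matrix."""
    ev = np.linalg.eigvalsh(K)   # eigenvalues in ascending order
    return ev[-1] / ev[0]

# A badly scaled PD matrix: Jacobi rescaling shrinks its condition number
# from roughly 1e6 down to 1.1/0.9.
K = np.array([[1e4, 1.0], [1.0, 1e-2]])
```

    Finding the diagonal rescaling that is within a factor (1+ε) of the best possible condition number κ* is exactly the harder problem the paper's SDP framework addresses.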

    Implicit Loss of Surjectivity and Facial Reduction: Theory and Applications

    Facial reduction, pioneered by Borwein and Wolkowicz, is a preprocessing method that is commonly used to obtain strict feasibility in the reformulated, reduced constraint system. The importance of strict feasibility is often addressed in the context of the convergence results for interior point methods. Beyond the theoretical properties that facial reduction conveys, we show that facial reduction, not limited to interior point methods, leads to strong numerical performance in different classes of algorithms. In this thesis we study various consequences and the broad applicability of facial reduction. The thesis is organized in two parts. In the first part, we show the instabilities that accompany the absence of strict feasibility through the lens of facially reduced systems. In particular, we exploit the implicit redundancies, revealed by each nontrivial facial reduction step, that result in the implicit loss of surjectivity. This leads to the two-step facial reduction and two novel related notions of singularity. For the area of semidefinite programming, we use these singularities to strengthen a known bound on the solution rank, the Barvinok-Pataki bound. For the area of linear programming, we reveal degeneracies caused by the implicit redundancies. Furthermore, we propose a preprocessing tool that uses the simplex method. In the second part of this thesis, we continue with the semidefinite programs that do not have strictly feasible points. We focus on the doubly-nonnegative relaxation of the binary quadratic program and a semidefinite program with a nonlinear objective function. We closely work with two classes of algorithms, the splitting method and the Gauss-Newton interior point method. We elaborate on the advantages of building models from facial reduction. Moreover, we develop algorithms for real-world problems including the quadratic assignment problem, the protein side-chain positioning problem, and the key rate computation for quantum key distribution. Facial reduction continues to play an important role in providing robust reformulated models in both theoretical and practical aspects, resulting in successful numerical performance.
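    A single facial reduction step of the kind described can be sketched on a toy linear system {x ≥ 0 : Ax = b}. The certificate y is supplied by hand here (in practice it comes from an auxiliary problem), and all names are illustrative:

```python
import numpy as np

def facial_reduction_step(A, b, y):
    """One facial reduction step for {x >= 0 : A @ x = b}: given a
    certificate y with A.T @ y >= 0 and b @ y == 0, every feasible x
    satisfies (A.T @ y) @ x = y @ b = 0, so x must vanish wherever
    A.T @ y is positive; those variables are fixed to zero and their
    columns dropped."""
    s = A.T @ y
    assert np.all(s >= -1e-12) and abs(b @ y) < 1e-12, "not a valid certificate"
    keep = s <= 1e-12          # variables not forced to zero
    return A[:, keep], keep

# x1 + x2 = 0 with x >= 0 has no strictly feasible point: y = (1, 0)
# certifies x1 = x2 = 0 on the feasible set, leaving only x3.
A = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
b = np.array([0.0, 2.0])
```

    After the step, the first row of the reduced system reads 0 = 0: an implicit redundancy of exactly the kind the thesis exploits, reflecting the implicit loss of surjectivity of the reduced constraint map.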

    Analog Photonics Computing for Information Processing, Inference and Optimisation

    This review presents an overview of the current state-of-the-art in photonics computing, which leverages photons, photons coupled with matter, and optics-related technologies for effective and efficient computational purposes. It covers the history and development of photonics computing and modern analogue computing platforms and architectures, focusing on optimization tasks and neural network implementations. The authors examine special-purpose optimizers, mathematical descriptions of photonics optimizers, and their various interconnections. Disparate applications are discussed, including direct encoding, logistics, finance, phase retrieval, machine learning, neural networks, probabilistic graphical models, and image processing, among many others. The main directions of technological advancement and associated challenges in photonics computing are explored, along with an assessment of its efficiency. Finally, the paper discusses prospects and the field of optical quantum computing, providing insights into the potential applications of this technology. Comment: Invited submission by Journal of Advanced Quantum Technologies; accepted version 5/06/202