
    LIPIcs, Volume 251, ITCS 2023, Complete Volume


    LIPIcs, Volume 261, ICALP 2023, Complete Volume


    Algorithms for Near-Term and Noisy Quantum Devices

    Quantum computing promises to revolutionise many fields, including chemical simulation and machine learning. At present those promises have not been realised, owing to the large resource requirements of fault-tolerant quantum computers and the scientific and engineering challenges of building one. Instead, we currently have access to quantum devices that are both limited in qubit number and have noisy qubits. This thesis deals with the challenges that these devices present by investigating applications in quantum simulation for molecules and solid-state systems and in quantum machine learning, and by presenting a detailed simulation of a real ion trap device. We first build on a previous algorithm for state discrimination using a quantum machine learning model, and we show how to adapt the algorithm to work on a noisy device. When run on a noisy device, this algorithm outperforms the analytical best POVM. We then discuss how to build a quantum perceptron, the building block of a quantum neural network. We also present an algorithm for simulating two-site Dynamical Mean Field Theory (DMFT) on a quantum device, discuss some of the difficulties found in scaling up that system, and present an algorithm for building the DMFT ansatz using the quantum device. We also discuss modifications to the algorithm that make it more ‘device-aware’. Finally, we present a pulse-level simulation of the noise in an ion trap device, designed to match the specifications of a device at the National Physical Laboratory (NPL), which we can use to direct future experimental focus. Each of these sections is preceded by a review of the relevant literature.
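
    As a point of reference for the 'analytical best POVM' mentioned above (the Helstrom measurement), the following is a minimal numerical sketch, not the thesis's variational algorithm: it computes the Helstrom success probability for discriminating two non-orthogonal qubit states and shows how a depolarizing channel on the inputs degrades it. The example states, noise levels, and helper names are illustrative assumptions.

        import numpy as np

        def dm(psi):
            """Density matrix |psi><psi| of a (normalised) state vector."""
            psi = np.asarray(psi, dtype=complex)
            psi = psi / np.linalg.norm(psi)
            return np.outer(psi, psi.conj())

        def depolarize(rho, p):
            """Single-qubit depolarizing channel: keep rho with prob 1-p, else I/2."""
            return (1 - p) * rho + p * np.eye(2) / 2

        def helstrom_success(rho0, rho1, prior0=0.5):
            """Optimal discrimination probability: 1/2 (1 + ||p0 rho0 - p1 rho1||_1)."""
            gamma = prior0 * rho0 - (1 - prior0) * rho1
            trace_norm = np.sum(np.abs(np.linalg.eigvalsh(gamma)))
            return 0.5 * (1 + trace_norm)

        rho0 = dm([1.0, 0.0])                               # |0>
        rho1 = dm([np.cos(np.pi / 8), np.sin(np.pi / 8)])   # non-orthogonal to |0>

        for noise in (0.0, 0.1, 0.3):                       # illustrative noise levels
            p = helstrom_success(depolarize(rho0, noise), depolarize(rho1, noise))
            print(f"depolarizing p = {noise:.1f}: optimal success probability = {p:.4f}")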

    LIPIcs, Volume 274, ESA 2023, Complete Volume


    Sublinear Computation Paradigm

    This open access book gives an overview of cutting-edge work on a new paradigm called the “sublinear computation paradigm,” which was proposed in the large multiyear academic research project “Foundations of Innovative Algorithms for Big Data.” That project ran in Japan from October 2014 to March 2020. To handle the unprecedented explosion of big data sets in research, industry, and other areas of society, there is an urgent need to develop novel methods and approaches for big data analysis. To meet this need, innovative changes in algorithm theory for big data are being pursued. For example, polynomial-time algorithms have thus far been regarded as “fast,” but if a quadratic-time algorithm is applied to a petabyte-scale or larger big data set, problems are encountered in terms of computational resources or running time. To deal with this critical computational and algorithmic bottleneck, linear-, sublinear-, and constant-time algorithms are required. The sublinear computation paradigm is proposed here in order to support innovation in the big data era. A foundation of innovative algorithms has been created by developing computational procedures, data structures, and modelling techniques for big data. The project was organized into three teams focusing on sublinear algorithms, sublinear data structures, and sublinear modelling. The work has provided high-level academic research results of strong computational and algorithmic interest, which are presented in this book. The book consists of five parts: Part I consists of a single chapter on the concept of the sublinear computation paradigm; Parts II, III, and IV review results on sublinear algorithms, sublinear data structures, and sublinear modelling, respectively; Part V presents application results. The information presented here will inspire researchers working in the field of modern algorithms.
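
    For a flavour of what "sublinear" means in practice, here is a minimal sketch, not taken from the book: a global statistic of a massive data set is estimated from a random sample whose size depends only on the desired accuracy and confidence, not on the size of the data. The data set, predicate, and parameter values are illustrative.

        import math
        import random

        def estimate_fraction(data, predicate, eps=0.01, delta=0.01):
            """Additive-eps estimate, correct with prob >= 1 - delta (Hoeffding bound)."""
            s = math.ceil(math.log(2 / delta) / (2 * eps ** 2))   # sample size: no n here
            hits = sum(predicate(data[random.randrange(len(data))]) for _ in range(s))
            return hits / s

        # Estimate the fraction of even numbers in a 10-million element list by
        # touching only a few thousand positions rather than all of them.
        data = list(range(10_000_000))
        print(estimate_fraction(data, lambda x: x % 2 == 0, eps=0.02, delta=0.01))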

    Quantum and Classical Multilevel Algorithms for (Hyper)Graphs

    Combinatorial optimization problems on (hyper)graphs are ubiquitous in science and industry. Because many of these problems are NP-hard, development of sophisticated heuristics is of utmost importance for practical problems. In recent years, the emergence of Noisy Intermediate-Scale Quantum (NISQ) computers has opened up the opportunity to dramatically speed up combinatorial optimization. However, the adoption of NISQ devices is impeded by their severe limitations, both in the number of qubits and in their quality. NISQ devices are widely expected to have no more than hundreds to thousands of qubits with very limited error correction, imposing a strict limit on the size and the structure of the problems that can be tackled directly. A natural solution to this issue is hybrid quantum-classical algorithms that combine a NISQ device with a classical machine with the goal of capturing “the best of both worlds”. Motivated by the lack of high-quality optimization solvers for hypergraph partitioning, in this thesis we begin by discussing classical multilevel approaches for this problem. We present a novel relaxation-based vertex similarity measure termed algebraic distance for hypergraphs, and coarsening schemes based on it. Extending the multilevel method to include quantum optimization routines, we present Quantum Local Search (QLS) – a hybrid iterative improvement approach inspired by classical local search approaches. Next, we introduce Multilevel Quantum Local Search (ML-QLS), which incorporates the quantum-enhanced iterative improvement scheme introduced in QLS within the multilevel framework, as well as several techniques to further understand and improve the effectiveness of the Quantum Approximate Optimization Algorithm (QAOA) used throughout our work.
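
    For intuition about relaxation-based algebraic distance, the following is a heavily simplified sketch under the assumption that hyperedges can be handled by pulling each member vertex toward the hyperedge's mean test value; the thesis's actual measure and coarsening schemes may differ, and all names, parameters, and the toy hypergraph are illustrative.

        import numpy as np

        def algebraic_distance(num_vertices, hyperedges, iters=20, omega=0.5,
                               trials=5, seed=0):
            """Return dist(u, v) built from several relaxed random test vectors."""
            rng = np.random.default_rng(seed)
            vectors = []
            for _ in range(trials):
                x = rng.uniform(-0.5, 0.5, num_vertices)
                for _ in range(iters):
                    acc = np.zeros(num_vertices)
                    cnt = np.zeros(num_vertices)
                    for e in hyperedges:
                        m = x[list(e)].mean()      # pull members toward hyperedge mean
                        for v in e:
                            acc[v] += m
                            cnt[v] += 1
                    x = (1 - omega) * x + omega * acc / np.maximum(cnt, 1)
                vectors.append(x)
            vectors = np.array(vectors)
            return lambda u, v: float(np.abs(vectors[:, u] - vectors[:, v]).max())

        # Two tightly coupled clusters joined by one bridging hyperedge.
        H = [{0, 1, 2}, {0, 1}, {1, 2}, {3, 4, 5}, {3, 4}, {4, 5}, {2, 3}]
        dist = algebraic_distance(6, H)
        print(dist(0, 1), dist(2, 3))   # the intra-cluster pair is typically "closer"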

    Compressing and Performing Algorithms on Massively Large Networks

    Networks are represented as a set of nodes (vertices) and the arcs (links) connecting them. Such networks can model various real-world structures such as social networks (e.g., Facebook), information networks (e.g., citation networks), technological networks (e.g., the Internet), and biological networks (e.g., gene-phenotype networks). Analysis of such structures is a heavily studied area with many applications. However, in this era of big data, we find ourselves with networks so massive that the space requirements inhibit network analysis. Since many of these networks have nodes and arcs on the order of billions to trillions, even basic data structures such as adjacency lists could cost petabytes to zettabytes of storage. Storing these networks in secondary memory would require I/O access (i.e., disk access) during analysis, thus drastically slowing analysis time. To perform analysis efficiently on such extensive data, we either need enough main memory for the data structures and algorithms, or we need to develop compressions that require much less space while still being able to answer queries efficiently. In this dissertation, we develop several compression techniques that succinctly represent these real-world networks while still being able to efficiently query the network (e.g., checking whether an arc exists between two nodes). Furthermore, since many of these networks continue to grow over time, our compression techniques also support the ability to add and remove nodes and edges directly on the compressed structure. We also provide a way to compress the data quickly without any intermediate structure, thus giving minimal memory overhead. We provide detailed analysis and prove that our compression is indeed succinct (i.e., achieves the information-theoretic lower bound). Also, we empirically show that our compression rates outperform or equal existing compression algorithms on many benchmark datasets. We also extend our technique to time-evolving networks, storing the entire state of the network at each time frame. Studying time-evolving networks allows us to find patterns over time that would not be visible in regular, static network analysis. A succinct representation for time-evolving networks is arguably more important than for static graphs, because the extra dimension inflates the space requirements of basic data structures even more. Again, we manage to achieve succinctness while also providing fast encoding, minimal memory overhead during encoding, fast queries, and fast, direct modification. We also compare against several benchmarks and empirically show that we achieve compression rates better than or equal to the best-performing benchmark for each dataset. Finally, we also develop both static and time-evolving algorithms that run directly on our compressed structures. Using our static graph compression combined with our differential technique, we find that we can speed up matrix-vector multiplication by reusing previously computed products. We compare our results against a similar technique using the Webgraph Framework, and we see that not only are our base query speeds faster, but we also gain a more significant speed-up from reusing products. Then, we use our time-evolving compression to solve the earliest arrival paths problem and time-evolving transitive closure. We found not only that we were the first to run such algorithms directly on compressed data, but also that our technique was particularly efficient at doing so.
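
    As classical background only, not the dissertation's succinct structure, the following sketch shows one baseline idea for compressing adjacency lists: gap encoding of sorted neighbour lists with variable-byte codes, plus an "arc exists" query that decodes a single list. The class name and example graph are made up for illustration.

        from typing import Dict, List

        def vbyte_encode(values: List[int]) -> bytes:
            out = bytearray()
            for v in values:
                while v >= 0x80:
                    out.append((v & 0x7F) | 0x80)   # 7 payload bits + continuation flag
                    v >>= 7
                out.append(v)
            return bytes(out)

        def vbyte_decode(buf: bytes) -> List[int]:
            vals, cur, shift = [], 0, 0
            for b in buf:
                cur |= (b & 0x7F) << shift
                if b & 0x80:
                    shift += 7
                else:
                    vals.append(cur)
                    cur, shift = 0, 0
            return vals

        class CompressedGraph:
            """Adjacency lists stored as var-byte encoded gaps of sorted neighbours."""

            def __init__(self, adjacency: Dict[int, List[int]]):
                self.lists = {}
                for u, nbrs in adjacency.items():
                    nbrs = sorted(nbrs)
                    if nbrs:
                        gaps = [nbrs[0]] + [b - a for a, b in zip(nbrs, nbrs[1:])]
                    else:
                        gaps = []
                    self.lists[u] = vbyte_encode(gaps)

            def has_arc(self, u: int, v: int) -> bool:
                total = 0
                for gap in vbyte_decode(self.lists.get(u, b"")):
                    total += gap
                    if total == v:
                        return True
                    if total > v:
                        return False
                return False

        g = CompressedGraph({0: [1, 5, 1_000_000], 1: [0], 5: [0]})
        print(g.has_arc(0, 5), g.has_arc(0, 7))   # True False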

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany


    Quantum Algorithms for Scientific Computing and Approximate Optimization

    Quantum computation appears to offer significant advantages over classical computation, and this has generated tremendous interest in the field. In this thesis we study the application of quantum computers to computational problems in science and engineering, and to combinatorial optimization problems. We outline the results below. Algorithms for scientific computing require modules, i.e., building blocks, implementing elementary numerical functions that have well-controlled numerical error, are uniformly scalable and reversible, and can be implemented efficiently. We derive quantum algorithms and circuits for computing square roots, logarithms, and arbitrary fractional powers, and derive worst-case error and cost bounds. We describe a modular approach to quantum algorithm design as a first step towards numerical standards and mathematical libraries for quantum scientific computing. A fundamental but computationally hard problem in physics is to solve the time-independent Schrödinger equation. This is accomplished by computing the eigenvalues of the corresponding Hamiltonian operator. The eigenvalues describe the different energy levels of a system. The cost of classical deterministic algorithms computing these eigenvalues grows exponentially with the number of system degrees of freedom, which is typically proportional to the number of particles in a physical system. We show an efficient quantum algorithm for approximating a constant number of low-order eigenvalues of a Hamiltonian using a perturbation approach. We apply this algorithm to a special case of the Schrödinger equation and show that our algorithm succeeds with high probability, with cost that scales polynomially with the number of degrees of freedom and the reciprocal of the desired accuracy. This improves and extends earlier results on quantum algorithms for estimating the ground state energy. We consider the simulation of quantum mechanical systems on a quantum computer and show a novel divide-and-conquer approach for Hamiltonian simulation. Using the Hamiltonian structure, we can obtain faster simulation algorithms: considering a sum of Hamiltonians, we split them into groups, simulate each group separately, and combine the partial results. Simulation is customized to take advantage of the properties of each group, and hence yields refined bounds on the overall simulation cost. We illustrate our results using the electronic structure problem of quantum chemistry, where we obtain significantly improved cost estimates under mild assumptions. We then turn to combinatorial optimization problems. An important open question is whether quantum computers provide advantages for the approximation of classically hard combinatorial problems. A promising approach recently proposed by Farhi et al. is the Quantum Approximate Optimization Algorithm (QAOA). We study the application of QAOA to the Maximum Cut problem, and derive analytic performance bounds for the lowest circuit-depth realization, for both general and special classes of graphs. Along the way, we develop a general procedure for analyzing the performance of QAOA for other problems, and show an example demonstrating the difficulty of obtaining similar results for greater depth. We show a generalization of QAOA and its application to wider classes of combinatorial optimization problems, in particular problems with feasibility constraints. We introduce the Quantum Alternating Operator Ansatz, which utilizes more general unitary operators than the original QAOA proposal. Our framework facilitates low-resource implementations for many applications, which may be particularly suitable for early quantum computers. We specify design criteria, and develop a set of results and tools for mapping diverse problems to explicit quantum circuits. We derive constructions for several important prototypical problems, including Maximum Independent Set, Graph Coloring, and the Traveling Salesman problem, and show appealing resource cost estimates for their implementations.
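
    To make the lowest-depth (p = 1) QAOA setting concrete, the sketch below is an illustrative brute-force statevector simulation, not the analytic bounds derived in the thesis: it builds the depth-1 QAOA state for MaxCut on a small example graph and grid-searches the two angles for the best expected cut. The graph and grid resolution are arbitrary choices.

        import numpy as np

        # Illustrative graph: the 5-cycle; the thesis's analytic results cover
        # general and special graph classes, this is just one concrete instance.
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
        n = 5
        dim = 1 << n

        # Cut value of every computational-basis bit string.
        idx = np.arange(dim)
        bits = (idx[:, None] >> np.arange(n)) & 1            # shape (dim, n)
        cut = np.zeros(dim)
        for u, v in edges:
            cut += bits[:, u] ^ bits[:, v]

        def qaoa_p1_expected_cut(gamma, beta):
            """Expected cut of e^{-i beta B} e^{-i gamma C} |+...+> (p = 1)."""
            psi = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)
            psi = psi * np.exp(-1j * gamma * cut)            # phase separator
            c, s = np.cos(beta), np.sin(beta)
            for q in range(n):                               # mixer e^{-i beta X_q}
                psi = psi.reshape(-1, 2, 1 << q)
                a, b = psi[:, 0, :].copy(), psi[:, 1, :].copy()
                psi[:, 0, :] = c * a - 1j * s * b
                psi[:, 1, :] = -1j * s * a + c * b
                psi = psi.reshape(dim)
            return float(np.sum(np.abs(psi) ** 2 * cut))

        # Coarse grid search over the two angles.
        best = max(
            (qaoa_p1_expected_cut(g, b), g, b)
            for g in np.linspace(0, np.pi, 60)
            for b in np.linspace(0, np.pi, 60)
        )
        print(f"best p=1 expected cut ~= {best[0]:.3f} (max cut is {int(cut.max())})")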