    Improved Quantum Boosting

    Boosting is a general method to convert a weak learner (which generates hypotheses that are just slightly better than random) into a strong learner (which generates hypotheses that are much better than random). Recently, Arunachalam and Maity [5] gave the first quantum improvement for boosting, by combining Freund and Schapire's AdaBoost algorithm with a quantum algorithm for approximate counting. Their booster is faster than classical boosting as a function of the VC-dimension of the weak learner's hypothesis class, but worse as a function of the quality of the weak learner. In this paper we give a substantially faster and simpler quantum boosting algorithm, based on Servedio's SmoothBoost algorithm [22].
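    As a point of reference, here is a minimal classical AdaBoost loop of the kind these quantum boosters accelerate; the `weak_learner` interface is an illustrative assumption, not the paper's quantum subroutine.

```python
import numpy as np

def adaboost(X, y, weak_learner, T):
    """Classical AdaBoost (Freund & Schapire); labels y are +/-1.

    weak_learner(X, y, w) is assumed to return a hypothesis h (a
    callable mapping examples to +/-1) whose weighted error under
    the distribution w is slightly below 1/2.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)              # distribution over examples
    hypotheses, alphas = [], []
    for _ in range(T):
        h = weak_learner(X, y, w)
        pred = h(X)                      # +/-1 predictions on the sample
        err = w[pred != y].sum()         # weighted error this round
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified examples
        w /= w.sum()
        hypotheses.append(h)
        alphas.append(alpha)
    # Strong classifier: sign of the weighted majority vote
    return lambda Xq: np.sign(sum(a * h(Xq) for a, h in zip(alphas, hypotheses)))
```

    (SmoothBoost replaces this aggressive reweighting with updates that keep the distribution smooth, never concentrating too much weight on any single example, which is the property the quantum speedup exploits.)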

    Quantum Boosting using Domain-Partitioning Hypotheses

    Boosting is an ensemble learning method that converts a weak learner into a strong learner in the PAC learning framework. Freund and Schapire gave the first classical boosting algorithm for binary hypotheses, known as AdaBoost, and this was recently adapted into a quantum boosting algorithm by Arunachalam et al. Their quantum boosting algorithm (which we refer to as Q-AdaBoost) is quadratically faster than the classical version in terms of the VC-dimension of the hypothesis class of the weak learner, but polynomially worse in the bias of the weak learner. In this work we design a different quantum boosting algorithm that uses domain-partitioning hypotheses, which are significantly more flexible than those used in prior quantum boosting algorithms in terms of margin calculations. Our algorithm, Q-RealBoost, is inspired by the "Real AdaBoost" (a.k.a. RealBoost) extension of the original AdaBoost algorithm. Further, we show that Q-RealBoost provides a polynomial speedup over Q-AdaBoost in terms of both the bias of the weak learner and the time taken by the weak learner to learn the target concept class. Comment: 24 pages, 3 figures, 1 table
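    To make "domain-partitioning hypotheses" concrete, the sketch below shows one classical Real AdaBoost round in the style of Schapire and Singer, where a hypothesis partitions the domain into blocks and outputs a real-valued confidence per block; the interface is illustrative, not the paper's quantum procedure.

```python
import numpy as np

def realboost_round(partition, y, w, eps=1e-12):
    """One Real AdaBoost round with a domain-partitioning hypothesis.

    partition[s] is the block (e.g. the leaf of a decision stump)
    containing example s; y[s] is in {-1,+1}; w is the current
    distribution. Returns per-block confidences and updated weights.
    """
    conf = {}
    for j in np.unique(partition):
        mask = partition == j
        w_pos = w[mask & (y == +1)].sum()    # weighted positive mass in block j
        w_neg = w[mask & (y == -1)].sum()    # weighted negative mass in block j
        # Real-valued output: half the log-odds of the labels in the block
        conf[j] = 0.5 * np.log((w_pos + eps) / (w_neg + eps))
    margin = np.array([conf[j] for j in partition])
    w_new = w * np.exp(-y * margin)          # margin-based reweighting
    return conf, w_new / w_new.sum()
```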

    Quantum algorithm for robust optimization via stochastic-gradient online learning

    Optimization theory has been widely studied in academia and finds a large variety of applications in industry. Optimization models in their discrete and/or continuous settings have provided a rich source of research problems. Robust convex optimization is a branch of optimization theory in which the variables or parameters involved have a certain level of uncertainty. In this work, we consider the online robust optimization meta-algorithm by Ben-Tal et al. and show that for a large range of stochastic subgradients, this algorithm has the same guarantee as the original non-stochastic version. We develop a quantum version of this algorithm and show that an at most quadratic improvement in terms of the dimension can be achieved. The speedup is due to the use of quantum state preparation, quantum norm estimation, and quantum multi-sampling. We apply our quantum meta-algorithm to examples such as robust linear programs and robust semidefinite programs, and give applications of these robust optimization problems in finance and engineering. Comment: 21 pages
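    A rough classical sketch of the meta-algorithm's structure may help: the uncertain parameter is updated by online (stochastic) subgradient ascent while a nominal oracle responds to each update. All interfaces below (`oracle`, `grad_u`, `project_U`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def robust_meta(oracle, grad_u, project_U, u0, eta, T):
    """Sketch of an online-learning meta-algorithm for robust
    optimization in the spirit of Ben-Tal et al.

    oracle(u): returns an x feasible for the problem with the
               uncertainty fixed at u (the "primal player").
    grad_u(x, u): a possibly *stochastic* subgradient of the
               constraint violation with respect to u.
    project_U(u): projection back onto the uncertainty set.
    """
    u, xs = u0, []
    for _ in range(T):
        x = oracle(u)                 # primal response to current adversary
        g = grad_u(x, u)              # noisy subgradient (the paper's focus)
        u = project_U(u + eta * g)    # online gradient ascent over U
        xs.append(x)
    return np.mean(xs, axis=0)        # average iterate is approximately robust
```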

    Quantum adiabatic machine learning

    We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. In the training phase we identify an optimal set of weak classifiers that together form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the training and testing phases are executed via quantum adiabatic evolution. We apply and illustrate this approach in detail to the problem of software verification and validation. Comment: 21 pages, 9 figures
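    Training of this kind can be phrased as a binary optimization problem handed to the adiabatic machine: choose a subset of weak classifiers minimizing training error plus a sparsity penalty. Below is a hedged sketch of building such a QUBO cost matrix; the weighting conventions are illustrative, not the paper's exact Hamiltonian.

```python
import numpy as np

def weak_classifier_qubo(H, y, lam):
    """Build a QUBO whose minimizer selects weak classifiers.

    H[i, s] in {-1,+1}: output of weak classifier i on sample s.
    y[s] in {-1,+1}: label of sample s.
    Minimizing ||H.T @ w / N - y||^2 + lam * sum(w) over binary w
    is the kind of cost an adiabatic optimizer is handed.
    """
    N, _ = H.shape
    Q = (H @ H.T) / (N * N)                      # couplings between w_i and w_j
    linear = -2.0 * (H @ y) / N + lam            # per-classifier bias terms
    np.fill_diagonal(Q, Q.diagonal() + linear)   # fold biases in: w_i^2 = w_i
    return Q                                     # minimize w^T Q w, w in {0,1}^N
```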

    Quantum exploration algorithms for multi-armed bandits

    Identifying the best arm of a multi-armed bandit is a central problem in bandit optimization. We study a quantum computational version of this problem with coherent oracle access to states encoding the reward probabilities of each arm as quantum amplitudes. Specifically, we show that we can find the best arm with fixed confidence using $\tilde{O}\bigl(\sqrt{\sum_{i=2}^{n} \Delta_i^{-2}}\bigr)$ quantum queries, where $\Delta_i$ represents the difference between the mean reward of the best arm and the $i^{\text{th}}$-best arm. This algorithm, based on variable-time amplitude amplification and estimation, gives a quadratic speedup compared to the best possible classical result. We also prove a matching quantum lower bound (up to poly-logarithmic factors). Comment: 18 pages, 1 figure. To appear in the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021).
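    For contrast with the quantum bound, the classical baseline below (successive elimination) has sample complexity scaling, up to logarithmic factors, as $\sum_{i=2}^{n} \Delta_i^{-2}$, the square of the quantum query count. The `pull(i)` interface returning a reward in [0, 1] is an illustrative assumption.

```python
import numpy as np

def successive_elimination(pull, n_arms, delta, max_rounds=10**6):
    """Classical best-arm identification with fixed confidence delta.

    Sample complexity scales (up to logs) as sum_i Delta_i^{-2};
    the quantum algorithm needs roughly the square root of that
    many queries. pull(i) returns a reward in [0, 1].
    """
    active = list(range(n_arms))
    means = np.zeros(n_arms)
    for t in range(1, max_rounds + 1):
        for i in active:                  # sample each surviving arm once
            means[i] += (pull(i) - means[i]) / t
        rad = np.sqrt(np.log(4 * n_arms * t * t / delta) / (2 * t))
        leader = max(active, key=lambda i: means[i])
        # Drop arms whose upper confidence bound is below the leader's lower bound
        active = [i for i in active if means[i] + rad >= means[leader] - rad]
        if len(active) == 1:
            return active[0]
    return max(active, key=lambda i: means[i])
```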

    Quantum computing for finance

    Quantum computers are expected to surpass the computational capabilities of classical computers and have a transformative impact on numerous industry sectors. We present a comprehensive summary of the state of the art of quantum computing for financial applications, with particular emphasis on stochastic modeling, optimization, and machine learning. This Review is aimed at physicists, so it outlines the classical techniques used by the financial industry and discusses the potential advantages and limitations of quantum techniques. Finally, we look at the challenges that physicists could help tackle.

    Quantum algorithms for matrix scaling and matrix balancing

    Matrix scaling and matrix balancing are two basic linear-algebraic problems with a wide variety of applications, such as approximating the permanent and pre-conditioning linear systems to make them more numerically stable. We study the power and limitations of quantum algorithms for these problems. We provide quantum implementations of two classical (in both senses of the word) methods: Sinkhorn's algorithm for matrix scaling and Osborne's algorithm for matrix balancing. Using amplitude estimation as our main tool, our quantum implementations both run in time Õ(√(mn)/ε^4) for scaling or balancing an n×n matrix (given by an oracle) with m non-zero entries to within ℓ1-error ε. Their classical analogs use time Õ(m/ε^2), and every classical algorithm for scaling or balancing with small constant ε requires Ω(m) queries to the entries of the input matrix. We thus achieve a polynomial speed-up in terms of n, at the expense of a worse polynomial dependence on the obtained ℓ1-error ε. Even for constant ε these problems are already non-trivial (and relevant in applications). Along the way, we extend the classical analysis of Sinkhorn's and Osborne's algorithms to allow for errors in the computation of marginals. We also adapt an improved analysis of Sinkhorn's algorithm for entrywise-positive matrices to the ℓ1-setting, obtaining an Õ(n^1.5/ε^3)-time quantum algorithm for ε-ℓ1-scaling. We also prove a lower bound, showing that our quantum algorithm for matrix scaling is essentially optimal for constant ε: every quantum algorithm for matrix scaling that achieves a constant ℓ1-error w.r.t. uniform marginals needs Ω(√(mn)) queries.
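    For reference, the classical Sinkhorn iteration that the quantum algorithm builds on alternates between fixing the row and column marginals; in the quantum version the marginals are only computed approximately via amplitude estimation, which is why the error analysis must tolerate noisy marginals. A minimal sketch, with illustrative variable names:

```python
import numpy as np

def sinkhorn_scaling(A, r, c, eps, max_iter=10000):
    """Classical Sinkhorn iteration for matrix scaling.

    Finds positive vectors x, y such that diag(x) @ A @ diag(y) has
    row marginals ~ r and column marginals ~ c, stopping when the
    total l1-error drops below eps. The quantum algorithm replaces
    the exact marginal computations with amplitude-estimation-based
    approximations.
    """
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(max_iter):
        B = A * np.outer(x, y)                      # current scaled matrix
        err = (np.abs(B.sum(axis=1) - r).sum()
               + np.abs(B.sum(axis=0) - c).sum())   # l1 distance to marginals
        if err <= eps:
            break
        x = r / (A @ y)        # rescale rows to match r exactly
        y = c / (A.T @ x)      # then rescale columns to match c
    return x, y
```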