
    Quantum algorithm for robust optimization via stochastic-gradient online learning

    Optimization theory has been widely studied in academia and finds a large variety of applications in industry. Optimization models in their discrete and/or continuous settings have provided a rich source of research problems. Robust convex optimization is a branch of optimization theory in which the variables or parameters involved carry a certain level of uncertainty. In this work, we consider the online robust optimization meta-algorithm of Ben-Tal et al. and show that, for a large range of stochastic subgradients, this algorithm has the same guarantee as the original non-stochastic version. We develop a quantum version of this algorithm and show that an at most quadratic improvement in terms of the dimension can be achieved. The speedup is due to the use of quantum state preparation, quantum norm estimation, and quantum multi-sampling. We apply our quantum meta-algorithm to examples such as robust linear programs and robust semidefinite programs, and we give applications of these robust optimization problems in finance and engineering.

    Comment: 21 pages
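    The classical skeleton of such an online-learning meta-algorithm can be sketched as follows; this is a minimal illustration inferred from the abstract, not the paper's quantum algorithm. Here solve_nominal is a hypothetical oracle for the non-robust problem, the uncertainty set is taken to be a Euclidean ball for concreteness, and the quantum subroutines (state preparation, norm estimation, multi-sampling) are not modeled.

        import numpy as np

        def project_ball(u, center, radius):
            # Euclidean projection onto the ball {u : ||u - center|| <= radius}.
            d = u - center
            n = np.linalg.norm(d)
            return u if n <= radius else center + radius * d / n

        def robust_lp_meta(solve_nominal, c, b, a0, radius, T=200, eta=0.05):
            # Online-learning meta-algorithm in the spirit of Ben-Tal et al.:
            # an adversary updates the uncertain constraint vector a_t by
            # (possibly stochastic) subgradient ascent on the violation of
            # a_t^T x <= b, the learner re-solves the nominal problem, and
            # the learner's iterates are averaged.
            a = a0.copy()
            xs = []
            for _ in range(T):
                x = solve_nominal(c, a, b)   # hypothetical oracle: min c^T x s.t. a^T x <= b
                xs.append(x)
                grad = x                     # subgradient of a |-> a^T x - b
                a = project_ball(a + eta * grad, a0, radius)
            return np.mean(xs, axis=0)       # averaged iterate is the robust candidate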

    Natural evolution strategies and variational Monte Carlo

    A notion of quantum natural evolution strategies is introduced, which provides a geometric synthesis of a number of known quantum/classical algorithms for performing classical black-box optimization. Recent work of Gomes et al. [2019] on heuristic combinatorial optimization using neural quantum states is pedagogically reviewed in this context, emphasizing the connection with natural evolution strategies. The algorithmic framework is illustrated on approximate combinatorial optimization problems, and a systematic strategy is found for improving the approximation ratios. In particular, it is found that natural evolution strategies can achieve approximation ratios competitive with widely used heuristic algorithms for Max-Cut, at the expense of increased computation time.
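    As a point of reference, a plain classical natural-evolution-strategies loop for Max-Cut can be written in a few lines; this is a hedged sketch that replaces the paper's neural-quantum-state ansatz with an independent product-Bernoulli search distribution, for which the Fisher information is diagonal and the natural gradient is cheap to apply.

        import numpy as np

        def maxcut_value(W, s):
            # Cut value of a +/-1 spin assignment s on a symmetric weight matrix W.
            return 0.25 * np.sum(W * (1.0 - np.outer(s, s)))

        def nes_maxcut(W, steps=500, pop=64, lr=0.1, seed=None):
            # Natural evolution strategies with a product-Bernoulli distribution:
            # theta_i is the logit of P(s_i = +1); the gradient of the expected
            # cut value is estimated with the score-function (log-likelihood) trick.
            rng = np.random.default_rng(seed)
            n = W.shape[0]
            theta = np.zeros(n)
            best_s, best_val = None, -np.inf
            for _ in range(steps):
                p = 1.0 / (1.0 + np.exp(-theta))
                samples = np.where(rng.random((pop, n)) < p, 1.0, -1.0)
                vals = np.array([maxcut_value(W, s) for s in samples])
                if vals.max() > best_val:
                    best_val, best_s = vals.max(), samples[vals.argmax()]
                adv = (vals - vals.mean()) / (vals.std() + 1e-8)  # fitness shaping
                # Score function for a Bernoulli in +/-1 coding: (s+1)/2 - p.
                grad = (((samples + 1.0) / 2.0 - p) * adv[:, None]).mean(axis=0)
                # The diagonal Fisher information p(1-p) turns the plain
                # gradient into the natural gradient by elementwise rescaling.
                theta += lr * grad / (p * (1.0 - p) + 1e-8)
            return best_s, best_val

    Calling nes_maxcut on a small random symmetric weight matrix with zero diagonal illustrates the trade-off the abstract describes: the achieved cut improves with more steps and larger populations, at the cost of computation time.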

    Semistochastic Quadratic Bound Methods

    Partition functions arise in a variety of settings, including conditional random fields, logistic regression, and latent Gaussian models. In this paper, we consider semistochastic quadratic bound (SQB) methods for maximum-likelihood inference based on partition function optimization. Batch methods based on the quadratic bound were recently proposed for this class of problems and performed favorably in comparison with state-of-the-art techniques. Semistochastic methods fall between batch algorithms, which use all the data, and stochastic gradient-type methods, which use small random selections at each iteration. We build semistochastic quadratic bound-based methods and prove both global convergence (to a stationary point) under very weak assumptions and a linear convergence rate under stronger assumptions on the objective. To make the proposed methods faster and more stable, we consider inexact subproblem minimization and batch-size selection schemes. The efficacy of SQB methods is demonstrated via comparison with several state-of-the-art techniques on commonly used datasets.

    Comment: 11 pages, 1 figure
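    The semistochastic idea, specialized to binary logistic regression, can be sketched as follows; this is an assumption-laden illustration, not the paper's method. It swaps the paper's general partition-function bound for the standard 1/4-curvature quadratic majorizer of log(1 + e^z), solves each bound subproblem exactly rather than inexactly, and uses an illustrative geometric batch-growth schedule, so early iterations behave like stochastic gradient steps and late ones like batch bound minimization.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def semistochastic_logreg(X, y, iters=50, b0=32, growth=1.5, seed=None):
            # Semistochastic quadratic-bound sketch: each step minimizes a
            # quadratic upper bound on the negative log-likelihood over a
            # mini-batch whose size grows geometrically toward the full data.
            rng = np.random.default_rng(seed)
            n, d = X.shape
            w = np.zeros(d)
            batch = float(b0)
            for _ in range(iters):
                idx = rng.choice(n, size=min(int(batch), n), replace=False)
                Xb, yb = X[idx], y[idx]
                g = Xb.T @ (sigmoid(Xb @ w) - yb) / len(idx)   # batch gradient, y in {0,1}
                # log(1 + e^z) has second derivative at most 1/4, so
                # H = X^T X / 4 is a global majorizer of the batch Hessian.
                H = Xb.T @ Xb / (4.0 * len(idx)) + 1e-6 * np.eye(d)
                w -= np.linalg.solve(H, g)                      # minimize the quadratic bound
                batch *= growth                                 # grow toward the full batch
            return w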