498 research outputs found

    Convex optimization using quantum oracles


    Stochastic Subgradient Algorithms for Strongly Convex Optimization over Distributed Networks

    We study diffusion- and consensus-based optimization of a sum of unknown convex objective functions over distributed networks. The only access to these functions is through stochastic gradient oracles, each of which is available only at a different node, and a limited number of gradient oracle calls is allowed at each node. In this framework, we introduce a convex optimization algorithm based on stochastic gradient descent (SGD) updates. In particular, we use a carefully designed time-dependent weighted averaging of the SGD iterates, which yields a convergence rate of $O\left(\frac{N\sqrt{N}}{T}\right)$ after $T$ gradient updates for each node on a network of $N$ nodes. We then show that after $T$ gradient oracle calls, the average SGD iterate achieves a mean square deviation (MSD) of $O\left(\frac{\sqrt{N}}{T}\right)$. This rate of convergence is optimal as it matches the performance lower bound up to constant terms. As with the SGD algorithm, the computational complexity of the proposed algorithm scales linearly with the dimensionality of the data, and its communication load is the same as that of the SGD algorithm. Thus, the proposed algorithm is highly efficient in terms of both complexity and communication load. We illustrate the merits of the algorithm with respect to state-of-the-art methods on benchmark real-life data sets and widely studied network topologies.
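
    As a hedged illustration of the approach described above (not the paper's exact algorithm), the sketch below runs distributed SGD with consensus averaging over a ring of $N$ nodes on strongly convex quadratic local losses, and maintains a time-dependent weighted average of the iterates with weights proportional to $t$. The topology, the local losses, the step size $1/(\mu t)$, and the weighting scheme are illustrative assumptions.

    ```python
    # Illustrative sketch: distributed SGD with consensus (ring topology) plus a
    # time-weighted average of the iterates; losses, step size, and weights are
    # assumptions, not the paper's exact construction.
    import numpy as np

    rng = np.random.default_rng(0)
    N, d, T, mu = 10, 5, 2000, 1.0           # nodes, dimension, iterations, strong convexity
    targets = rng.normal(size=(N, d))        # node i holds f_i(x) = 0.5*mu*||x - targets[i]||^2
    x = np.zeros((N, d))                     # current iterate at every node
    x_avg = np.zeros((N, d))                 # time-weighted running average at every node

    # Doubly stochastic mixing matrix for a ring: average with the two neighbours.
    W = np.zeros((N, N))
    for i in range(N):
        W[i, i] = 0.5
        W[i, (i - 1) % N] = W[i, (i + 1) % N] = 0.25

    for t in range(1, T + 1):
        noise = 0.1 * rng.normal(size=(N, d))
        grads = mu * (x - targets) + noise           # one stochastic gradient oracle call per node
        x = W @ x - (1.0 / (mu * t)) * grads         # consensus step, then SGD step
        x_avg += (2.0 * t / (T * (T + 1))) * x       # weights proportional to t, summing to 1

    # The sum of the local objectives is minimized at the mean of the targets.
    print("error of weighted average:",
          np.linalg.norm(x_avg.mean(axis=0) - targets.mean(axis=0)))
    ```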

    Quantum SDP-Solvers: Better upper and lower bounds

    Brandão and Svore very recently gave quantum algorithms for approximately solving semidefinite programs, which in some regimes are faster than the best-possible classical algorithms in terms of the dimension $n$ of the problem and the number $m$ of constraints, but worse in terms of various other parameters. In this paper we improve their algorithms in several ways, getting better dependence on those other parameters. To this end we develop new techniques for quantum algorithms, for instance a general way to efficiently implement smooth functions of sparse Hamiltonians, and a generalized minimum-finding procedure. We also show limits on this approach to quantum SDP-solvers, for instance for combinatorial optimization problems that have a lot of symmetry. Finally, we prove some general lower bounds showing that in the worst case, the complexity of every quantum LP-solver (and hence also SDP-solver) has to scale linearly with $mn$ when $m \approx n$, which is the same as classical. Comment: v4, 69 pages, small corrections and clarifications; this version will appear in Quantum.
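
    For orientation, a generic primal form of the semidefinite programs such solvers target is shown below; the exact normalization (e.g., trace bounds on $X$) used by Brandão-Svore and in this paper is an assumption not stated in the abstract.

    \[
    \begin{aligned}
    \max_{X} \quad & \operatorname{tr}(C X) \\
    \text{s.t.} \quad & \operatorname{tr}(A_j X) \le b_j, \qquad j = 1, \dots, m, \\
    & X \succeq 0,
    \end{aligned}
    \]

    where $X$ ranges over $n \times n$ Hermitian matrices; $n$ and $m$ are the dimension and the number of constraints referred to above.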

    Information-based complexity, feedback and dynamics in convex programming

    We study the intrinsic limitations of sequential convex optimization through the lens of feedback information theory. In the oracle model of optimization, an algorithm queries an {\em oracle} for noisy information about the unknown objective function, and the goal is to (approximately) minimize every function in a given class using as few queries as possible. We show that, in order for a function to be optimized, the algorithm must be able to accumulate enough information about the objective. This, in turn, puts limits on the speed of optimization under specific assumptions on the oracle and the type of feedback. Our techniques are akin to those used in the statistics literature to obtain minimax lower bounds on the risks of estimation procedures; the notable difference is that, unlike in the case of i.i.d. data, a sequential optimization algorithm can gather observations in a {\em controlled} manner, so that the amount of information at each step is allowed to change in time. In particular, we show that optimization algorithms often obey the law of diminishing returns: the signal-to-noise ratio drops as the optimization algorithm approaches the optimum. To underscore the generality of the tools, we use our approach to derive fundamental lower bounds for a certain active learning problem. Overall, the present work connects the intuitive notions of information in optimization, experimental design, estimation, and active learning to the quantitative notion of Shannon information. Comment: final version; to appear in IEEE Transactions on Information Theory.
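
    A minimal sketch of the noisy first-order oracle model discussed above, under illustrative assumptions (a quadratic objective and Gaussian oracle noise of fixed standard deviation): the printed signal-to-noise ratio of each query shrinks as gradient descent approaches the optimum, which is the "law of diminishing returns" the abstract refers to.

    ```python
    # Illustrative sketch of the oracle model: the per-query signal-to-noise ratio
    # (||true gradient|| / noise std) drops as the iterate nears the optimum.
    # The objective, noise level, and step size are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    d, sigma, step = 3, 0.05, 0.2
    x_star = np.ones(d)                  # unknown minimizer of f(x) = 0.5*||x - x_star||^2
    x = np.zeros(d)

    def noisy_gradient_oracle(x):
        """Return the gradient of f at x corrupted by Gaussian noise."""
        return (x - x_star) + sigma * rng.normal(size=d)

    for t in range(1, 51):
        g = noisy_gradient_oracle(x)     # one oracle query
        if t % 10 == 0:
            snr = np.linalg.norm(x - x_star) / sigma
            print(f"query {t:3d}: signal-to-noise ratio ~ {snr:.2f}")
        x = x - step * g                 # plain gradient descent step
    ```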

    Von Neumann Entropy Penalization and Low Rank Matrix Estimation

    The problem of statistical estimation of a Hermitian nonnegatively definite matrix of unit trace (for instance, a density matrix in quantum state tomography) is studied. The approach is based on the penalized least squares method with a complexity penalty defined in terms of von Neumann entropy. A number of oracle inequalities are proved, showing how the error of the estimator depends on the rank and other characteristics of the oracles. The proofs are based on empirical process theory and probabilistic inequalities for random matrices, in particular noncommutative versions of the Bernstein inequality.
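
    Schematically, and with the measurement model treated as an assumption (the abstract only specifies a least squares fit with a von Neumann entropy penalty), the estimator has the form

    \[
    \hat{\rho} \in \operatorname*{arg\,min}_{\rho \succeq 0,\ \operatorname{tr}\rho = 1}
    \left\{ \frac{1}{n} \sum_{j=1}^{n} \bigl( y_j - \operatorname{tr}(\rho X_j) \bigr)^2
    + \varepsilon \, \operatorname{tr}(\rho \log \rho) \right\},
    \]

    where $(X_j, y_j)$ are the observed measurements, $\varepsilon > 0$ is a regularization parameter, and $\operatorname{tr}(\rho \log \rho) = -S(\rho)$ is the negative von Neumann entropy.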

    No Quantum Speedup over Gradient Descent for Non-Smooth Convex Optimization

    We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function $f:\mathbb{R}^n \to \mathbb{R}$ and its (sub)gradient. Our goal is to find an $\epsilon$-approximate minimum of $f$ starting from a point that is at distance at most $R$ from the true minimum. If $f$ is $G$-Lipschitz, then the classic gradient descent algorithm solves this problem with $O((GR/\epsilon)^{2})$ queries. Importantly, the number of queries is independent of the dimension $n$, and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve better complexity that is still independent of the dimension $n$. In this paper we reprove the randomized lower bound of $\Omega((GR/\epsilon)^{2})$ using a simpler argument than previous lower bounds. We then show that although the function family used in the lower bound is hard for randomized algorithms, it can be solved using $O(GR/\epsilon)$ quantum queries. We then show an improved lower bound against quantum algorithms using a different set of instances and establish our main result that in general even quantum algorithms need $\Omega((GR/\epsilon)^2)$ queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family. Comment: 25 pages.
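
    For reference, a minimal sketch of the classical baseline referred to above: projected subgradient descent with step size $R/(G\sqrt{T})$ and iterate averaging reaches an $\epsilon$-approximate minimum after $T = (GR/\epsilon)^2$ (sub)gradient queries. The test function below (the $\ell_\infty$ distance to a hidden minimizer) and the projection ball are illustrative assumptions, not the hard instances from the paper's lower bounds.

    ```python
    # Illustrative sketch of the O((G*R/eps)^2) classical baseline: projected
    # subgradient descent with iterate averaging on a nonsmooth 1-Lipschitz
    # convex function. The test function is an assumption.
    import numpy as np

    d, R, G, eps = 20, 1.0, 1.0, 0.05
    T = int((G * R / eps) ** 2)                    # query budget from the abstract
    rng = np.random.default_rng(2)
    x_star = rng.normal(size=d)
    x_star *= R / (2 * np.linalg.norm(x_star))     # true minimizer within distance R of the start

    def f(x):
        return np.max(np.abs(x - x_star))          # nonsmooth, 1-Lipschitz in the l2 norm

    def subgradient(x):
        g = np.zeros(d)
        k = int(np.argmax(np.abs(x - x_star)))
        g[k] = np.sign(x[k] - x_star[k])
        return g

    x = np.zeros(d)
    x_bar = np.zeros(d)
    eta = R / (G * np.sqrt(T))
    for _ in range(T):
        x = x - eta * subgradient(x)               # one (sub)gradient query per step
        nrm = np.linalg.norm(x)                    # project back onto the radius-R ball
        if nrm > R:
            x *= R / nrm
        x_bar += x / T

    print(f"T = {T} queries, f(averaged iterate) = {f(x_bar):.4f}, target eps = {eps}")
    ```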