10 research outputs found

    Polynomial Root Radius Optimization with Affine Constraints


    A universally optimal multistage accelerated stochastic gradient method

    We study the problem of minimizing a strongly convex, smooth function when we have only noisy estimates of its gradient. We propose a novel multistage accelerated algorithm that is universally optimal in the sense that it achieves the optimal rate in both the deterministic and stochastic cases and operates without knowledge of the noise characteristics. The algorithm consists of stages that use a stochastic version of Nesterov's method with a specific restart, with parameters selected to achieve the fastest reduction of the bias-variance terms in the convergence rate bounds.
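
    As a rough illustration of the scheme described in the abstract, the sketch below runs a stochastic variant of Nesterov's accelerated method in stages, restarting the momentum from the last iterate with a smaller stepsize at each stage. The Gaussian noise model, the geometric stepsize decay across stages, and the names noisy_grad and multistage_agd are illustrative assumptions, not the paper's exact construction or parameter choices.

```python
import numpy as np

def noisy_grad(grad, x, sigma, rng):
    """Exact gradient plus additive Gaussian noise (illustrative noise model)."""
    return grad(x) + sigma * rng.standard_normal(x.shape)

def multistage_agd(grad, x0, L, mu, sigma, n_stages=4, iters_per_stage=200, seed=0):
    """Sketch of a multistage accelerated stochastic gradient method:
    each stage runs stochastic Nesterov iterations, then restarts the momentum
    from the last iterate with a smaller stepsize (the geometric decay across
    stages is an assumed schedule, not the paper's exact choice)."""
    rng = np.random.default_rng(seed)
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)  # standard Nesterov momentum
    x = x0.copy()
    for stage in range(n_stages):
        alpha = 1.0 / (L * 2 ** stage)   # stepsize shrinks geometrically across stages
        y = x.copy()                      # restart: momentum sequence reset at stage start
        for _ in range(iters_per_stage):
            x_new = y - alpha * noisy_grad(grad, y, sigma, rng)  # gradient step from extrapolated point
            y = x_new + beta * (x_new - x)                       # momentum extrapolation
            x = x_new
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 * x^T A x.
if __name__ == "__main__":
    A = np.diag([1.0, 10.0])
    print(multistage_agd(lambda x: A @ x, x0=np.ones(2), L=10.0, mu=1.0, sigma=0.1))
```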

    Why random reshuffling beats stochastic gradient descent

    Abstract: We analyze the convergence rate of the random reshuffling (RR) method, a randomized first-order incremental algorithm for minimizing a finite sum of convex component functions. RR proceeds in cycles, picking a uniformly random order (permutation) and processing the component functions one at a time according to this order, i.e., in each cycle, each component function is sampled without replacement from the collection. Though RR has been numerically observed to outperform its with-replacement counterpart, stochastic gradient descent (SGD), characterizing its convergence rate has been a long-standing open question. In this paper, we answer this question by providing various convergence rate results for RR and its variants when the sum function is strongly convex. We first focus on quadratic component functions and show that the expected distance of the iterates generated by RR with stepsize $\alpha_k = \varTheta(1/k^s)$ for $s \in (0,1]$ converges to zero at rate $\mathcal{O}(1/k^s)$ (with $s=1$ requiring the stepsize to be adjusted to the strong convexity constant). Our main result shows that when the component functions are quadratic or smooth (with a Lipschitz assumption on the Hessian matrices), RR with iterate averaging and a diminishing stepsize $\alpha_k = \varTheta(1/k^s)$ for $s \in (1/2,1)$ converges at rate $\varTheta(1/k^{2s})$ with probability one in the suboptimality of the objective value, thus improving upon the $\varOmega(1/k)$ rate of SGD. Our analysis draws on the theory of Polyak–Ruppert averaging and relies on decoupling the dependent cycle gradient error into a term that is independent across cycles and another term dominated by $\alpha_k^2$. This allows us to apply the law of large numbers to an appropriately weighted version of the cycle gradient errors, where the weights depend on the stepsize. We also provide high-probability convergence rate estimates that show the decay rates of the different terms and allow us to propose a modification of RR with convergence rate $\mathcal{O}(1/k^2)$.
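
    As a rough illustration of the RR scheme described in the abstract, the sketch below runs cycles over a fresh random permutation of the component gradients with a diminishing stepsize $\alpha_k = c/k^s$ for $s \in (1/2,1)$ and keeps a Polyak–Ruppert running average of the cycle iterates. The constant c, the cycle count, and the names random_reshuffling and grads are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def random_reshuffling(grads, x0, s=0.75, c=0.5, n_cycles=100, seed=0):
    """Sketch of random reshuffling (RR) with iterate averaging.

    `grads` is a list of component-gradient functions g_i(x) for a finite sum
    f(x) = (1/n) * sum_i f_i(x).  Each cycle samples a fresh permutation and
    processes every component exactly once (sampling without replacement).
    The stepsize alpha_k = c / k^s and the Polyak-Ruppert average of the
    cycle-end iterates follow the abstract; c and n_cycles are illustrative."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = x0.copy()
    x_avg = np.zeros_like(x0)
    for k in range(1, n_cycles + 1):
        alpha_k = c / k ** s                  # diminishing stepsize, Theta(1/k^s)
        for i in rng.permutation(n):          # one pass over a random permutation
            x = x - alpha_k * grads[i](x)
        x_avg += (x - x_avg) / k              # running average of cycle-end iterates
    return x_avg

# Usage on a sum of strongly convex quadratics f_i(x) = 0.5 * (x - b_i)^T A_i (x - b_i).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = [np.diag(rng.uniform(1.0, 5.0, size=2)) for _ in range(10)]
    b = [rng.standard_normal(2) for _ in range(10)]
    grads = [lambda x, A=A_i, b=b_i: A @ (x - b) for A_i, b_i in zip(A, b)]
    print(random_reshuffling(grads, x0=np.zeros(2)))
```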