Gradient algorithms for quadratic optimization with fast convergence rates. (English summary)
Comput. Optim. Appl. 50 (2011), no. 3, 597–617. ISSN 1573-2894.

Summary: "We propose a family of gradient algorithms for minimizing a quadratic function f(x) = (Ax, x)/2 − (x, y) in R^d or a Hilbert space, with simple rules for choosing the step-size at each iteration. We show that when the step-sizes are generated by a dynamical system with ergodic distribution having the arcsine density on a subinterval of the spectrum of A, the asymptotic rate of convergence of the algorithm can approach the (tight) bound on the rate of convergence of a conjugate gradient algorithm stopped before d iterations, with d ≤ ∞ the space dimension."
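The summary describes gradient iterations x_{k+1} = x_k − γ_k (A x_k − y) whose step-size reciprocals follow the arcsine density on a subinterval of the spectrum of A. A minimal sketch of this idea follows, assuming (for illustration only) i.i.d. sampling from the arcsine density on the full interval [λ_min, λ_max] in place of the paper's deterministic dynamical system; the function names `arcsine_sample` and `gradient_quadratic` are hypothetical, not from the paper:

```python
import numpy as np

def arcsine_sample(a, b, rng):
    # Draw from the arcsine density 1 / (pi * sqrt((x - a)(b - x))) on [a, b].
    # For U uniform on [0, 1], sin^2(pi*U/2) has the arcsine law on [0, 1].
    u = rng.random()
    return a + (b - a) * np.sin(np.pi * u / 2.0) ** 2

def gradient_quadratic(A, y, x0, n_iter=500, seed=0):
    # Minimize f(x) = (Ax, x)/2 - (x, y); the gradient is Ax - y.
    # Step-sizes are gamma_k = 1 / lambda_k with lambda_k drawn from the
    # arcsine density on the spectrum of A (here: the full spectral interval,
    # whereas the paper allows a subinterval and a dynamical system).
    rng = np.random.default_rng(seed)
    lam = np.linalg.eigvalsh(A)          # eigenvalues, ascending
    a, b = lam[0], lam[-1]
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        g = A @ x - y
        step = 1.0 / arcsine_sample(a, b, rng)
        x = x - step * g
    return x
```

Note that the arcsine density places most of its mass near both endpoints of the interval, so the sampled step-sizes alternate between large and small values rather than clustering at the midpoint.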