Acceleration Methods
This monograph covers some recent advances in a range of acceleration
techniques frequently used in convex optimization. We first use quadratic
optimization problems to introduce two key families of methods, namely momentum
and nested optimization schemes. They coincide in the quadratic case to form
the Chebyshev method. We discuss momentum methods in detail, starting with the
seminal work of Nesterov, and structure convergence proofs using a few master
templates, such as that for optimized gradient methods, which provide the key
benefit of showing how momentum methods optimize convergence guarantees. We
further cover proximal acceleration, at the heart of the Catalyst and
Accelerated Hybrid Proximal Extragradient frameworks, using similar algorithmic
patterns. Common acceleration techniques rely directly on the knowledge of some
of the regularity parameters in the problem at hand. We conclude by discussing
restart schemes, a set of simple techniques for reaching nearly optimal
convergence rates while adapting to unobserved regularity parameters.
Comment: Published in Foundations and Trends in Optimization (see https://www.nowpublishers.com/article/Details/OPT-036).
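As an illustration of the momentum template discussed in the abstract, here is a minimal Python sketch of Nesterov-style accelerated gradient descent, with an optional function-value restart in the spirit of the restart schemes mentioned above. The step size 1/L, the restart test, and the quadratic test problem are assumptions chosen for the example, not details taken from the monograph.

```python
import numpy as np

def accelerated_gd(f, grad, x0, L, iters=500, restart=True):
    """Nesterov-style momentum with an optional function-value restart.

    f, grad : objective and its gradient (assumed convex and L-smooth)
    L       : smoothness constant, giving the 1/L step size
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L              # gradient step at the extrapolated point
        if restart and f(x_next) > f(x):      # heuristic restart: drop momentum when f increases
            y, t = x.copy(), 1.0
            continue
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

# Toy quadratic f(x) = 0.5 x'Ax - b'x, for which L = lambda_max(A) = 100.
A = np.diag(np.linspace(1.0, 100.0, 50))
b = np.ones(50)
x_hat = accelerated_gd(lambda x: 0.5 * x @ A @ x - b @ x,
                       lambda x: A @ x - b,
                       np.zeros(50), L=100.0)
print(np.linalg.norm(A @ x_hat - b))  # near-zero gradient norm at the solution
```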
dCATCH—A Numerical Package for d-Variate near G-Optimal Tchakaloff Regression via Fast NNLS
We provide a numerical package for the computation of a d-variate near G-optimal polynomial regression design of degree m on a finite design space $X \subset \mathbb{R}^d$, by a few iterations of a basic multiplicative algorithm followed by Tchakaloff-like compression of the discrete measure keeping the reached G-efficiency, via an accelerated version of the Lawson-Hanson algorithm for Non-Negative Least Squares (NNLS) problems. This package can solve on a personal computer large-scale problems where $\mathrm{card}(X) \times \dim(\mathbb{P}^d_{2m})$ is up to $10^8$–$10^9$, with $\dim(\mathbb{P}^d_{2m}) = \binom{2m+d}{d} = \binom{2m+d}{2m}$. Several numerical tests are presented on complex shapes in $d = 3$ and on hypercubes in $d > 3$.
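The package itself is not reproduced here, but the pipeline the abstract describes (multiplicative weight updates, then Tchakaloff-like compression via NNLS) can be sketched in a univariate toy setting. The sketch below is assumption-laden: it uses a Titterington-style multiplicative update, a Chebyshev Vandermonde basis, and scipy.optimize.nnls as a stand-in for the accelerated Lawson-Hanson solver; none of these specific choices are taken from dCATCH.

```python
import numpy as np
from numpy.polynomial import chebyshev
from scipy.optimize import nnls

def near_g_optimal_design(X, m, iters=200):
    """Multiplicative algorithm for a near D-/G-optimal design of
    degree m on a finite set X (d = 1 toy case)."""
    V = chebyshev.chebvander(X, m)        # card(X) x (m+1) basis matrix
    n, p = V.shape
    w = np.full(n, 1.0 / n)               # uniform initial weights
    for _ in range(iters):
        M = V.T @ (w[:, None] * V)        # information (moment) matrix
        K = np.linalg.solve(M, V.T)       # M^{-1} V^T
        var = np.einsum('ij,ji->i', V, K) # variance function at each point
        w *= var / p                      # multiplicative update (stays normalized)
    geff = p / var.max()                  # G-efficiency of the current design
    return w, geff

def tchakaloff_compress(X, w, m):
    """Compress w to few nodes, matching all moments up to degree 2m
    with a sparse nonnegative solution from NNLS."""
    V2 = chebyshev.chebvander(X, 2 * m)   # basis of degree 2m
    b = V2.T @ w                          # moments to preserve
    u, _ = nnls(V2.T, b)                  # Lawson-Hanson NNLS (scipy version)
    return u

X = np.linspace(-1.0, 1.0, 2000)
w, geff = near_g_optimal_design(X, 10)
u = tchakaloff_compress(X, w, 10)
print(f"G-efficiency ~ {geff:.4f}, support: {np.count_nonzero(u)} of {X.size} points")
```

Because the compressed measure matches all moments of degree up to 2m, the information matrix, and hence the reached G-efficiency, is preserved while the support typically shrinks to at most $\dim(\mathbb{P}^d_{2m})$ points, which is the mechanism the abstract refers to.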