Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online)]; [Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245-295] for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159-181] and Nesterov and Polyak [Cubic regularization of Newton's method and its global performance, Math. Program. 108(1) (2006), pp. 177-205] for these same problem classes, without requiring exact Hessians or exact or global solution of the subproblem. An additional outcome of our approximate approach is that our complexity results can naturally capture the advantages of both first- and second-order methods. © 2012 Taylor and Francis
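As context for the methods being compared, here is a sketch of the cubic-regularized model that the ARC framework of Cartis, Gould and Toint (approximately) minimizes at each iterate $x_k$, following the standard formulation; $g_k = \nabla f(x_k)$, $B_k$ is a (possibly approximate) symmetric Hessian, and $\sigma_k > 0$ is the adaptive regularization weight:

```latex
% Cubic-regularized local model minimized (approximately) at iterate x_k:
\[
  m_k(s) \;=\; f(x_k) \;+\; g_k^{\top} s \;+\; \tfrac{1}{2}\, s^{\top} B_k\, s
  \;+\; \tfrac{\sigma_k}{3}\, \|s\|^{3},
\]
% with x_{k+1} = x_k + s_k for an approximate minimizer s_k of m_k.
% The weight sigma_k is adapted: increased when a step is rejected,
% decreased (or kept) when f(x_k + s_k) agrees well with the model.
```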
Adaptive Regularization Algorithms with Inexact Evaluations for Nonconvex Optimization
A regularization algorithm using inexact function values and inexact
derivatives is proposed and its evaluation complexity analyzed. This algorithm
is applicable to unconstrained problems and to problems with inexpensive
constraints (that is constraints whose evaluation and enforcement has
negligible cost) under the assumption that the derivative of highest degree is
-H\"{o}lder continuous. It features a very flexible adaptive mechanism
for determining the inexactness which is allowed, at each iteration, when
computing objective function values and derivatives. The complexity analysis
covers arbitrary optimality order and arbitrary degree of available approximate
derivatives. It extends results of Cartis, Gould and Toint (2018) on the
evaluation complexity to the inexact case: if a th order minimizer is sought
using approximations to the first derivatives, it is proved that a suitable
approximate minimizer within is computed by the proposed algorithm
in at most iterations and at most
approximate
evaluations. An algorithmic variant, although more rigid in practice, can be
proved to find such an approximate minimizer in
evaluations.While
the proposed framework remains so far conceptual for high degrees and orders,
it is shown to yield simple and computationally realistic inexact methods when
specialized to the unconstrained and bound-constrained first- and second-order
cases. The deterministic complexity results are finally extended to the
stochastic context, yielding adaptive sample-size rules for subsampling methods
typical of machine learning.
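Purely as illustration of the subsampling idea referenced above, here is a minimal Python sketch of a concentration-style sample-size rule: the sample grows as the accuracy demanded of the estimate tightens. The function names, the $1/\epsilon^2$ scaling, and the constant `c` are assumptions for the sketch, not the paper's specific adaptive rule.

```python
import numpy as np

def subsampled_gradient(grad_i, n, x, accuracy, rng, c=1.0):
    """Estimate the full gradient (1/n) * sum_i grad_i(i, x) by uniform
    subsampling, with the sample size grown as the requested accuracy
    tightens (a generic O(1/accuracy^2) concentration-style rule)."""
    # Illustrative sample-size rule: |S_k| ~ c / accuracy^2, capped at n.
    sample_size = min(n, max(1, int(np.ceil(c / accuracy**2))))
    idx = rng.choice(n, size=sample_size, replace=False)
    return np.mean([grad_i(i, x) for i in idx], axis=0)

# Usage: least-squares components f_i(x) = 0.5 * (a_i @ x - b_i)^2.
rng = np.random.default_rng(0)
n, d = 1000, 5
A, b = rng.standard_normal((n, d)), rng.standard_normal(n)
grad_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
g = subsampled_gradient(grad_i, n, np.zeros(d), accuracy=0.1, rng=rng)
```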
On global minimizers of quadratic functions with cubic regularization
In this paper, we analyze some theoretical properties of the problem of
minimizing a quadratic function with a cubic regularization term, arising in
many methods for unconstrained and constrained optimization that have been
proposed in recent years. First, we show that, given any stationary point that
is not a global solution, it is possible to compute, in closed form, a new
point with a smaller objective function value. Then, we prove that a global
minimizer can be obtained by computing a finite number of stationary points.
Finally, we extend these results to the case where stationary conditions are
approximately satisfied, discussing some possible algorithmic applications.
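For concreteness: a global minimizer of the cubic-regularized model $m(s) = g^\top s + \tfrac{1}{2} s^\top Q s + \tfrac{\sigma}{3}\|s\|^3$ is characterized by $(Q + \lambda I)s = -g$ with $\lambda = \sigma\|s\|$ and $Q + \lambda I \succeq 0$ (Nesterov and Polyak, 2006). The Python sketch below solves the resulting one-dimensional secular equation by bisection; it is a minimal illustration that ignores the degenerate "hard case", and the function names are assumptions.

```python
import numpy as np

def cubic_global_min(Q, g, sigma, tol=1e-10):
    """Global minimizer of m(s) = g@s + 0.5*s@Q@s + (sigma/3)*||s||^3,
    via the secular equation ||s(lam)|| = lam/sigma, where
    s(lam) = -(Q + lam*I)^{-1} g.  Hard case ignored for simplicity."""
    d, V = np.linalg.eigh(Q)              # eigenvalues in ascending order
    gh = V.T @ g

    def norm_s(lam):
        return np.linalg.norm(gh / (d + lam))

    lo = max(0.0, -d[0]) + 1e-12          # smallest admissible multiplier
    hi = lo + 1.0
    while norm_s(hi) > hi / sigma:        # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):   # bisect on ||s(lam)|| - lam/sigma
        mid = 0.5 * (lo + hi)
        if norm_s(mid) > mid / sigma:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return -V @ (gh / (d + lam))

# Usage: an indefinite Q, where stationary points need not be global minima.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
Q = 0.5 * (B + B.T)
s_star = cubic_global_min(Q, rng.standard_normal(4), sigma=1.0)
```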
Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information
We consider variants of trust-region and cubic regularization methods for
non-convex optimization, in which the Hessian matrix is approximated. Under
mild conditions on the inexact Hessian, and using approximate solution of the
corresponding sub-problems, we provide iteration complexity to achieve $\epsilon$-approximate second-order optimality, which has been shown to be tight.
Our Hessian approximation conditions constitute a major relaxation over the
existing ones in the literature. Consequently, we are able to show that such
mild conditions allow for the construction of the approximate Hessian through
various random sampling methods. In this light, we consider the canonical
problem of finite-sum minimization, provide appropriate uniform and non-uniform
sub-sampling strategies to construct such Hessian approximations, and obtain
optimal iteration complexity for the corresponding sub-sampled trust-region and
cubic regularization methods.
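As a minimal sketch of the uniform sub-sampling construction mentioned above (the helper names are assumptions), the finite-sum Hessian $\nabla^2 f(x) = \tfrac{1}{n}\sum_i \nabla^2 f_i(x)$ is estimated by averaging the Hessians of a random subset of components; larger samples tighten the operator-norm error with high probability.

```python
import numpy as np

def subsampled_hessian(hess_i, n, x, sample_size, rng):
    """Uniform sub-sampling estimate of the full Hessian
    (1/n) * sum_i hess_i(i, x) for finite-sum minimization."""
    idx = rng.choice(n, size=min(sample_size, n), replace=False)
    return np.mean([hess_i(i, x) for i in idx], axis=0)

# Usage: least-squares components f_i(x) = 0.5 * (a_i @ x - b_i)^2,
# whose Hessians are the rank-one matrices a_i a_i^T.
rng = np.random.default_rng(2)
n, d = 500, 3
A = rng.standard_normal((n, d))
hess_i = lambda i, x: np.outer(A[i], A[i])
H_approx = subsampled_hessian(hess_i, n, np.zeros(d), 50, rng)
```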
- …