9 research outputs found
Complexity of partially separable convexly constrained optimization with non-Lipschitzian singularities
2021-10 · Version of Record · RGC PolyU153000/15p · Published
Strong Evaluation Complexity Bounds for Arbitrary-Order Optimization of Nonconvex Nonsmooth Composite Functions
We introduce the concept of strong high-order approximate minimizers for
nonconvex optimization problems. These apply in both standard smooth and
composite non-smooth settings, and additionally allow convex or inexpensive
constraints. An adaptive regularization algorithm is then proposed to find such
approximate minimizers. Under suitable Lipschitz continuity assumptions,
whenever the feasible set is convex, it is shown that using a model of degree
p, this algorithm will find a strong approximate q-th-order minimizer in at
most O(ε^{-(p+1)/(p-q+1)}) evaluations of the problem's functions and their
derivatives, where ε is the q-th-order accuracy tolerance; this bound applies
when either q = 1 or the problem is not composite with q ≤ 2. For general
non-composite problems, even when the feasible set is nonconvex, the bound
becomes O(ε^{-q(p+1)/p}) evaluations. If the problem is composite, and either
q > 1 or the feasible set is not convex, the bound is then O(ε^{-(q+1)})
evaluations. These results not only provide, to our knowledge, the first known
bound for (unconstrained or inexpensively-constrained) composite problems for
optimality orders exceeding one, but also give the first sharp bounds for
high-order strong approximate q-th-order minimizers of standard (unconstrained
and inexpensively constrained) smooth problems, thereby complementing known
results for weak minimizers. Comment: 32 pages, 1 figure
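To see how evaluation-complexity bounds of this kind trade off model degree against accuracy, the sketch below tabulates an assumed bound of the form ε^{-(p+1)/(p-q+1)} (constants and problem-dependent factors ignored; the exponent shape follows standard adaptive-regularization theory and the function names are illustrative, not from the paper):

```python
# Illustrative sketch only: scaling of an assumed evaluation-complexity
# bound of the form eps^{-(p+1)/(p-q+1)}, where p is the model degree,
# q the optimality order, and eps the accuracy tolerance.

def bound_exponent(p: int, q: int) -> float:
    """Exponent of 1/eps in the assumed bound eps^{-(p+1)/(p-q+1)}."""
    assert p >= q >= 1, "requires model degree p at least the optimality order q"
    return (p + 1) / (p - q + 1)

def evaluation_bound(eps: float, p: int, q: int) -> float:
    """Order-of-magnitude evaluation count for tolerance eps (constants ignored)."""
    return eps ** (-bound_exponent(p, q))

# Raising the model degree p improves the first-order (q = 1) rate:
# p = 1 gives exponent 2, p = 2 gives exponent 1.5.
```

For example, at q = 1 the exponent drops from 2 (first-order models) to 1.5 (second-order models), so a tolerance of 1e-2 costs on the order of 1e4 versus 1e3 evaluations.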
Adaptive Regularization Minimization Algorithms with Non-Smooth Norms and Euclidean Curvature
A regularization algorithm (AR1pGN) for unconstrained nonlinear minimization
is considered, which uses a model consisting of a Taylor expansion of arbitrary
degree p and a regularization term involving a possibly non-smooth norm. It is
shown that the non-smoothness of the norm does not affect the
O(ε_1^{-(p+1)/p}) upper bound on evaluation complexity for finding
first-order ε_1-approximate minimizers using p derivatives, and that
this result does not hinge on the equivalence of norms in R^n. It is also
shown that, if p = 2, the bound of O(ε_2^{-3}) evaluations for finding
second-order ε_2-approximate minimizers still holds for a variant of
AR1pGN named AR2GN, despite the possibly non-smooth nature of the
regularization term. Moreover, adapting the existing theory to handle the
non-smoothness results in an interesting modification of the subproblem
termination rules, leading to an even more compact complexity analysis. In
particular, it is shown when Newton's step is acceptable for an adaptive
regularization method. The approximate minimization of quadratic polynomials
regularized with non-smooth norms is then discussed, and a new approximate
second-order necessary optimality condition is derived for this case. A
specialized algorithm is then proposed to enforce the first- and second-order
conditions that are strong enough to ensure the existence of a suitable step
in AR1pGN (when p = 2) and in AR2GN, and its iteration complexity is
analyzed. Comment: A correction will be available soon
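The basic mechanics of an adaptive-regularization iteration of this family can be sketched in one dimension. The toy below is not the paper's AR1pGN or AR2GN: it uses a degree-2 Taylor model with a cubic regularizer, a crude grid search as a stand-in for a real subproblem solver, and placeholder update constants, all chosen here for illustration.

```python
# Hypothetical 1-D sketch of an adaptive-regularization (AR2-style) loop.
# Not the algorithm from the paper: the grid-based subproblem solve and the
# constants 0.1, 2.0, 0.5 are illustrative placeholders.

def model_min(f_x, g_x, h_x, sigma):
    """Crudely minimize m(s) = f_x + g_x*s + h_x*s^2/2 + sigma*|s|^3/3 on a grid."""
    best_s, best_m = 0.0, f_x
    for i in range(-2000, 2001):
        s = i / 1000.0
        m = f_x + g_x * s + 0.5 * h_x * s * s + sigma * abs(s) ** 3 / 3.0
        if m < best_m:
            best_s, best_m = s, m
    return best_s, best_m

def ar2_minimize(f, g, h, x0, eps=1e-4, sigma=1.0, max_iter=200):
    """Find a first-order eps-approximate minimizer of f with derivatives g, h."""
    x = x0
    for _ in range(max_iter):
        if abs(g(x)) <= eps:               # first-order termination test
            return x
        s, m = model_min(f(x), g(x), h(x), sigma)
        # Ratio of achieved to model-predicted decrease (guarded denominator).
        rho = (f(x) - f(x + s)) / max(f(x) - m, 1e-16)
        if rho >= 0.1:                      # successful step: accept, relax sigma
            x, sigma = x + s, max(sigma * 0.5, 1e-8)
        else:                               # unsuccessful: increase regularization
            sigma *= 2.0
    return x
```

Run on the quadratic f(x) = (x - 1)^2 from x0 = 3, the loop accepts large early steps and terminates near the minimizer x = 1 once the gradient test is met.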