10 research outputs found

    Complexity of partially separable convexly constrained optimization with non-Lipschitzian singularities


    Strong Evaluation Complexity Bounds for Arbitrary-Order Optimization of Nonconvex Nonsmooth Composite Functions

    We introduce the concept of strong high-order approximate minimizers for nonconvex optimization problems. These apply in both standard smooth and composite non-smooth settings, and additionally allow convex or inexpensive constraints. An adaptive regularization algorithm is then proposed to find such approximate minimizers. Under suitable Lipschitz continuity assumptions, whenever the feasible set is convex, it is shown that using a model of degree $p$, this algorithm will find a strong approximate $q$-th-order minimizer in at most $\mathcal{O}\left(\max_{1\leq j\leq q}\epsilon_j^{-(p+1)/(p-j+1)}\right)$ evaluations of the problem's functions and their derivatives, where $\epsilon_j$ is the $j$-th order accuracy tolerance; this bound applies when either $q=1$ or the problem is not composite with $q \leq 2$. For general non-composite problems, even when the feasible set is nonconvex, the bound becomes $\mathcal{O}\left(\max_{1\leq j\leq q}\epsilon_j^{-q(p+1)/p}\right)$ evaluations. If the problem is composite, and either $q > 1$ or the feasible set is not convex, the bound is then $\mathcal{O}\left(\max_{1\leq j\leq q}\epsilon_j^{-(q+1)}\right)$ evaluations. These results not only provide, to our knowledge, the first known bound for (unconstrained or inexpensively-constrained) composite problems for optimality orders exceeding one, but also give the first sharp bounds for high-order strong approximate $q$-th-order minimizers of standard (unconstrained and inexpensively constrained) smooth problems, thereby complementing known results for weak minimizers.
    Comment: 32 pages, 1 figure
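    As a rough illustration of the kind of adaptive regularization iteration the abstract describes, here is a minimal sketch for the simplest case: degree $p=2$, first-order tolerance $\epsilon_1$ only, in one dimension. The test function, the acceptance constant, and the $\sigma$-update rule are all illustrative choices, not taken from the paper.

    ```python
    import math

    # Illustrative 1-D nonconvex test problem (not from the paper).
    def f(x):
        return x**4 - 3*x**2 + x

    def g(x):                     # first derivative
        return 4*x**3 - 6*x + 1

    def h(x):                     # second derivative
        return 12*x**2 - 6

    def model_step(gx, hx, sigma):
        """Exact minimizer of the scalar cubic model
        m(s) = gx*s + hx*s**2/2 + sigma*|s|**3/3
        (degree-2 Taylor model plus cubic regularization)."""
        best_s, best_m = 0.0, 0.0
        for sign in (1.0, -1.0):          # solve m'(s) = 0 on each half-line
            a, b, c = sign * sigma, hx, gx
            disc = b * b - 4.0 * a * c
            if disc >= 0.0:
                for root in ((-b + math.sqrt(disc)) / (2.0 * a),
                             (-b - math.sqrt(disc)) / (2.0 * a)):
                    if root * sign >= 0.0:   # root on the assumed half-line
                        m = gx*root + 0.5*hx*root**2 + sigma*abs(root)**3/3.0
                        if m < best_m:
                            best_s, best_m = root, m
        return best_s, best_m

    def ar2(x0, eps1=1e-8, sigma=1.0, max_iter=200):
        """Minimal AR2-style loop: take the model minimizer, accept it if it
        realizes a fraction of the predicted decrease, else inflate sigma."""
        x = x0
        for _ in range(max_iter):
            if abs(g(x)) <= eps1:         # first-order eps1-approximate minimizer
                return x
            s, m = model_step(g(x), h(x), sigma)
            if s != 0.0 and f(x + s) <= f(x) + 0.1 * m:   # sufficient decrease
                x += s
                sigma = max(0.5 * sigma, 1e-3)   # successful step: relax sigma
            else:
                sigma *= 2.0                     # unsuccessful: regularize more
        return x
    ```

    Starting from $x_0 = 2$ this loop settles on the nearby local minimizer of the test function; the evaluation counts the abstract bounds correspond to the number of calls to `f`, `g`, and `h` made before the tolerance test succeeds.
    
    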

    Adaptive Regularization Minimization Algorithms with Non-Smooth Norms and Euclidean Curvature

    A regularization algorithm (AR1pGN) for unconstrained nonlinear minimization is considered, which uses a model consisting of a Taylor expansion of arbitrary degree and a regularization term involving a possibly non-smooth norm. It is shown that the non-smoothness of the norm does not affect the $O(\epsilon_1^{-(p+1)/p})$ upper bound on evaluation complexity for finding first-order $\epsilon_1$-approximate minimizers using $p$ derivatives, and that this result does not hinge on the equivalence of norms in $\Re^n$. It is also shown that, if $p=2$, the bound of $O(\epsilon_2^{-3})$ evaluations for finding second-order $\epsilon_2$-approximate minimizers still holds for a variant of AR1pGN named AR2GN, despite the possibly non-smooth nature of the regularization term. Moreover, adapting the existing theory to handle the non-smoothness results in an interesting modification of the subproblem termination rules, leading to an even more compact complexity analysis. In particular, it is shown when Newton's step is acceptable for an adaptive regularization method. The approximate minimization of quadratic polynomials regularized with non-smooth norms is then discussed, and a new approximate second-order necessary optimality condition is derived for this case. A specialized algorithm is then proposed to enforce the first- and second-order conditions that are strong enough to ensure the existence of a suitable step in AR1pGN (when $p=2$) and in AR2GN, and its iteration complexity is analyzed.
    Comment: A correction will be available soon
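    To make the role of a non-smooth regularizing norm concrete, here is a hedged sketch of a degree-1 regularized step using the $\ell_\infty$ norm, for which the model minimizer has a closed form. The test function, acceptance constant, and $\sigma$-update are illustrative choices; this is not the actual AR1pGN algorithm.

    ```python
    import numpy as np

    # Illustrative smooth nonconvex test function (not from the paper).
    def f(x):
        return np.sum(x**4 - 2.0*x**2) + x[0]*x[1]

    def grad(x):
        g = 4.0*x**3 - 4.0*x
        g[0] += x[1]
        g[1] += x[0]
        return g

    def linf_step(g, sigma):
        """Exact minimizer of the degree-1 model  g@s + (sigma/2)*||s||_inf^2.
        With ||s||_inf = t fixed, g@s is minimized by s = -t*sign(g), giving
        -t*||g||_1 + sigma*t**2/2, which is minimal at t = ||g||_1/sigma."""
        t = np.sum(np.abs(g)) / sigma
        return -t * np.sign(g)

    def ar1_linf(x0, eps=1e-6, sigma=1.0, max_iter=1000):
        """Adaptive loop: accept the step if it realizes a fraction of the
        model's predicted decrease, else increase the regularization weight."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.max(np.abs(g)) <= eps:      # approximate first-order point
                return x
            s = linf_step(g, sigma)
            predicted = np.sum(np.abs(g))**2 / (2.0*sigma)   # model decrease
            if f(x + s) <= f(x) - 0.1 * predicted:
                x = x + s
                sigma = max(0.5*sigma, 1e-3)
            else:
                sigma *= 2.0
        return x
    ```

    The closed-form subproblem solution is what makes the non-smooth-norm case attractive here: no inner iterative solver is needed, and the acceptance test only compares the actual decrease against the model's prediction.
    
    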