
    Improved second-order evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

    The unconstrained minimization of a sufficiently smooth objective function $f(x)$ is considered, for which derivatives up to order $p$, $p \geq 2$, are assumed to be available. An adaptive regularization algorithm is proposed that uses Taylor models of the objective of order $p$ and that is guaranteed to find a first- and second-order critical point in at most $O\left(\max\left(\epsilon_1^{-\frac{p+1}{p}}, \epsilon_2^{-\frac{p+1}{p-1}}\right)\right)$ function and derivative evaluations, where $\epsilon_1 > 0$ and $\epsilon_2 > 0$ are prescribed first- and second-order optimality tolerances. Our approach extends the method in Birgin et al. (2016) to finding second-order critical points, and establishes the novel complexity bound for second-order criticality under the same problem assumptions as for first-order criticality, namely, that the $p$-th derivative tensor is Lipschitz continuous and that $f(x)$ is bounded from below. The evaluation-complexity bound for second-order criticality improves on all such known existing results.
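    For concreteness, substituting small values of $p$ into the bound above gives the following specializations (a worked illustration, not part of the original abstract):

    \[
      p = 2:\quad O\left(\max\left(\epsilon_1^{-3/2},\ \epsilon_2^{-3}\right)\right), \qquad
      p = 3:\quad O\left(\max\left(\epsilon_1^{-4/3},\ \epsilon_2^{-2}\right)\right).
    \]

    The $p = 2$ exponents coincide with the known evaluation-complexity rates of cubic-regularization methods, namely $O(\epsilon_1^{-3/2})$ for approximate first-order criticality and $O(\epsilon_2^{-3})$ for second-order criticality, so higher-order models ($p \geq 3$) yield strictly better worst-case exponents.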