
    Adaptive Regularization for Nonconvex Optimization Using Inexact Function Values and Randomly Perturbed Derivatives

    A regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing approximate local critical points of any order for smooth unconstrained optimization problems. For an objective function with Lipschitz continuous $p$-th derivative and given an arbitrary optimality order $q \leq p$, it is shown that this algorithm will, in expectation, compute such a point in at most $O\bigl(\bigl(\min_{j\in\{1,\ldots,q\}}\epsilon_j\bigr)^{-\frac{p+1}{p-q+1}}\bigr)$ inexact evaluations of $f$ and its derivatives whenever $q\in\{1,2\}$, where $\epsilon_j$ is the tolerance for $j$-th order accuracy. This bound becomes at most $O\bigl(\bigl(\min_{j\in\{1,\ldots,q\}}\epsilon_j\bigr)^{-\frac{q(p+1)}{p}}\bigr)$ inexact evaluations if $q>2$ and all derivatives are Lipschitz continuous. Moreover, these bounds are sharp in the order of the accuracy tolerances. An extension to convexly constrained problems is also outlined.
    Comment: 22 pages
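    The bounds above specialize to familiar rates: for $p=2$ and $q=1$, the exponent $-\frac{p+1}{p-q+1}$ equals $-\frac{3}{2}$, recovering the classical $O(\epsilon_1^{-3/2})$ evaluation complexity of adaptive cubic regularization. The sketch below illustrates that special case with inexact function values and randomly perturbed gradients and Hessians. It is a minimal illustration under assumptions made here (the noise model, the crude cubic-subproblem solver, the $\sigma$ update rule, and the Rosenbrock test function), not the paper's algorithm.

```python
# Minimal sketch of an adaptive cubic regularization step (the p = 2, q = 1
# case of the abstract) with inexact f and randomly perturbed derivatives.
# The noise model, subsolver, and sigma update are illustrative assumptions,
# not taken from the paper.
import numpy as np

def rosenbrock(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0]**2),
    ])

def rosenbrock_hess(x):
    return np.array([
        [1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
        [-400.0 * x[0], 200.0],
    ])

def noisy_oracle(x, rng, noise=1e-4):
    """Inexact f and randomly perturbed derivatives (assumed noise model)."""
    f = rosenbrock(x) + noise * rng.standard_normal()
    g = rosenbrock_grad(x) + noise * rng.standard_normal(2)
    E = noise * rng.standard_normal((2, 2))
    H = rosenbrock_hess(x) + 0.5 * (E + E.T)  # keep perturbation symmetric
    return f, g, H

def arc_noisy(x0, eps1=1e-3, sigma=1.0, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        f, g, H = noisy_oracle(x, rng)
        if np.linalg.norm(g) <= eps1:      # approximate first-order point
            return x, k
        # Roughly minimize the cubic model
        #   m(s) = f + g's + s'Hs/2 + sigma*||s||^3/3
        # with a few damped-Newton steps on grad m(s) = 0.
        s = np.zeros_like(x)
        for _ in range(20):
            n = np.linalg.norm(s)
            ms = g + H @ s + sigma * n * s
            J = H + sigma * (n * np.eye(2) + np.outer(s, s) / max(n, 1e-12))
            s = s - np.linalg.solve(J + 1e-8 * np.eye(2), ms)
        decrease = f - noisy_oracle(x + s, rng)[0]
        model_decrease = -(g @ s + 0.5 * s @ H @ s
                           + sigma * np.linalg.norm(s)**3 / 3.0)
        rho = decrease / max(model_decrease, 1e-12)
        if rho >= 0.1:                     # accept step, relax regularization
            x = x + s
            sigma = max(0.5 * sigma, 1e-3)
        else:                              # reject step, tighten regularization
            sigma = 2.0 * sigma
    return x, max_iter

x_star, iters = arc_noisy([-1.2, 1.0])
print(iters, x_star)
```

    The accept/reject logic mirrors trust-region practice: the regularization weight $\sigma$ plays the role of an inverse trust-region radius, shrinking after successful steps and growing after failures, which is what makes the method "adaptive" despite the noisy evaluations.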