
    Nonlinear programming without a penalty function or a filter

    A new method is introduced for solving equality constrained nonlinear optimization problems. This method uses neither a penalty function nor a barrier or a filter, and yet can be proved to be globally convergent to first-order stationary points. It uses different trust regions to cope with the nonlinearities of the objective function and the constraints, and allows inexact SQP steps that do not lie exactly in the nullspace of the local Jacobian. Preliminary numerical experiments on CUTEr problems indicate that the method performs well.
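
    To make the two-trust-region idea concrete, here is a minimal Python sketch (using numpy) of one composite SQP step: a normal step reduces the linearized constraint violation within its own radius, and a tangential step reduces a quadratic model of the objective within a second radius. The function name composite_sqp_step, the least-squares normal step, and the exact SVD nullspace basis are all illustrative assumptions rather than the paper's algorithm, which in particular only requires the tangential step to lie approximately in the nullspace of the Jacobian.

```python
import numpy as np

def composite_sqp_step(g, J, c, H, radius_c, radius_f):
    """One composite trust-region SQP step for min f(x) s.t. c(x) = 0.

    g: gradient of f at x; J: Jacobian of c at x; c: constraint values
    at x; H: Hessian (approximation) of the Lagrangian. radius_c limits
    the normal (feasibility) step and radius_f the tangential
    (optimality) step; keeping the two radii separate mirrors the
    abstract's use of different trust regions for the constraints and
    the objective.
    """
    # Normal step: least-squares solution of J n = -c, truncated to
    # the constraint trust region.
    n = -np.linalg.lstsq(J, c, rcond=None)[0]
    if np.linalg.norm(n) > radius_c:
        n *= radius_c / np.linalg.norm(n)

    # Tangential step: Newton-like step for the quadratic model in the
    # nullspace of J (computed exactly here via SVD; the paper allows
    # inexact steps that need only lie near this nullspace).
    Z = np.linalg.svd(J)[2][J.shape[0]:].T   # orthonormal nullspace basis
    g_red = Z.T @ (g + H @ n)                # reduced gradient at x + n
    H_red = Z.T @ H @ Z                      # reduced Hessian
    t = Z @ np.linalg.solve(H_red + 1e-8 * np.eye(H_red.shape[0]), -g_red)
    if np.linalg.norm(t) > radius_f:
        t *= radius_f / np.linalg.norm(t)
    return n + t

# Toy check: min ||x||^2  s.t.  x0 + x1 + x2 = 1, starting at x = (1, 0, 0).
x = np.array([1.0, 0.0, 0.0])
step = composite_sqp_step(g=2 * x,
                          J=np.array([[1.0, 1.0, 1.0]]),
                          c=np.array([x.sum() - 1.0]),
                          H=2 * np.eye(3),
                          radius_c=1.0, radius_f=1.0)
print(x + step)   # (1/3, 1/3, 1/3): exact here, since the problem is
                  # quadratic with a linear constraint
```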

    Adaptive Regularization for Nonconvex Optimization Using Inexact Function Values and Randomly Perturbed Derivatives

    A regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing approximate local critical points of any order for smooth unconstrained optimization problems. For an objective function with Lipschitz continuous $p$-th derivative and given an arbitrary optimality order $q \leq p$, it is shown that this algorithm will, in expectation, compute such a point in at most $O\left(\left(\min_{j\in\{1,\ldots,q\}}\epsilon_j\right)^{-\frac{p+1}{p-q+1}}\right)$ inexact evaluations of $f$ and its derivatives whenever $q\in\{1,2\}$, where $\epsilon_j$ is the tolerance for $j$-th order accuracy. This bound becomes at most $O\left(\left(\min_{j\in\{1,\ldots,q\}}\epsilon_j\right)^{-\frac{q(p+1)}{p}}\right)$ inexact evaluations if $q>2$ and all derivatives are Lipschitz continuous. Moreover, these bounds are sharp in the order of the accuracy tolerances. An extension to convexly constrained problems is also outlined.
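
    To unpack the bounds, the short Python sketch below (the function name and interface are mine, for illustration only) evaluates the quantity inside the $O(\cdot)$ for a given smoothness order $p$, optimality order $q$, and tolerances, switching exponents at $q = 2$ exactly as the abstract states.

```python
def ar_complexity_bound(p, q, epsilons):
    """Order-of-magnitude evaluation count from the abstract's bounds.

    p: smoothness order (Lipschitz continuous p-th derivative),
    q: optimality order sought, with q <= p,
    epsilons: tolerances (eps_1, ..., eps_q) for 1st..q-th order accuracy.
    Returns min_j(eps_j) ** (-exponent), the quantity inside the O(.).
    """
    assert 1 <= q <= p and len(epsilons) == q
    if q in (1, 2):
        exponent = (p + 1) / (p - q + 1)
    else:  # q > 2: requires all derivatives to be Lipschitz continuous
        exponent = q * (p + 1) / p
    return min(epsilons) ** (-exponent)

# With p = 2 (Lipschitz Hessian): first-order points at the familiar
# O(eps^{-3/2}) rate, second-order points at O(eps^{-3}).
print(ar_complexity_bound(p=2, q=1, epsilons=[1e-6]))         # 1e9
print(ar_complexity_bound(p=2, q=2, epsilons=[1e-6, 1e-6]))   # 1e18
```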