802 research outputs found
Nonlinear programming without a penalty function or a filter
A new method is introduced for solving equality constrained nonlinear optimization problems. This method does not use a penalty function, nor a barrier or a filter, and yet can be proved to be globally convergent to first-order stationary points. It uses different trust regions to cope with the nonlinearities of the objective function and the constraints, and allows inexact SQP steps that do not lie exactly in the nullspace of the local Jacobian. Preliminary numerical experiments on CUTEr problems indicate that the method performs well.
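To make the two-trust-region idea concrete, here is a minimal Python sketch of one composite SQP-type step with separate radii for the constraints and the objective. The function name `sqp_step`, the Gauss-Newton normal step, and the projected-gradient tangential step are illustrative assumptions for this sketch, not the paper's actual algorithm.

```python
import numpy as np

def sqp_step(grad_f, c_val, J, delta_c, delta_f):
    """One illustrative composite step for min f(x) s.t. c(x) = 0 (hypothetical sketch)."""
    # Normal step: Gauss-Newton step toward feasibility, truncated to the
    # constraint trust region of radius delta_c.
    n_step = -np.linalg.pinv(J) @ c_val
    norm_n = np.linalg.norm(n_step)
    if norm_n > delta_c:
        n_step *= delta_c / norm_n

    # Tangential step: steepest descent on the objective, projected onto the
    # nullspace of J and truncated to the objective trust region of radius
    # delta_f.  Computing this projection only approximately is what steps
    # "not exactly in the nullspace" would correspond to.
    P = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J
    t_step = -P @ grad_f
    norm_t = np.linalg.norm(t_step)
    if norm_t > delta_f:
        t_step *= delta_f / norm_t

    return n_step + t_step

# Example: min x0^2 + x1^2  subject to  x0 + x1 - 1 = 0, starting at (0, 0).
x = np.zeros(2)
step = sqp_step(grad_f=2 * x, c_val=np.array([x.sum() - 1.0]),
                J=np.array([[1.0, 1.0]]), delta_c=1.0, delta_f=1.0)
print(x + step)  # moves to (0.5, 0.5), the feasible minimizer
```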
Adaptive Regularization for Nonconvex Optimization Using Inexact Function Values and Randomly Perturbed Derivatives
A regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing approximate local critical points of any order for smooth unconstrained optimization problems. For an objective function with Lipschitz continuous $p$-th derivative and given an arbitrary optimality order $q \le p$, it is shown that this algorithm will, in expectation, compute such a point in at most $O\big((\min_{j \in \{1,\ldots,q\}} \epsilon_j)^{-(p+1)/(p-q+1)}\big)$ inexact evaluations of $f$ and its derivatives whenever $q \in \{1,2\}$, where $\epsilon_j$ is the tolerance for $j$-th order accuracy. This bound becomes at most $O\big((\min_{j \in \{1,\ldots,q\}} \epsilon_j)^{-q(p+1)/p}\big)$ inexact evaluations if $q > 2$ and all derivatives are Lipschitz continuous. Moreover, these bounds are sharp in the order of the accuracy tolerances. An extension to convexly constrained problems is also outlined.
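As a sanity check on the first bound (not stated in the abstract, but a standard consequence): taking $p = 2$ (Lipschitz continuous Hessian) and $q = 1$ (first-order points) recovers the familiar sharp complexity of cubic regularization,
\[
\frac{p+1}{p-q+1} = \frac{2+1}{2-1+1} = \frac{3}{2}
\qquad\Longrightarrow\qquad
O\!\left(\epsilon_1^{-3/2}\right) \text{ evaluations.}
\]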
- …