
    Inexact Newton-Type Optimization with Iterated Sensitivities

    This paper presents and analyzes an Inexact Newton-type optimization method based on Iterated Sensitivities (INIS). A particular class of Nonlinear Programming (NLP) problems is considered, in which a subset of the variables is defined by nonlinear equality constraints. The proposed algorithm admits any problem-specific approximation of the Jacobian of these constraints. Unlike other inexact Newton methods, the INIS-type optimization algorithm is shown to preserve the local convergence properties and the asymptotic contraction rate of the Newton-type scheme that the same Jacobian approximation yields for the feasibility problem. The computational cost of the INIS approach can be made close to that of a standard inexact Newton implementation. In addition, an adjoint-free variant (AF-INIS) is presented which, under certain conditions, is considerably easier to implement than the adjoint-based scheme. The applicability of these results is motivated for dynamic optimization problems in particular, and the numerical performance of a specific open-source implementation is illustrated.
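    As a minimal illustration (not the paper's implementation), the sketch below applies a Newton-type iteration with a fixed, problem-specific Jacobian approximation M to a small feasibility problem g(z) = 0; the function and example problem are hypothetical. The INIS result concerns exactly this kind of inner iteration: the contraction rate of the full NLP scheme is shown to match the contraction rate that M induces here.

```python
# Minimal sketch, assuming a toy feasibility problem g(z) = 0 and a constant
# Jacobian approximation M. Not the authors' code; it only illustrates the
# Newton-type contraction behaviour that the INIS scheme is shown to preserve.
import numpy as np

def newton_type_feasibility(g, M, z0, tol=1e-10, max_iter=50):
    """Iterate z <- z - M^{-1} g(z) with a fixed approximation M of dg/dz.

    Converges locally at a linear rate governed by the spectral radius of
    I - M^{-1} dg/dz (quadratically if M were the exact Jacobian).
    """
    z = np.asarray(z0, dtype=float)
    M_inv = np.linalg.inv(M)  # factor once; M is a problem-specific approximation
    for k in range(max_iter):
        r = g(z)
        if np.linalg.norm(r) < tol:
            break
        z = z - M_inv @ r
    return z, k

# Hypothetical example: g(z) = [z0^2 + z1 - 1, z1^3 - z0] with a crude fixed M.
g = lambda z: np.array([z[0]**2 + z[1] - 1.0, z[1]**3 - z[0]])
M = np.array([[1.0, 1.0], [-1.0, 1.0]])  # inexact Jacobian, held fixed
z_star, iters = newton_type_feasibility(g, M, z0=[0.5, 0.5])
```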

    A Stochastic Sequential Quadratic Optimization Algorithm for Nonlinear Equality Constrained Optimization with Rank-Deficient Jacobians

    A sequential quadratic optimization algorithm is proposed for solving smooth nonlinear equality constrained optimization problems in which the objective function is defined by an expectation of a stochastic function. The algorithmic structure of the proposed method is based on a step decomposition strategy known in the literature to be widely effective in practice, wherein each search direction is computed as the sum of a normal step (toward linearized feasibility) and a tangential step (toward objective decrease in the null space of the constraint Jacobian). However, the proposed method differs from others in the literature in that it both allows the use of stochastic objective gradient estimates and possesses convergence guarantees even when the constraint Jacobians may be rank deficient. The results of numerical experiments demonstrate that the algorithm offers superior performance compared with popular alternatives.
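    The following sketch illustrates the normal/tangential step decomposition for a deterministic model of min f(x) subject to c(x) = 0. It is an assumption-laden illustration, not the paper's algorithm, which additionally replaces the gradient by a stochastic estimate and adds safeguards; the point here is that a minimum-norm least-squares solve keeps the normal step well defined even when the Jacobian J is rank deficient.

```python
# Sketch of one decomposed step for min f(x) s.t. c(x) = 0: a normal step v
# reducing the linearized infeasibility ||c + J v||, plus a tangential step u
# decreasing a quadratic model of f in the null space of J. Hypothetical
# helper, not the paper's method.
import numpy as np

def decomposed_step(grad_f, J, c, H=None):
    n = J.shape[1]
    H = np.eye(n) if H is None else H  # model Hessian (identity by default);
                                       # assumed positive definite on null(J)
    # Normal step: minimum-norm least-squares solution of J v ~= -c,
    # well defined even for rank-deficient J.
    v = np.linalg.lstsq(J, -c, rcond=None)[0]
    # Null-space basis of J via SVD (columns of Z span null(J)).
    U, s, Vt = np.linalg.svd(J)
    rank = int((s > 1e-12 * s.max()).sum()) if s.size else 0
    Z = Vt[rank:].T
    if Z.size == 0:
        return v  # no tangential degrees of freedom left
    # Tangential step u = Z w, with (Z^T H Z) w = -Z^T (grad_f + H v).
    w = np.linalg.solve(Z.T @ H @ Z, -Z.T @ (grad_f + H @ v))
    return v + Z @ w

# Usage on a rank-deficient Jacobian (second row is twice the first):
J = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])
d = decomposed_step(grad_f=np.array([1.0, 0.0, -1.0]), J=J, c=np.array([1.0, 2.0]))
```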

    Efficient Trust Region Methods for Nonconvex Optimization

    For decades, a great deal of nonlinear optimization research has focused on modeling and solving convex problems. This has been because convex objects typically represent satisfactory estimates of real-world phenomena and have mathematical properties that make their analysis relatively straightforward. However, this focus has been changing. In various important applications, such as large-scale data fitting and learning problems, researchers are turning away from simple convex models toward more challenging nonconvex models that better represent real-world behaviors and can offer more useful solutions.

    To contribute to this new focus on nonconvex optimization models, we discuss and present new techniques for solving nonconvex optimization problems that possess attractive theoretical and practical properties. First, we propose a trust region algorithm that, in the worst case, drives the norm of the gradient of the objective function below a prescribed threshold $\epsilon \in (0,\infty)$ after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, function evaluations, and derivative evaluations. This improves upon the $\mathcal{O}(\epsilon^{-2})$ bound known to hold for some other trust region algorithms and matches the $\mathcal{O}(\epsilon^{-3/2})$ bound of the recently proposed Adaptive Regularisation framework using Cubics, also known as the ARC algorithm. Our algorithm, named TRACE, follows a trust region framework but employs modified step acceptance criteria and a novel trust region update mechanism that allow it to achieve this worst-case global complexity bound. Importantly, we prove that our algorithm also attains global and fast local convergence guarantees under assumptions similar to those of other trust region algorithms. We also prove a worst-case upper bound on the number of iterations the algorithm requires to obtain an approximate second-order stationary point.

    The aforementioned algorithm is based on techniques that require an exact subproblem solution in every iteration. This is a reasonable assumption for small- to medium-scale problems, but is intractable for large-scale optimization. To address this issue, the second project of this thesis proposes a general inexact framework that contains a wide range of algorithms with optimal complexity bounds, obtained by defining a novel primal-dual subproblem together with a set of loose conditions on its inexact solution. The proposed framework enjoys the same worst-case iteration complexity bounds as TRACE for locating approximate first- and second-order stationary points, yet it does not require subproblems to be solved exactly. In addition, the framework allows the use of inexact Newton steps whenever possible, so the algorithm can employ Hessian matrix-free approaches such as the conjugate gradient method. This improves the practical performance of the algorithm, as our numerical experiments show.

    We close by proposing a globally convergent trust funnel algorithm for equality constrained optimization. Under some standard assumptions, the proposed algorithm finds a relative first-order stationary point after at most $\mathcal{O}(\epsilon^{-3/2})$ iterations, matching the complexity bound of the recently proposed Short-Step ARC algorithm. Our algorithm uses the step decomposition and feasibility control mechanism of a trust funnel algorithm, but incorporates ideas from our TRACE framework in order to achieve this complexity bound.
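    As a hedged sketch of the kind of Hessian matrix-free inexact subproblem solver the thesis's framework is designed to accommodate, the code below implements the standard Steihaug-Toint truncated conjugate gradient method for the trust region subproblem. It is not TRACE itself: the modified acceptance criteria and radius updates that yield the $\mathcal{O}(\epsilon^{-3/2})$ bound are not reproduced here.

```python
# Steihaug-Toint truncated CG for min_s g^T s + 0.5 s^T H s s.t. ||s|| <= delta.
# H is accessed only through Hessian-vector products, so it is never formed.
# A generic textbook sketch, not the thesis's TRACE algorithm.
import numpy as np

def steihaug_cg(Hv, g, delta, tol=1e-8, max_iter=200):
    """Approximately solve the trust region subproblem; Hv(v) returns H @ v."""
    s = np.zeros_like(g)
    r, d = g.copy(), -g.copy()
    if np.linalg.norm(r) < tol:
        return s
    for _ in range(max_iter):
        Hd = Hv(d)
        dHd = d @ Hd
        if dHd <= 0:  # negative curvature: follow d to the boundary
            return s + _boundary_tau(s, d, delta) * d
        alpha = (r @ r) / dHd
        if np.linalg.norm(s + alpha * d) >= delta:  # step would leave region
            return s + _boundary_tau(s, d, delta) * d
        s = s + alpha * d
        r_new = r + alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return s
        beta = (r_new @ r_new) / (r @ r)
        d = -r_new + beta * d
        r = r_new
    return s

def _boundary_tau(s, d, delta):
    # Positive root of ||s + tau * d|| = delta (s is strictly interior).
    a, b, c = d @ d, 2 * (s @ d), s @ s - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
```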