4,512 research outputs found

    Globally convergent algorithms for solving unconstrained optimization problems

    New algorithms for solving unconstrained optimization problems are presented, based on the idea of combining two types of descent directions: the anti-gradient direction and either the Newton or a quasi-Newton direction. The use of the latter directions improves the convergence rate. Global and superlinear convergence properties of these algorithms are established, and numerical experiments on a set of unconstrained test problems are reported. The proposed algorithms are also compared experimentally with similar existing methods; the comparison demonstrates the efficiency of the combined approach.
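
    The abstract does not give the precise combination rule; a common globalization (used here only as an illustrative assumption, with hypothetical names and tolerances) is to try the quasi-Newton direction first, fall back to the anti-gradient when it is not a sufficient descent direction, and take an Armijo step. A minimal Python sketch with a BFGS Hessian approximation:

        import numpy as np

        def armijo(f, x, d, g, alpha=1.0, beta=0.5, sigma=1e-4):
            """Backtracking line search satisfying the Armijo condition."""
            while f(x + alpha * d) > f(x) + sigma * alpha * g.dot(d):
                alpha *= beta
            return alpha

        def combined_descent(f, grad, x0, tol=1e-8, max_iter=200):
            """Use the quasi-Newton (BFGS) direction when it gives sufficient
            descent; otherwise fall back to the anti-gradient direction."""
            x = np.asarray(x0, dtype=float)
            B = np.eye(x.size)                 # BFGS approximation of the Hessian
            for _ in range(max_iter):
                g = grad(x)
                if np.linalg.norm(g) < tol:
                    break
                d = np.linalg.solve(B, -g)     # quasi-Newton direction
                if g.dot(d) > -1e-10 * np.linalg.norm(g) * np.linalg.norm(d):
                    d = -g                     # anti-gradient fallback
                s = armijo(f, x, d, g) * d
                y = grad(x + s) - g
                if y.dot(s) > 1e-12:           # BFGS update (curvature condition)
                    Bs = B @ s
                    B += np.outer(y, y) / y.dot(s) - np.outer(Bs, Bs) / s.dot(Bs)
                x = x + s
            return x

        # Example: the Rosenbrock function from a poor starting point.
        f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        grad = lambda x: np.array([-2*(1 - x[0]) - 400*x[0]*(x[1] - x[0]**2),
                                   200*(x[1] - x[0]**2)])
        print(combined_descent(f, grad, [-1.2, 1.0]))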

    Modern Homotopy Methods in Optimization

    Probability-one homotopy methods are a class of algorithms for solving nonlinear systems of equations that are accurate, robust, and converge from an arbitrary starting point almost surely. These techniques have been successfully applied to solve Brouwer fixed point problems, polynomial systems of equations, and discretizations of nonlinear two-point boundary value problems based on shooting, finite differences, collocation, and finite elements. This paper summarizes the theory of globally convergent homotopy algorithms for unconstrained and constrained optimization and gives some examples of actual applications of homotopy techniques to engineering optimization problems.
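
    As a concrete illustration of the construction (standard in this literature, though not spelled out in the abstract), for unconstrained minimization of a smooth function f the zero-finding problem \nabla f(x) = 0 is embedded in the parametrized family

        \rho_a(\lambda, x) = \lambda \nabla f(x) + (1 - \lambda)(x - a), \qquad \lambda \in [0, 1].

    For almost every starting point a, the zero set of \rho_a contains a smooth curve emanating from (0, a) that reaches a point (1, \bar{x}) with \nabla f(\bar{x}) = 0; the algorithm tracks this zero curve numerically, which is the source of the "probability-one" global convergence.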

    A Survey of Probability-One Homotopy Methods for Engineering Optimization

    Probability-one homotopy methods are a class of algorithms for solving nonlinear systems of equations that are accurate, robust, and converge from an arbitrary starting point almost surely. These globally convergent homotopy techniques have been successfully applied to solve Brouwer fixed point problems, polynomial systems of equations, discretizations of nonlinear two-point boundary value problems based on shooting, finite differences, collocation, and finite elements, and Galerkin approximations to nonlinear partial differential equations. This paper surveys the basic theory of globally convergent probability-one homotopy algorithms relevant to optimization, describes some computer algorithms and mathematical software, and applies homotopy theory to unconstrained optimization, constrained optimization, and global optimization of polynomial programs. In addition, two realistic engineering applications (optimal design of composite laminated plates and fuel-optimal orbital satellite maneuvers) are presented.
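
    For the polynomial-programming case, the standard construction in polynomial homotopy continuation (not detailed in this abstract) deforms a start system G(x) = 0 with known roots into the target system P(x) = 0:

        H(x, \lambda) = (1 - \lambda)\,\gamma\,G(x) + \lambda\,P(x), \qquad \lambda \in [0, 1],

    where \gamma is a random complex constant. For almost all \gamma the solution paths beginning at the roots of G at \lambda = 0 remain smooth and reach all isolated roots of P at \lambda = 1, which is what makes global optimization of polynomial programs tractable by curve tracking.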

    Globally Convergent Homotopy Algorithms for Nonlinear Systems of Equations

    Probability-one homotopy methods are a class of algorithms for solving nonlinear systems of equations that are accurate, robust, and converge from an arbitrary starting point almost surely. These globally convergent homotopy techniques have been successfully applied to solve Brouwer fixed point problems, polynomial systems of equations, constrained and unconstrained optimization problems, discretizations of nonlinear two-point boundary value problems based on shooting, finite differences, collocation, and finite elements, and finite difference, collocation, and Galerkin approximations to nonlinear partial differential equations. This paper introduces, in a tutorial fashion, the theory of globally convergent homotopy algorithms, describes some computer algorithms and mathematical software, and presents several nontrivial engineering applications.
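
    Production codes such as HOMPACK track the zero curve by arclength with predictor-corrector steps; the sketch below instead uses naive lambda-stepping with Newton correction, which can fail when the curve turns back in lambda but illustrates the basic embedding rho(lambda, x) = lambda*F(x) + (1 - lambda)*(x - a). Function names and step counts are illustrative assumptions.

        import numpy as np

        def track_homotopy(F, J, a, steps=100, tol=1e-10):
            """Trace the zero curve of rho(lam, x) = lam*F(x) + (1-lam)*(x - a)
            from (0, a) toward lam = 1 by stepping lam and Newton-correcting x.
            (Naive embedding; probability-one codes track by arclength.)"""
            x = np.array(a, dtype=float)
            for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
                for _ in range(50):          # Newton correction at fixed lam
                    r = lam * F(x) + (1.0 - lam) * (x - a)
                    if np.linalg.norm(r) < tol:
                        break
                    Jr = lam * J(x) + (1.0 - lam) * np.eye(x.size)
                    x = x - np.linalg.solve(Jr, r)
            return x

        # Example: a small nonlinear system F(x) = 0.
        F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
        J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
        print(track_homotopy(F, J, a=[1.0, 0.5]))   # approaches (sqrt(2), sqrt(2))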

    A Primal-Dual Augmented Lagrangian

    Nonlinearly constrained optimization problems can be solved by minimizing a sequence of simpler unconstrained or linearly constrained subproblems. In this paper, we discuss the formulation of subproblems in which the objective is a primal-dual generalization of the Hestenes-Powell augmented Lagrangian function. This generalization has the crucial feature that it is minimized with respect to both the primal and the dual variables simultaneously. A benefit of this approach is that the quality of the dual variables is monitored explicitly during the solution of the subproblem. Moreover, each subproblem may be regularized by imposing explicit bounds on the dual variables. Two primal-dual variants of conventional primal methods are proposed: a primal-dual bound constrained Lagrangian (pdBCL) method and a primal-dual ℓ1 linearly constrained Lagrangian (pdℓ1-LCL) method.
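
    For context, the classical Hestenes-Powell augmented Lagrangian for minimizing f(x) subject to c(x) = 0, with multiplier estimate y_e and penalty parameter \mu > 0, is

        L_A(x; y_e, \mu) = f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|_2^2.

    The primal-dual generalization adds a term penalizing the deviation of the dual variables y from the first-order multiplier estimate, so that the function is minimized jointly in (x, y); one form appearing in the related literature (the exact parametrization is an assumption, not taken from this abstract) is

        M(x, y; y_e, \mu) = f(x) - c(x)^T y_e + \frac{1}{2\mu}\,\|c(x)\|_2^2 + \frac{1}{2\mu}\,\|c(x) + \mu\,(y - y_e)\|_2^2.

    Because the subproblem objective involves y explicitly, the quality of the dual variables can be monitored while the subproblem is solved, and explicit bounds on y provide the regularization mentioned above.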