2,091 research outputs found

    A comprehensive view on optimization: reasonable descent

    Reasonable descent is a novel, transparent approach to a well-established field: the deep methods and applications of the complete analysis of continuous optimization problems. Standard reasonable descents give a unified approach to all standard necessary conditions, including the Lagrange multiplier rule, the Karush-Kuhn-Tucker conditions and the second-order conditions. Nonstandard reasonable descents lead to new necessary conditions. These can be used to give surprising proofs of deep central results outside optimization: the fundamental theorem of algebra, the maximum and minimum principles of complex function theory, the separation theorems for convex sets, the orthogonal diagonalization of symmetric matrices and the implicit function theorem. These optimization proofs compare favorably with the usual proofs and are all based on the same strategy. This paper is addressed to all practitioners of optimization methods from many fields who are interested in fully understanding the foundations of these methods and of the central results above.
    Keywords: optimization; fundamental theorem of algebra; Lagrange multiplier; Karush-Kuhn-Tucker; descent; implicit function theorem; necessary conditions; orthogonal diagonalization
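
    For orientation, the Karush-Kuhn-Tucker conditions referred to above can be stated in their generic textbook form; this is the standard statement, not the paper's reasonable-descent derivation. For $\min_x f(x)$ subject to $g_i(x) \le 0$ and $h_j(x) = 0$, a local minimizer $x^*$ satisfying a constraint qualification admits multipliers $\mu \ge 0$ and $\lambda$ with
        \begin{align*}
        \nabla f(x^*) + \sum_{i} \mu_i \nabla g_i(x^*) + \sum_{j} \lambda_j \nabla h_j(x^*) &= 0,\\
        \mu_i\, g_i(x^*) &= 0 \quad \text{for all } i.
        \end{align*}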

    Optimality conditions and duality for nondifferentiable multiobjective programming problems involving d-r-type I functions

    In this paper, new classes of nondifferentiable functions constituting multiobjective programming problems are introduced. Namely, the classes of d-r-type I objective and constraint functions and, moreover, various classes of generalized d-r-type I objective and constraint functions are defined for directionally differentiable multiobjective programming problems. Sufficient optimality conditions and various Mond–Weir duality results are proved for nondifferentiable multiobjective programming problems involving functions of this type. Finally, it is shown that the introduced d-r-type I notion with r ≠ 0 is not a sufficient condition for Wolfe weak duality to hold. These results are illustrated by suitable examples.
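
    As background for the duality results mentioned above, the Mond–Weir dual is sketched below in its basic differentiable, single-objective form; this is a generic textbook scheme, which the paper extends to directionally differentiable multiobjective problems with d-r-type I functions. For the primal problem $\min_x f(x)$ subject to $g(x) \le 0$, the Mond–Weir dual is
        \begin{align*}
        \max_{y,\,\mu}\ & f(y)\\
        \text{s.t.}\ & \nabla f(y) + \nabla g(y)^{T}\mu = 0,\\
        & \mu^{T} g(y) \ge 0, \quad \mu \ge 0,
        \end{align*}
    and weak duality asserts $f(x) \ge f(y)$ for every primal-feasible $x$ and dual-feasible $(y,\mu)$ under suitable (generalized) convexity assumptions.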

    Multiplier-continuation algorithms for constrained optimization

    Several path-following algorithms for constrained optimization are described. They combine three smooth penalty functions (the quadratic penalty for equality constraints, and the quadratic loss and log barrier for inequality constraints), their modern counterparts, the augmented Lagrangian or multiplier methods, sequential quadratic programming, and predictor-corrector continuation. In the first phase of this methodology, one minimizes the unconstrained or linearly constrained penalty function or augmented Lagrangian. A homotopy path generated from these functions is then followed to optimality using efficient predictor-corrector continuation methods. The continuation steps are asymptotic to those taken by sequential quadratic programming, which can be used in the final steps. Numerical test results show the method to be efficient, robust, and a competitive alternative to sequential quadratic programming.
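
    As a point of reference for the first phase described above, the sketch below (assuming NumPy and SciPy are available) minimizes a quadratic-penalty function for an equality-constrained toy problem while increasing the penalty parameter along a path, warm-starting each solve from the previous solution. It illustrates only the generic penalty-continuation idea, not the specific multiplier-continuation algorithms of the paper.

        import numpy as np
        from scipy.optimize import minimize

        # Toy problem: minimize f(x) = x1^2 + x2^2 subject to h(x) = x1 + x2 - 1 = 0.
        f = lambda x: x[0]**2 + x[1]**2
        h = lambda x: x[0] + x[1] - 1.0

        def quadratic_penalty(x, rho):
            # Classic quadratic penalty: f(x) + (rho/2) * h(x)^2.
            return f(x) + 0.5 * rho * h(x)**2

        x = np.array([0.0, 0.0])  # starting point of the homotopy path
        for rho in [1.0, 10.0, 100.0, 1000.0]:
            # Warm-start each unconstrained minimization from the previous point on the path.
            res = minimize(quadratic_penalty, x, args=(rho,), method="BFGS")
            x = res.x
            # rho * h(x) approximates the Lagrange multiplier along the path.
            print(f"rho={rho:7.1f}  x={x}  multiplier estimate={rho * h(x):.4f}")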

    Continuum Equilibria and Global Optimization for Routing in Dense Static Ad Hoc Networks

    We consider massively dense ad hoc networks and study their continuum limits as the node density increases and as the graph providing the available routes becomes a continuous area with location- and congestion-dependent costs. We study both the globally optimal solution and the non-cooperative routing problem among a large population of users, where each user seeks a path from its origin to its destination so as to minimize its individual cost. Finally, we seek a (continuum version of the) Wardrop equilibrium. We first show how to derive meaningful cost models as a function of the scaling properties of the capacity of the network and of the density of nodes. We present various solution methodologies for the problem: (1) the viscosity solution of the Hamilton-Jacobi-Bellman equation, for the global optimization problem, (2) a method based on Green's Theorem for the least-cost problem of an individual, and (3) a solution of the Wardrop equilibrium problem using a transformation into an equivalent global optimization problem.
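
    As a point of reference for the first methodology, the global (continuum) routing problem is commonly written as a Hamilton-Jacobi-Bellman equation of eikonal type; the generic form below is an illustration and not necessarily the exact cost model used in the paper:
        \[
        \lvert \nabla V(x) \rvert = c\bigl(x, \rho(x)\bigr), \qquad V = 0 \ \text{on the destination set},
        \]
    where $V(x)$ is the minimal cost-to-go from location $x$, $c$ is the local, possibly congestion-dependent, cost per unit distance and $\rho$ the traffic density; an individual's least-cost path then follows the direction of $-\nabla V$. The continuum Wardrop condition requires that all routes actually used between an origin and a destination have equal, minimal cost.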