
    Geometric approach to Fletcher's ideal penalty function

    Original article can be found at www.springerlink.com. Copyright Springer. [Originally produced as UH Technical Report 280, 1993] In this note, we derive a geometric formulation of an ideal penalty function for equality constrained problems. This differentiable penalty function requires no parameter estimation or adjustment, has numerical conditioning similar to that of the target function from which it is constructed, and has the desirable property that the strict second-order constrained minima of the target function are precisely those strict second-order unconstrained minima of the penalty function which satisfy the constraints. Such a penalty function can be used to establish termination properties for algorithms which avoid ill-conditioned steps. Numerical values for the penalty function and its derivatives can be calculated efficiently using automatic differentiation techniques. Peer reviewed
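    As a rough illustration of the kind of differentiable parameter-free penalty described above (a minimal sketch in the spirit of Fletcher's construction, not the paper's geometric formulation; the test problem and all function names are my own), one can subtract a least-squares multiplier estimate times the constraint residual from the target function and check that the constrained minimizer is a stationary point of the penalty:

    ```python
    import numpy as np

    # Sketch of a differentiable penalty in Fletcher's spirit:
    #   phi(x) = f(x) - lambda(x)^T c(x),
    # with lambda(x) the least-squares multiplier estimate
    #   lambda(x) = (A A^T)^{-1} A grad_f(x),  A = Jacobian of c at x.
    # Toy problem:  min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0  (minimizer (0.5, 0.5)).

    def f(x):                          # target function
        return x[0]**2 + x[1]**2

    def grad_f(x):
        return np.array([2 * x[0], 2 * x[1]])

    def c(x):                          # equality constraint, c(x) = 0 when feasible
        return np.array([x[0] + x[1] - 1.0])

    def A(x):                          # constraint Jacobian (1 x 2)
        return np.array([[1.0, 1.0]])

    def phi(x):
        J = A(x)
        lam = np.linalg.solve(J @ J.T, J @ grad_f(x))   # least-squares multipliers
        return f(x) - lam @ c(x)

    # At the constrained minimizer, phi agrees with f and is stationary.
    x_star = np.array([0.5, 0.5])
    h = 1e-6
    g = np.array([(phi(x_star + h * e) - phi(x_star - h * e)) / (2 * h)
                  for e in np.eye(2)])
    print(phi(x_star))                  # equals f(x_star) = 0.5, since c(x_star) = 0
    print(np.allclose(g, 0.0, atol=1e-6))   # True: x_star is stationary for phi
    ```

    No penalty parameter appears anywhere in `phi`, matching the abstract's "no parameter estimation or adjustment" property; the multiplier estimate is recomputed from the geometry at each point.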

    A Unified Approach to the Global Exactness of Penalty and Augmented Lagrangian Functions I: Parametric Exactness

    In this two-part study we develop a unified approach to the analysis of the global exactness of various penalty and augmented Lagrangian functions for finite-dimensional constrained optimization problems. This approach allows one to verify in a simple and straightforward manner whether a given penalty/augmented Lagrangian function is exact, i.e. whether the problem of unconstrained minimization of this function is equivalent (in some sense) to the original constrained problem, provided the penalty parameter is sufficiently large. Our approach is based on the so-called localization principle that reduces the study of global exactness to a local analysis of a chosen merit function near globally optimal solutions. In turn, such local analysis can usually be performed with the use of sufficient optimality conditions and constraint qualifications. In the first paper we introduce the concept of global parametric exactness and derive the localization principle in the parametric form. With the use of this version of the localization principle we recover existing simple necessary and sufficient conditions for the global exactness of linear penalty functions, and for the existence of augmented Lagrange multipliers of Rockafellar-Wets' augmented Lagrangian. Also, we obtain completely new necessary and sufficient conditions for the global exactness of general nonlinear penalty functions, and for the global exactness of a continuously differentiable penalty function for nonlinear second-order cone programming problems. We briefly discuss how one can construct a continuously differentiable exact penalty function for nonlinear semidefinite programming problems, as well. Comment: 34 pages. arXiv admin note: text overlap with arXiv:1710.0196
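    The exactness threshold described above can be seen numerically on a one-dimensional toy problem (my own example, not from the paper): for min x^2 subject to x - 1 = 0, the Lagrange multiplier is lambda* = -2, and the linear (l1) penalty F_c(x) = f(x) + c|g(x)| becomes exact once c exceeds |lambda*|:

    ```python
    # Linear (l1) penalty for  min x^2  s.t.  x - 1 = 0.
    # Lagrange multiplier at x* = 1 is lambda* = -2, so the penalty is
    # exact (unconstrained minimizer = constrained minimizer) for c > 2.

    def F(x, c):
        return x**2 + c * abs(x - 1.0)

    def argmin_on_grid(c, lo=-2.0, hi=2.0, n=400001):
        # brute-force grid minimization, fine enough for this 1-D illustration
        xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
        return min(xs, key=lambda x: F(x, c))

    print(round(argmin_on_grid(c=1.0), 3))  # 0.5 -> c too small, minimizer infeasible
    print(round(argmin_on_grid(c=4.0), 3))  # 1.0 -> c > |lambda*|, penalty is exact
    ```

    This is exactly the behavior the localization principle is meant to certify in general: below the threshold the unconstrained minimizer drifts off the feasible set, above it the two problems share their solutions.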

    A unifying theory of exactness of linear penalty functions II: parametric penalty functions

    In this article we develop a general theory of exact parametric penalty functions for constrained optimization problems. The main advantage of the method of parametric penalty functions is the fact that a parametric penalty function can be both smooth and exact, unlike the standard (i.e. non-parametric) exact penalty functions, which are always nonsmooth. We obtain several necessary and/or sufficient conditions for the exactness of parametric penalty functions, and for the zero duality gap property to hold true for these functions. We also prove some convergence results for the method of parametric penalty functions, and derive necessary and sufficient conditions for a parametric penalty function to have no stationary points outside the set of feasible points of the constrained optimization problem under consideration. In the second part of the paper, we apply the general theory of exact parametric penalty functions to a class of parametric penalty functions introduced by Huyer and Neumaier, and to smoothing approximations of nonsmooth exact penalty functions. The general approach adopted in this article allows us to unify and significantly sharpen many existing results on parametric penalty functions. Comment: This is a slightly edited version of the Accepted Manuscript of an article published by Taylor & Francis in Optimization on 06/07/201
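    The smoothing approximations mentioned at the end of the abstract can be illustrated on the same kind of toy problem (my own example; the paper's Huyer-Neumaier class is more general). Replacing the nonsmooth term |g(x)| by the smooth sqrt(g(x)^2 + eps^2), the penalty becomes differentiable everywhere, and its minimizer approaches the constrained solution as eps shrinks:

    ```python
    import math

    # Smoothed l1 penalty for  min x^2  s.t.  x - 1 = 0:
    #   F(x) = x^2 + c * sqrt((x - 1)^2 + eps^2)
    # Differentiable for eps > 0; with c held above |lambda*| = 2, the
    # unconstrained minimizer tends to the feasible point x* = 1 as eps -> 0.

    def F(x, c, eps):
        return x**2 + c * math.sqrt((x - 1.0)**2 + eps**2)

    def argmin_on_grid(c, eps, lo=0.0, hi=2.0, n=200001):
        xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
        return min(xs, key=lambda x: F(x, c, eps))

    for eps in (1e-1, 1e-2, 1e-3):
        print(eps, argmin_on_grid(c=4.0, eps=eps))
    # the minimizers move monotonically toward the feasible point x = 1
    ```

    The trade-off this exposes, smoothness for eps > 0 versus exactness only in the limit, is precisely what a parametric penalty (with eps as the parameter) is designed to manage.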

    Multidimensional optimization algorithms numerical results

    This paper presents some multidimensional optimization algorithms. By using the "penalty function" method, these algorithms are applied to an entire class of economic optimization problems. Comparative numerical results of certain new multidimensional optimization algorithms on some test problems known in the literature are shown.
    Keywords: optimization algorithm, multidimensional optimization, penalty function
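    The classical penalty-function loop underlying approaches like the one above can be sketched in a few lines (a minimal sketch with a toy problem and inner solver of my own choosing, not the paper's algorithms): minimize the target plus a quadratic penalty on the constraint violation, increase the penalty parameter each round, and warm-start from the previous minimizer.

    ```python
    # Quadratic penalty method for  min x^2  s.t.  x - 1 = 0:
    # repeatedly minimize  x^2 + mu*(x - 1)^2  for growing mu, warm-starting
    # each inner solve from the previous answer. Gradient descent stands in
    # for the inner solver here.

    def grad_penalized(x, mu):          # d/dx [ x^2 + mu*(x - 1)^2 ]
        return 2 * x + 2 * mu * (x - 1.0)

    def inner_min(x, mu, iters=200):
        lr = 1.0 / (2.0 + 2.0 * mu)     # step sized to the quadratic's curvature
        for _ in range(iters):
            x -= lr * grad_penalized(x, mu)
        return x

    x = 0.0
    for mu in (1.0, 10.0, 100.0, 1000.0):
        x = inner_min(x, mu)            # exact subproblem minimizer is mu/(1+mu)
    print(round(x, 2))                  # -> 1.0, close to the constrained x* = 1
    ```

    The ill-conditioning of the inner problems as mu grows (curvature 2 + 2*mu) is the standard weakness of this scheme, and the motivation for the exact and parameter-free penalties surveyed in the entries above.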