501,096 research outputs found

    Two approaches toward constrained vector optimization and identity of the solutions

    In this paper we deal with a Fritz John type constrained vector optimization problem. Although there are many concepts of solutions for an unconstrained vector optimization problem, we show that the number of concepts can, in a sense, be doubled when a constrained problem is considered. In particular, we introduce sense I and sense II isolated minimizers, properly efficient points, efficient points and weakly efficient points. As motivation for these concepts, we give some results concerning optimality conditions in constrained vector optimization and stability properties of isolated minimizers and properly efficient points. Our main investigation and results concern relations between the sense I and sense II concepts. These relations are proved mostly under convexity-type conditions. Key words: Constrained vector optimization, Optimality conditions, Stability, Type of solutions and their identity, Vector optimization and convexity type conditions.
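    For context, the classical solution notions that the sense I and sense II variants refine can be stated as follows; the notation (f, g and the cones C, K) is assumed here for illustration and is not taken from the abstract.

```latex
% Standard setting (notation assumed for illustration):
% f : R^n -> R^m, g : R^n -> R^p, with C and K closed convex cones, int C nonempty.
% The constrained vector problem is
\[
  \min\nolimits_{C} \; f(x) \quad \text{subject to} \quad g(x) \in -K .
\]
% Classical solution concepts for a feasible point x^0:
%   weakly efficient: no feasible x with f(x) \in f(x^0) - \operatorname{int} C;
%   efficient:        no feasible x with f(x) \in f(x^0) - (C \setminus \{0\}).
```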

    Certificates of infeasibility via nonsmooth optimization

    An important aspect of the solution process of constraint satisfaction problems is identifying exclusion boxes, i.e., boxes that contain no feasible point. This paper presents a certificate of infeasibility for finding such boxes by solving a linearly constrained nonsmooth optimization problem. Furthermore, the constructed certificate can be used to enlarge an exclusion box by solving a nonlinearly constrained nonsmooth optimization problem.
    Comment: arXiv admin note: substantial text overlap with arXiv:1506.0802
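    For intuition only, the sketch below shows what certifying an exclusion box means: a rigorous bound on the constraint over the box rules out any feasible point. The constraint g(x) = x_1^2 + x_2^2 - 1 = 0 and the naive interval bound are our own illustrative choices; the paper instead obtains its certificate by solving a linearly constrained nonsmooth optimization problem, which is not reproduced here.

```python
# Illustrative sketch only: a box [lo, hi] is an exclusion box for g(x) = 0
# if an enclosure of g over the box does not contain 0.  The constraint and
# the naive interval arithmetic below are assumptions, not the paper's method.
import numpy as np

def enclose_g(lo, hi):
    """Naive interval enclosure of g(x) = x_1^2 + x_2^2 - 1 over the box [lo, hi]."""
    sq_lo = np.where(lo * hi > 0, np.minimum(lo**2, hi**2), 0.0)  # lower bounds of x_i^2
    sq_hi = np.maximum(lo**2, hi**2)                              # upper bounds of x_i^2
    return sq_lo.sum() - 1.0, sq_hi.sum() - 1.0

def is_exclusion_box(lo, hi):
    g_lo, g_hi = enclose_g(np.asarray(lo, float), np.asarray(hi, float))
    return g_lo > 0.0 or g_hi < 0.0   # 0 outside the enclosure => no feasible point

print(is_exclusion_box([1.2, 1.2], [1.5, 1.5]))  # True: the box misses the circle
print(is_exclusion_box([0.0, 0.0], [1.0, 1.0]))  # False: the box meets the circle
```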

    Progressive construction of a parametric reduced-order model for PDE-constrained optimization

    An adaptive approach to using reduced-order models as surrogates in PDE-constrained optimization is introduced that breaks the traditional offline-online framework of model order reduction. A sequence of optimization problems constrained by a given Reduced-Order Model (ROM) is defined with the goal of converging to the solution of a given PDE-constrained optimization problem. For each reduced optimization problem, the constraining ROM is trained by sampling the High-Dimensional Model (HDM) at the solutions of some of the previous problems in the sequence. The reduced optimization problems are equipped with a nonlinear trust region, based on a residual error indicator, that keeps the optimization trajectory in a region of the parameter space where the ROM is accurate. A technique for incorporating sensitivities into a Reduced-Order Basis (ROB) is also presented, along with a methodology for computing reduced-order model sensitivities that minimize the distance to the corresponding HDM sensitivities in a suitable norm. The proposed reduced optimization framework is applied to subsonic aerodynamic shape optimization and shown to reduce the number of queries to the HDM by a factor of 4-5 compared to solving the optimization problem using only the HDM, with errors in the optimal solution well below 0.1%.
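    The adaptive framework can be illustrated, in spirit only, by a generic surrogate-based trust-region loop: fit a cheap model to samples of the expensive model, minimize it inside a trust region, query the expensive model at the candidate, and grow or shrink the region according to how well the surrogate predicted the decrease. The toy code below is our own sketch with invented names (hdm, fit_surrogate) and a quadratic least-squares surrogate in place of a projection-based ROM; it is not the paper's algorithm.

```python
# Toy sketch of a surrogate/trust-region loop (assumed stand-ins, not the paper's ROM).
import numpy as np

def hdm(x):
    """Stand-in for the expensive high-dimensional model (e.g. a PDE solve)."""
    return float(np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5.0 * x)))

def fit_surrogate(X, F):
    """Cheap surrogate fitted to HDM samples: least-squares model
       q(x) = a + b.x + c*|x|^2 (illustrative; not a projection-based ROM)."""
    A = np.hstack([np.ones((len(X), 1)), X, (X ** 2).sum(axis=1, keepdims=True)])
    coef, *_ = np.linalg.lstsq(A, F, rcond=None)
    return lambda x: coef[0] + coef[1:-1] @ x + coef[-1] * (x @ x)

def surrogate_trust_region_opt(x0, n_outer=10, radius=0.5):
    rng = np.random.default_rng(0)
    x = np.array(x0, dtype=float)
    fx = hdm(x)
    X, F = [x.copy()], [fx]                                  # training set of HDM samples
    for _ in range(n_outer):
        rom = fit_surrogate(np.array(X), np.array(F))        # (re)train the surrogate
        cand = x + radius * rng.uniform(-1.0, 1.0, (200, x.size))
        x_new = cand[np.argmin([rom(c) for c in cand])]      # minimize surrogate in the region
        f_new = hdm(x_new)                                   # query the expensive model
        X.append(x_new.copy()); F.append(f_new)              # enrich the training set
        pred = fx - rom(x_new)                               # predicted decrease
        rho = (fx - f_new) / pred if pred > 1e-12 else 0.0   # accuracy ratio
        radius = 2.0 * radius if rho > 0.75 else 0.5 * radius  # adapt the trust region
        if f_new < fx:
            x, fx = x_new, f_new
    return x, fx

print(surrogate_trust_region_opt(np.zeros(2)))
```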

    Augmented Lagrangian Functions for Cone Constrained Optimization: the Existence of Global Saddle Points and Exact Penalty Property

    In this article we present a general theory of augmented Lagrangian functions for cone constrained optimization problems that allows one to study almost all known augmented Lagrangians for cone constrained programs within a unified framework. We develop a new general method for proving the existence of global saddle points of augmented Lagrangian functions, called the localization principle. The localization principle unifies, generalizes and sharpens most of the known results on the existence of global saddle points and, in essence, reduces the existence question to a local analysis of optimality conditions. With the use of the localization principle we obtain the first necessary and sufficient conditions for the existence of a global saddle point of an augmented Lagrangian for cone constrained minimax problems, via both second and first order optimality conditions. In the second part of the paper, we present a general approach to the construction of globally exact augmented Lagrangian functions. This approach allows us not only to sharpen most of the existing results on globally exact augmented Lagrangians, but also to construct the first globally exact augmented Lagrangian functions for equality constrained optimization problems, for nonlinear second order cone programs and for nonlinear semidefinite programs. These globally exact augmented Lagrangians can be used to design new superlinearly (or even quadratically) convergent optimization methods for cone constrained optimization problems.
    Comment: This is a preprint of an article published by Springer in the Journal of Global Optimization (2018). The final authenticated version is available online at: http://dx.doi.org/10.1007/s10898-017-0603-
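    As background (standard material, not taken from the article): a classical member of the family of augmented Lagrangians referred to above is the Hestenes-Powell-Rockafellar function, together with its usual extension to a cone constraint.

```latex
% Classical Hestenes--Powell--Rockafellar augmented Lagrangian for
%   min f(x)  subject to  h(x) = 0 :
\[
  L_c(x,\lambda) \;=\; f(x) \;+\; \langle \lambda, h(x)\rangle \;+\; \tfrac{c}{2}\,\|h(x)\|^2 .
\]
% Its standard extension to a cone constraint  g(x) \in K  (K a closed convex cone):
\[
  L_c(x,\lambda) \;=\; f(x) \;+\; \tfrac{c}{2}\,\operatorname{dist}^2\!\Big(g(x) + \tfrac{\lambda}{c},\, K\Big) \;-\; \tfrac{\|\lambda\|^2}{2c},
\]
% which reduces to the equality constrained form when K = \{0\}.
```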