14,579 research outputs found

    Global optimality conditions and optimization methods for polynomial programming problems and their applications

    The polynomial programming problem, which has a polynomial objective function and either no constraints or polynomial constraints, occurs frequently in engineering design, investment science, control theory, network distribution, signal processing and location-allocation contexts. Moreover, the polynomial programming problem is known to be NP-hard. The problem has attracted a lot of attention and includes quadratic, cubic, homogeneous and normal quartic programming problems as special cases. Existing methods for solving polynomial programming problems include algebraic methods and various convex relaxation methods; among these, semidefinite programming (SDP) and sum of squares (SOS) relaxations are especially popular. Theoretically, SDP and SOS relaxation methods are very powerful and successful in solving the general polynomial programming problem with a compact feasible region. In practice, however, solvability depends on the size and degree of the polynomial programming problem and on the required accuracy, so solving large-scale SDP problems remains a computational challenge. It is well known that traditional local optimization methods are designed from necessary local optimality conditions, i.e., the Karush-Kuhn-Tucker (KKT) conditions. Motivated by this, some researchers proposed a necessary global optimality condition for quadratic programming problems and designed a new local optimization method based on it. In this thesis, we apply this idea to cubic and quartic programming problems, and further to general unconstrained and constrained polynomial programming problems. For these polynomial programming problems, we investigate necessary global optimality conditions and design new local optimization methods based on these conditions. The necessary global optimality conditions are generally stronger than the KKT conditions, so the new local minimizers obtained by the new local optimization methods may improve on some KKT points. Our ultimate aim is to design global optimization methods for these polynomial programming problems. We note that the filled function method is one of the well-known and practical auxiliary function methods for reaching a global minimizer, and we design global optimization methods by combining the newly proposed local optimization methods with suitable auxiliary functions. Numerical examples illustrate the efficiency and stability of the optimization methods. Finally, we discuss applications to sensor network localization problems and systems of polynomial equations. It is worth mentioning that we also apply these ideas and results to nonlinear programming problems (NLP): we provide an optimality condition, design new local optimization methods based on it, and design global optimization methods for problem (NLP) by combining the new local optimization methods with an auxiliary function. To test the performance of the global optimization methods, we compare them with two other heuristic methods; the results demonstrate that our methods outperform the other two algorithms. Doctor of Philosophy
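
    As a rough, self-contained illustration of the SOS/SDP relaxation idea mentioned in this abstract (and not of the thesis's own local or filled-function methods), the sketch below lower-bounds the global minimum of a small univariate quartic by asking for the largest γ such that p(x) - γ is a sum of squares; the example polynomial, the monomial basis and the use of cvxpy are assumptions made purely for illustration.

```python
import cvxpy as cp

# Illustrative polynomial (not from the thesis): p(x) = x^4 - 3x^3 + 2x^2 + 1.
# We search for the largest gamma such that p(x) - gamma is a sum of squares,
# i.e. p(x) - gamma = z(x)^T Q z(x) with z(x) = [1, x, x^2] and Q PSD.
Q = cp.Variable((3, 3), symmetric=True)
gamma = cp.Variable()

constraints = [
    Q >> 0,                      # Q must be positive semidefinite
    Q[0, 0] == 1 - gamma,        # constant term of p(x) - gamma
    2 * Q[0, 1] == 0,            # x^1 coefficient
    2 * Q[0, 2] + Q[1, 1] == 2,  # x^2 coefficient
    2 * Q[1, 2] == -3,           # x^3 coefficient
    Q[2, 2] == 1,                # x^4 coefficient
]

prob = cp.Problem(cp.Maximize(gamma), constraints)
prob.solve()
print("SOS lower bound on min p(x):", gamma.value)
```

    For a univariate polynomial the SOS bound is tight, so γ coincides with the true global minimum; for multivariate problems the same pattern gives the first level of the SDP relaxation hierarchy, whose size grows quickly with the degree and the number of variables, which is the computational bottleneck the abstract refers to.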

    Global optimality conditions and optimization methods for constrained polynomial programming problems

    The general constrained polynomial programming problem (GPP) is considered in this paper. Problem (GPP) has a broad range of applications and is proved to be NP-hard. Necessary global optimality conditions for problem (GPP) are established. A new local optimization method for this problem is then proposed by exploiting these necessary global optimality conditions, and a global optimization method is obtained by combining this local optimization method with an auxiliary function. Numerical examples are given to illustrate that these approaches are very efficient. (C) 2015 Elsevier Inc. All rights reserved.
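
    The interplay between a local optimization method and an auxiliary (filled) function described in this abstract can be pictured as a simple two-phase loop; the minimal sketch below uses an illustrative multimodal objective, a particular auxiliary function and parameters that are all assumptions for illustration, not the construction proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative multimodal polynomial objective (not from the paper):
# each coordinate has two wells of different depth, so a purely local
# method started in the shallow well stays there.
def f(x):
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**4 - 4 * x**2 + x))

# A simple auxiliary ("filled") function sketch: moving away from the current
# minimizer x_star is rewarded, while points that are not below f(x_star) are
# penalized.  This particular form and the parameter r are assumptions.
def filled(x, x_star, f_star, r=1e-3):
    gap = max(f(x) - f_star, 0.0)
    return -np.linalg.norm(np.asarray(x) - x_star) ** 2 + r * gap ** 2

def global_search(x0, rounds=5, tries=10, seed=0):
    rng = np.random.default_rng(seed)
    x_star = minimize(f, x0).x                     # phase 1: local minimization
    for _ in range(rounds):
        f_star, improved = f(x_star), False
        for _ in range(tries):
            # phase 2: perturb, minimize the auxiliary function to escape the
            # current basin, then re-minimize the objective from the new point
            x_start = x_star + rng.normal(scale=0.5, size=x_star.shape)
            x_escape = minimize(filled, x_start, args=(x_star, f_star)).x
            x_cand = minimize(f, x_escape).x
            if f(x_cand) < f_star - 1e-8:          # keep strict improvements only
                x_star, improved = x_cand, True
                break
        if not improved:
            break
    return x_star

x_best = global_search(np.array([1.3, 1.3]))
print(x_best, f(x_best))
```

    Each round either finds a strictly better local minimizer or terminates, which mirrors the improve-or-stop structure of auxiliary-function schemes; the guarantees in the paper come from its necessary global optimality conditions, which the sketch above does not implement.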

    Conic Optimization Theory: Convexification Techniques and Numerical Algorithms

    Optimization is at the core of control theory and appears in several areas of this field, such as optimal control, distributed control, system identification, robust control, state estimation, model predictive control and dynamic programming. Recent advances in various topics of modern optimization have also been revamping the area of machine learning. Motivated by the crucial role of optimization theory in the design, analysis, control and operation of real-world systems, this tutorial paper offers a detailed overview of some major advances in this area, namely conic optimization and its emerging applications. First, we discuss the importance of conic optimization in different areas. Then, we explain seminal results on the design of hierarchies of convex relaxations for a wide range of nonconvex problems. Finally, we study different numerical algorithms for large-scale conic optimization problems. (Comment: 18 pages)
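
    As a minimal concrete instance of the conic programs surveyed in this tutorial, the sketch below solves a small equality-constrained SDP in standard form with cvxpy; the random data, the generation of b from a known feasible point, and the added trace-normalization constraint (to keep the feasible set compact) are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# A small SDP in standard conic form:
#   minimize   trace(C X)
#   subject to trace(A_i X) = b_i,  trace(X) = 1,  X PSD.
rng = np.random.default_rng(0)
n, m = 5, 3
sym = lambda M: (M + M.T) / 2
C = sym(rng.standard_normal((n, n)))
A = [sym(rng.standard_normal((n, n))) for _ in range(m)]

M0 = rng.standard_normal((n, n))
X0 = M0 @ M0.T
X0 /= np.trace(X0)                       # a feasible PSD point with unit trace
b = np.array([np.trace(Ai @ X0) for Ai in A])

X = cp.Variable((n, n), PSD=True)
constraints = [cp.trace(X) == 1]
constraints += [cp.trace(A[i] @ X) == b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
prob.solve()
print("optimal value:", prob.value)
```

    The same modelling pattern, with the PSD cone replaced by second-order or nonnegative cones, covers the other conic problem classes the tutorial discusses; relaxation hierarchies for nonconvex problems generate sequences of such SDPs of growing size.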

    On the Burer-Monteiro method for general semidefinite programs

    Consider a semidefinite program (SDP) involving an n×n positive semidefinite matrix X. The Burer-Monteiro method uses the substitution X = YYᵀ to obtain a nonconvex optimization problem in terms of an n×p matrix Y. Boumal et al. showed that this nonconvex method provably solves equality-constrained SDPs with a generic cost matrix when p ≳ √(2m), where m is the number of constraints. In this note we extend their result to arbitrary SDPs, possibly involving inequalities or multiple semidefinite constraints. We derive similar guarantees for a fixed cost matrix and generic constraints. We illustrate applications to matrix sensing and integer quadratic minimization. (Comment: 10 pages)
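
    A minimal sketch of the Burer-Monteiro substitution on a familiar equality-constrained special case, the max-cut SDP relaxation with diag(X) = 1, where the diagonal constraint turns into unit-norm rows of Y; the random graph, the choice of p, the row-normalization trick, and the use of a generic quasi-Newton solver are assumptions for illustration and are not taken from the note.

```python
import numpy as np
from scipy.optimize import minimize

# Max-cut SDP relaxation:  maximize trace(L X) / 4  s.t.  diag(X) = 1, X PSD,
# solved in the factorized form X = Y Y^T with unit-norm rows of Y.
rng = np.random.default_rng(1)
n, p = 12, 4
W = rng.integers(0, 2, size=(n, n))
W = np.triu(W, 1); W = W + W.T           # random symmetric adjacency matrix
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian

def objective(y_flat):
    Y = y_flat.reshape(n, p)
    Y = Y / (np.linalg.norm(Y, axis=1, keepdims=True) + 1e-12)  # diag(YY^T) = 1
    X = Y @ Y.T
    return -np.trace(L @ X) / 4.0        # negated: minimizing = maximizing SDP value

Y0 = rng.standard_normal((n, p))
res = minimize(objective, Y0.ravel())
print("Burer-Monteiro objective value:", -res.fun)
```

    The factorized problem has only n·p variables and no explicit positive semidefinite constraint, which is the computational appeal of the method; the guarantee quoted in the abstract concerns when this nonconvex formulation can nevertheless be solved to global optimality.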

    Maximum block improvement and polynomial optimization


    Partitioning Procedure for Polynomial Optimization: Application to Portfolio Decisions with Higher Order Moments

    Get PDF
    We consider the problem of finding the minimum of a real-valued multivariate polynomial function constrained in a compact set defined by polynomial inequalities and equalities. This problem, called the polynomial optimization problem (POP), is generally nonconvex and has been of growing interest to many researchers in recent years. Our goal is to tackle POPs using decomposition. Towards this goal we introduce a partitioning procedure. The problem manipulations are in line with the pattern used in Benders decomposition [1], namely relaxation preceded by projection. Stengle's and Putinar's Positivstellensätze are employed to derive the so-called feasibility and optimality constraints, respectively. We test the performance of the proposed method on a collection of benchmark problems and present the numerical results. As an application, we consider the problem of selecting an investment portfolio optimizing the mean, variance, skewness and kurtosis of the portfolio. Keywords: polynomial optimization, semidefinite relaxations, Positivstellensatz, sum of squares, Benders decomposition, portfolio optimization.
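
    To make the higher-order-moment objective concrete: with unnormalized central moments, a weighted mean-variance-skewness-kurtosis trade-off is a quartic polynomial in the portfolio weights. The sketch below builds such an objective from synthetic return samples and minimizes it with a plain local solver; the data, the trade-off coefficients and the use of SLSQP are illustrative assumptions, in contrast to the partitioning and Positivstellensatz-based scheme proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic return samples (T observations of n assets), purely illustrative.
rng = np.random.default_rng(2)
T, n = 500, 6
R = rng.standard_normal((T, n)) * 0.02 + 0.001

def objective(w, l1=1.0, l2=4.0, l3=1.0, l4=1.0):
    r = R @ w                                  # portfolio return series
    mean = r.mean()
    var = np.mean((r - mean) ** 2)             # second central moment (degree 2 in w)
    skew = np.mean((r - mean) ** 3)            # third central moment  (degree 3 in w)
    kurt = np.mean((r - mean) ** 4)            # fourth central moment (degree 4 in w)
    # maximize mean and skewness, penalize variance and kurtosis
    return -l1 * mean + l2 * var - l3 * skew + l4 * kurt

cons = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]   # fully invested
bounds = [(0.0, 1.0)] * n                                   # long-only weights
w0 = np.full(n, 1.0 / n)
res = minimize(objective, w0, method="SLSQP", bounds=bounds, constraints=cons)
print("weights:", np.round(res.x, 3))
```

    A local solver only returns a KKT point of this nonconvex quartic program; the decomposition approach in the paper targets the global optimum of such problems instead.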