6 research outputs found

    Computation of Polytopic Invariants for Polynomial Dynamical Systems using Linear Programming

    This paper deals with the computation of polytopic invariant sets for polynomial dynamical systems. An invariant set of a dynamical system is a subset of the state space such that if the state of the system belongs to the set at a given instant, it will remain in the set forever in the future. Polytopic invariants for polynomial systems can be verified by solving a set of optimization problems involving multivariate polynomials on bounded polytopes. Using the blossoming principle together with properties of multi-affine functions on rectangles and Lagrangian duality, we show that certified lower bounds of the optimal values of such optimization problems can be computed effectively using linear programs. This allows us to propose a method based on linear programming for verifying polytopic invariant sets of polynomial dynamical systems. Additionally, using sensitivity analysis of linear programs, one can iteratively compute a polytopic invariant set. Finally, we show, using a set of examples borrowed from biological applications, that our approach is effective in practice.
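    One ingredient the abstract relies on, that a multi-affine function attains its extreme values over an axis-aligned rectangle at the rectangle's vertices, can be illustrated directly. The sketch below is not the paper's LP-based procedure; the function, the box bounds, and the helper name are illustrative assumptions.

```python
from itertools import product

def multiaffine_lower_bound(f, lower, upper):
    """Certified lower bound of a multi-affine function over an axis-aligned box.

    A multi-affine function (affine in each variable separately) attains its
    minimum over a box at one of the box's 2^n vertices, so enumerating the
    vertices suffices.
    """
    vertices = product(*zip(lower, upper))  # all 2^n corners of the box
    return min(f(v) for v in vertices)

# Illustrative multi-affine function on the box [0, 1] x [1, 2]:
# f(x, y) = 2*x*y - 3*x + y  (degree at most 1 in each variable separately)
f = lambda v: 2 * v[0] * v[1] - 3 * v[0] + v[1]
print(multiaffine_lower_bound(f, lower=[0.0, 1.0], upper=[1.0, 2.0]))  # 0.0, attained at (1, 1)
```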

    A New Global Optimization Algorithm for Solving a Class of Nonconvex Programming Problems

    A new two-part parametric linearization technique is proposed for globally solving a class of nonconvex programming problems (NPP). First, the two-part parametric linearization method is used to construct underestimators of the objective and constraint functions, by combining a transformation with a parametric linear upper bounding function (LUBF) for the natural logarithm function and a linear lower bounding function (LLBF) for the natural exponential function. Then a sequence of lower-bounding linear programming relaxations of the initial nonconvex programming problem is derived and embedded in a branch-and-bound algorithm. The proposed algorithm converges to a global optimal solution by successively solving this series of linear programming problems. Finally, some examples are given to illustrate the feasibility of the presented algorithm.
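    The linear bounding functions mentioned in the abstract can be illustrated with a standard convexity argument: on an interval, a tangent line lies below a convex function such as exp and above a concave function such as ln, while the secant line does the opposite. The sketch below is a generic illustration of such bounds, not the paper's exact construction; the interval, the tangent point, and the tolerance are assumptions.

```python
import numpy as np

def tangent(fun, dfun, x0):
    """Tangent line of fun at x0: a lower bound if fun is convex, an upper bound if concave."""
    return lambda x: fun(x0) + dfun(x0) * (x - x0)

def secant(fun, a, b):
    """Secant line of fun over [a, b]: an upper bound if fun is convex, a lower bound if concave."""
    return lambda x: fun(a) + (fun(b) - fun(a)) / (b - a) * (x - a)

a, b = 0.5, 2.0
xs = np.linspace(a, b, 201)
tol = 1e-9  # tolerance for floating-point rounding at points of tangency

# exp is convex: a tangent is a linear lower bounding function, the secant an upper one.
exp_lb, exp_ub = tangent(np.exp, np.exp, (a + b) / 2), secant(np.exp, a, b)
assert np.all(exp_lb(xs) <= np.exp(xs) + tol) and np.all(np.exp(xs) <= exp_ub(xs) + tol)

# log is concave: the secant is a linear lower bounding function, a tangent an upper one.
log_lb, log_ub = secant(np.log, a, b), tangent(np.log, lambda x: 1.0 / x, (a + b) / 2)
assert np.all(log_lb(xs) <= np.log(xs) + tol) and np.all(np.log(xs) <= log_ub(xs) + tol)

print("linear bounds for exp and log verified on [%.1f, %.1f]" % (a, b))
```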

    Global optimality conditions and optimization methods for polynomial programming problems and their applications

    The polynomial programming problem, which has a polynomial objective function and either no constraints or polynomial constraints, occurs frequently in engineering design, investment science, control theory, network distribution, signal processing and location-allocation contexts. Moreover, the polynomial programming problem is known to be NP-hard. The polynomial programming problem has attracted a lot of attention, and it includes quadratic, cubic, homogeneous and normal quartic programming problems as special cases. Existing methods for solving polynomial programming problems include algebraic methods and various convex relaxation methods. Among these methods, semidefinite programming (SDP) and sum of squares (SOS) relaxations are especially popular. Theoretically, SDP and SOS relaxation methods are very powerful and successful in solving the general polynomial programming problem with a compact feasible region. However, their solvability in practice depends on the size or the degree of the polynomial programming problem and the required accuracy; hence, solving large scale SDP problems still remains a computational challenge. It is well known that traditional local optimization methods are designed based on necessary local optimality conditions, i.e., Karush-Kuhn-Tucker (KKT) conditions. Motivated by this, some researchers proposed a necessary global optimality condition for a quadratic programming problem and designed a new local optimization method according to it. In this thesis, we apply this idea to cubic and quartic programming problems, and further to general unconstrained and constrained polynomial programming problems. For these polynomial programming problems, we investigate necessary global optimality conditions and design new local optimization methods according to these conditions. These necessary global optimality conditions are generally stronger than the KKT conditions; hence, the new local minimizers obtained by the new local optimization methods may improve on some KKT points. Our ultimate aim is to design global optimization methods for these polynomial programming problems. We notice that the filled function method is one of the well-known and practical auxiliary function methods used to reach a global minimizer. In this thesis, we design global optimization methods by combining the newly proposed local optimization methods with some auxiliary functions. The numerical examples illustrate the efficiency and stability of the optimization methods. Finally, we discuss some applications to sensor network localization problems and systems of polynomial equations. It is worth mentioning that we also apply the ideas and results for polynomial programming problems to nonlinear programming problems (NLP): we provide an optimality condition, design new local optimization methods according to it, and design global optimization methods for (NLP) by combining the new local optimization methods with an auxiliary function. To test the performance of the global optimization methods, we compare them with two other heuristic methods; the results demonstrate that our methods outperform them.
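    The filled-function idea mentioned in the abstract alternates a local minimizer with an auxiliary function that "fills" the basin of the current local minimizer so that a descent method can escape it. The loop below is only a generic sketch of that idea, not the thesis's optimality conditions or its specific auxiliary functions; the objective, the filled-function form, and the parameters r, rho, step and n_rounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def filled_function_search(f, x0, r=5.0, rho=1.0, step=0.5, n_rounds=5, seed=0):
    """Generic filled-function style global search.

    Alternates: (1) local minimization of f, (2) minimization of an auxiliary
    "filled" function centred at the current local minimizer, which pushes the
    iterate out of that minimizer's basin, (3) a restart of the local minimizer
    from the escape point. The best local minimum found is kept.
    """
    rng = np.random.default_rng(seed)
    best = minimize(f, x0)
    for _ in range(n_rounds):
        x_star = best.x

        # Auxiliary function of the classical Gaussian-weighted 1/(r + f) form;
        # r must be large enough that r + f(x) stays positive.
        filled = lambda x: np.exp(-np.sum((x - x_star) ** 2) / rho ** 2) / (r + f(x))

        # Escape the current basin by minimizing the filled function from a
        # perturbed copy of the current minimizer, then re-minimize f.
        escape = minimize(filled, x_star + step * rng.standard_normal(x_star.shape))
        candidate = minimize(f, escape.x)
        if candidate.fun < best.fun:
            best = candidate
    return best

# Illustrative multimodal polynomial objective (six-hump camel function).
camel = lambda x: ((4 - 2.1 * x[0] ** 2 + x[0] ** 4 / 3) * x[0] ** 2
                   + x[0] * x[1] + (-4 + 4 * x[1] ** 2) * x[1] ** 2)
result = filled_function_search(camel, x0=np.array([2.0, 1.0]))
print(result.x, result.fun)
```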