
    Arbitrarily tight αBB underestimators of general non-linear functions over sub-optimal domains

    In this paper we explore the construction of arbitrarily tight αBB relaxations of general non-linear, non-convex C2 functions. We illustrate the theoretical challenges of building such relaxations by deriving conditions under which an αBB underestimator can provide exact bounds. We subsequently propose a methodology to build αBB underestimators which may be arbitrarily tight (i.e., the maximum separation distance between the original function and its underestimator is arbitrarily close to 0) in some domains that do not include the global solution (defined in the text as “sub-optimal”), assuming exact eigenvalue calculations are possible. This is achieved using a transformation of the original function into a μ-subenergy function and the derivation of αBB underestimators for the new function. We prove that this transformation results in a number of desirable bounding properties in certain domains. These theoretical results are validated in computational test cases where approximations of the tightest possible μ-subenergy underestimators, derived using sampling, are compared to similarly derived approximations of the tightest possible classical αBB underestimators. Our tests show that μ-subenergy underestimators produce much tighter bounds and succeed in fathoming nodes which are impossible to fathom using classical αBB.
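    For reference, the classical αBB underestimator that the μ-subenergy bounds are compared against has a simple closed form: L(x) = f(x) + α Σi (xiL − xi)(xiU − xi), with α ≥ max(0, −½ λmin(∇²f)) over the box. Below is a minimal Python sketch, with λmin approximated by sampling as in the paper's computational tests; the function names are illustrative, and a rigorous α would require interval eigenvalue bounds rather than sampling.

```python
import numpy as np

def alpha_bb_underestimator(f, hess, xL, xU, n_samples=1000, seed=0):
    """Classical aBB underestimator L(x) = f(x) + alpha * sum((xL-x)*(xU-x)).

    alpha >= max(0, -0.5 * lambda_min(Hessian)) over the box guarantees
    convexity of L; here lambda_min is approximated by sampling, so alpha
    is only an estimate (a rigorous alpha needs interval eigenvalue bounds).
    """
    rng = np.random.default_rng(seed)
    lam_min = np.inf
    for _ in range(n_samples):
        x = rng.uniform(xL, xU)
        lam_min = min(lam_min, np.linalg.eigvalsh(hess(x))[0])
    alpha = max(0.0, -0.5 * lam_min)
    # (xL - x)*(xU - x) <= 0 inside the box, so L(x) <= f(x) there.
    return lambda x: f(x) + alpha * np.dot(xL - x, xU - x)

# Example: a nonconvex function on [-1, 1]^2.
f = lambda x: np.sin(x[0]) * np.cos(x[1])
hess = lambda x: np.array(
    [[-np.sin(x[0]) * np.cos(x[1]), -np.cos(x[0]) * np.sin(x[1])],
     [-np.cos(x[0]) * np.sin(x[1]), -np.sin(x[0]) * np.cos(x[1])]])
L = alpha_bb_underestimator(f, hess, np.array([-1., -1.]), np.array([1., 1.]))
```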

    Nonconvex and mixed integer multiobjective optimization with an application to decision uncertainty

    Multiobjective optimization problems commonly arise in fields such as economics and engineering. In general, when dealing with several conflicting objective functions, there are infinitely many optimal solutions, which usually cannot be determined analytically. This thesis presents new branch-and-bound-based approaches for computing the globally optimal solutions of multiobjective optimization problems of various types. New algorithms are proposed for smooth multiobjective nonconvex optimization problems with convex constraints, as well as for multiobjective mixed integer convex optimization problems. Both algorithms guarantee a prescribed accuracy of the computed solutions and are among the first deterministic algorithms within their class of optimization problems. Additionally, a new approach to compute a covering of the optimal solution set of multiobjective optimization problems with decision uncertainty is presented. All three new algorithms are tested numerically and the results are evaluated in this thesis. The branch-and-bound-based algorithms work on box partitions and use selection rules, discarding tests and termination criteria. The discarding tests are the most important ingredient, as they decide whether a box can be discarded because it cannot contain any optimal solution. We present discarding tests which combine techniques from global single-objective optimization with outer approximation techniques from multiobjective convex optimization and with the concept of local upper bounds from multiobjective combinatorial optimization. The new discarding tests construct lower bounds for subsets of the image set, which are then compared numerically with known upper bounds, as sketched below.
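    A minimal sketch of that discarding logic, assuming componentwise lower bounds of a box's image (e.g. from interval arithmetic or a convex relaxation) and a list of local upper bounds; all names are illustrative, and the thesis' actual tests are considerably more refined:

```python
import numpy as np

def can_discard(lb, local_upper_bounds):
    """lb: componentwise lower bounds of the objective vector over a box.
    The box can be discarded if no local upper bound u satisfies lb < u
    componentwise: then no point of the box can improve the current
    approximation of the nondominated set."""
    return all(not np.all(lb < u) for u in local_upper_bounds)

# Two objectives, one box, two local upper bounds:
lb = np.array([1.0, 2.0])
ubs = [np.array([0.5, 3.0]), np.array([2.0, 1.5])]
print(can_discard(lb, ubs))  # True -> box cannot contain optimal points
```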

    Contributions to the moment-SOS approach in global polynomial optimization

    Polynomial Optimization is concerned with optimization problems of the form (P): f* = min { f(x) : x in K }, where K is a basic semi-algebraic set in Rn, i.e. K = { x in Rn : gj(x) <= 0 }, and f is a real polynomial in the n variables x = (x1, x2, ..., xn). In this thesis we are interested in problems (P) where symmetries and/or structured sparsity are not easy to detect or to exploit, and where only a few (or even no) semidefinite relaxations of the moment-SOS approach can be implemented. The issue we investigate is: how can the moment-SOS methodology still be used to help solve such a problem (P)? We provide two applications of the moment-SOS approach to help solve (P) in two different contexts.

    * In a first contribution we consider MINLP problems on a box B = [xL, xU] of Rn and propose a moment-SOS approach to construct polynomial convex underestimators for the objective function f (if nonconvex) and for -gj whenever, in the constraint gj(x) <= 0, the polynomial gj is not concave. We work in the context where one wishes to find a convex underestimator of a nonconvex polynomial f of a few variables on a box B of Rn. The novelty with respect to previous works on this topic is that we compute a polynomial convex underestimator p of f that minimizes an important tightness criterion, namely the L1 norm of (f - h) on B, over all convex polynomials h of fixed degree d. In previous works for computing a convex underestimator L of f, this tightness criterion is not taken into account directly. It turns out that the moment-SOS approach is well suited to compute a polynomial convex underestimator p that minimizes the tightness criterion, and numerical experiments on a sample of non-trivial examples show that p outperforms L not only with respect to the tightness score but also in terms of the resulting lower bounds obtained by minimizing p and L respectively on B. Similar improvements also occur when we use the moment-SOS underestimator instead of the αBB one in refinements of the αBB method.

    * In a second contribution we propose an algorithm that also uses an optimal solution of a semidefinite relaxation in the moment-SOS hierarchy (in fact a slight modification of it) to provide a feasible solution for the initial optimization problem, but with no rounding procedure. In the present context, we treat the first variable x1 of x = (x1, x2, ..., xn) as a parameter in some bounded interval Y of R. Notice that f* = min { J(y) : y in Y }, where J is the value function J(y) := inf { f(x) : x in K ; x1 = y }. That is, one has reduced the original n-dimensional optimization problem (P) to an equivalent one-dimensional optimization problem on an interval. Of course, determining the optimal value function J is even more complicated than solving (P), as one has to determine a function (instead of a point in Rn), an infinite-dimensional problem. The idea is instead to approximate J(y) on Y by a univariate polynomial p(y) of degree d; fortunately, computing such a univariate polynomial is possible via solving a semidefinite relaxation associated with the parametric optimization problem. The degree d of p(y) is related to the size of this semidefinite relaxation: the higher the degree d, the better the approximation of J(y) by p(y), and in fact one may show that p(y) converges to J(y) in a strong sense on Y as d increases. But the resulting semidefinite relaxation also becomes harder (or impossible) to solve as d increases, and so in practice d is fixed at a small value. Once the univariate polynomial p(y) has been determined, one computes a minimizer x1* of p(y) on Y, a convex optimization problem that can be solved efficiently. The process is iterated to compute x2* in a similar manner, and so on, until a point x* in Rn has been computed; a schematic version of this variable-fixing loop is sketched below. Finally, as x* is not feasible in general, we use it as a starting point for a local optimization procedure to find a final feasible point x in K. When K is convex, the following variant is implemented: after having computed x1* as indicated, x2* is computed with x1 fixed at the value x1*, then x3* is computed with x1 and x2 fixed at the values x1* and x2* respectively, etc., so that the resulting point x* is feasible, i.e., x* in K. The same variant applies to 0/1 programs for which feasibility is easy to detect, as e.g. for MAXCUT, k-CLUSTER or 0/1-KNAPSACK problems.
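    The variable-fixing loop can be sketched as follows. For readability, the semidefinite relaxation that approximates the value function J(y) is replaced here by a least-squares polynomial fit to sampled local minimizations, so this is a heuristic illustration of the control flow, not the moment-SOS algorithm itself; all names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fix_variables(f, bounds, d=4, n_samples=30):
    """Sequentially fix x1, x2, ... by minimizing a degree-d univariate
    polynomial approximation of the value function
        J(y) = min { f(x) : x in box, x_i = y, x_1..x_{i-1} fixed }.
    J is approximated here by a least-squares fit to sampled local
    minimizations -- a stand-in for the semidefinite relaxation used in
    the moment-SOS approach."""
    fixed = []
    for i, (lo, hi) in enumerate(bounds):
        ys = np.linspace(lo, hi, n_samples)
        vals = []
        for y in ys:
            rest = bounds[i + 1:]
            if rest:
                x0 = np.array([(a + b) / 2 for a, b in rest])
                g = lambda z, y=y: f(np.concatenate([fixed, [y], z]))
                vals.append(minimize(g, x0, bounds=rest).fun)
            else:
                vals.append(f(np.array(fixed + [y])))
        p = np.polynomial.Polynomial.fit(ys, vals, d)
        # Minimize p on [lo, hi]: check interior stationary points and endpoints.
        crit = [r.real for r in p.deriv().roots()
                if abs(r.imag) < 1e-9 and lo <= r.real <= hi]
        fixed.append(min(crit + [lo, hi], key=p))
    return np.array(fixed)

# Hypothetical usage on a Rosenbrock-like function over a box:
f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
print(fix_variables(f, [(-2.0, 2.0), (-2.0, 2.0)]))
```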

    Certification of Bounds of Non-linear Functions: the Templates Method

    The aim of this work is to certify lower bounds for real-valued multivariate functions, defined by semialgebraic or transcendental expressions. The certificate must ultimately be formally provable in a proof system such as Coq. The application range for such a tool is widespread; for instance, Hales' proof of the Kepler conjecture yields thousands of inequalities. We introduce an approximation algorithm which combines ideas from the max-plus basis method (in optimal control) and from the linear templates method developed by Manna et al. (in static analysis). This algorithm consists of bounding some of the constituents of the function by suprema of quadratic forms with a well-chosen curvature. This leads to semialgebraic optimization problems, solved by sum-of-squares relaxations. Templates limit the blow-up of these relaxations at the price of coarsening the approximation. We illustrate the efficiency of our framework with various examples from the literature and discuss the interfacing with Coq.
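    The core bounding step can be illustrated on a single transcendental constituent: on an interval, exp is bounded below by quadratics tangent at chosen points, with curvature taken from a lower bound on the second derivative, and the max-plus approximation is the pointwise maximum of these quadratics. The toy check below is illustrative only and is unrelated to the certified Coq pipeline:

```python
import numpy as np

def quad_lower_exp(a, lo):
    """Quadratic lower bound of exp on [lo, hi], tangent at a:
    exp(x) >= exp(a) + exp(a)*(x - a) + 0.5*exp(lo)*(x - a)**2,
    valid since exp'' = exp >= exp(lo) on the interval (Taylor remainder)."""
    return lambda x: np.exp(a) * (1 + (x - a)) + 0.5 * np.exp(lo) * (x - a) ** 2

def maxplus_lower(points, lo):
    """Max-plus approximation: pointwise maximum of quadratic lower bounds."""
    quads = [quad_lower_exp(a, lo) for a in points]
    return lambda x: np.max([q(x) for q in quads], axis=0)

# Sanity check on [0, 2] with three tangent points.
xs = np.linspace(0.0, 2.0, 1001)
L = maxplus_lower([0.5, 1.0, 1.5], 0.0)
assert np.all(L(xs) <= np.exp(xs) + 1e-12)
```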

    Chebyshev model arithmetic for factorable functions

    This article presents an arithmetic for the computation of Chebyshev models for factorable functions and an analysis of their convergence properties. Similar to Taylor models, Chebyshev models consist of a pair: a multivariate polynomial approximating the factorable function, and an interval remainder term bounding the actual gap with this polynomial approximant. Propagation rules and local convergence bounds are established for the addition, multiplication and composition operations with Chebyshev models. The global convergence of this arithmetic as the polynomial expansion order increases is also discussed. A generic implementation of Chebyshev model arithmetic is available in the library MC++. It is shown through several numerical case studies that Chebyshev models provide tighter bounds than their Taylor model counterparts, but that this comes at the price of an extra computational burden.
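    A simplified univariate illustration of the idea: a Chebyshev interpolant paired with a remainder bound, here merely estimated by dense sampling. A genuine Chebyshev model arithmetic, as in MC++, instead propagates a rigorous interval remainder through every factorable operation; the names below are illustrative.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_model(f, lo, hi, order):
    """Return (coeffs, remainder_bound) for f on [lo, hi]: a degree-`order`
    Chebyshev interpolant plus an estimated bound on |f - polynomial|.
    The bound is estimated by sampling; a true Chebyshev model would
    compute a rigorous interval remainder instead."""
    # Chebyshev points in [-1, 1], mapped to [lo, hi].
    k = np.arange(order + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * (order + 1)))
    x = 0.5 * (hi + lo) + 0.5 * (hi - lo) * nodes
    coeffs = C.chebfit(nodes, f(x), order)  # fit in normalized variable t
    xs = np.linspace(lo, hi, 2001)
    ts = (2 * xs - lo - hi) / (hi - lo)
    rem = np.max(np.abs(f(xs) - C.chebval(ts, coeffs)))
    return coeffs, rem

coeffs, rem = chebyshev_model(np.exp, 0.0, 1.0, 4)
print(rem)  # small: Chebyshev interpolation converges fast for smooth f
```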

    Certification of inequalities involving transcendental functions: combining SDP and max-plus approximation

    We consider the problem of certifying an inequality of the form f(x) ≥ 0 for all x in K, where f is a multivariate transcendental function and K is a compact semialgebraic set. We introduce a certification method combining semialgebraic optimization and max-plus approximation. We assume that f is given by a syntax tree, the constituents of which involve semialgebraic operations as well as some transcendental functions like cos, sin, exp, etc. We bound some of these constituents by suprema or infima of quadratic forms (the max-plus approximation method, initially introduced in optimal control), leading to semialgebraic optimization problems which we solve by semidefinite relaxations. The max-plus approximation is iteratively refined and combined with branch and bound techniques to reduce the relaxation gap. Illustrative examples of application of this algorithm are provided, explaining how we solved tight inequalities arising from the Flyspeck project (one of the main purposes of which is to certify numerical inequalities used in the proof of the Kepler conjecture by Thomas Hales).
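    The iterative refinement step can be sketched in isolation: starting from one quadratic lower estimator, new tangent points are added where the current gap is largest, tightening the max-plus bound at each pass. This is a toy illustration (the actual method couples the refinement with semidefinite relaxations and branch and bound):

```python
import numpy as np

# Refine a max-plus lower approximation of sin on [0, pi] by greedily
# adding the tangent point where the current gap is largest. Each piece
# uses sin(x) >= sin(a) + cos(a)*(x - a) - 0.5*(x - a)**2, valid since
# sin'' = -sin >= -1 everywhere.
xs = np.linspace(0.0, np.pi, 2001)

def piece(a):
    return np.sin(a) + np.cos(a) * (xs - a) - 0.5 * (xs - a) ** 2

points = [np.pi / 2]                      # initial tangent point
for _ in range(4):                        # refinement iterations
    lower = np.max([piece(a) for a in points], axis=0)
    gap = np.sin(xs) - lower
    print(f"max gap: {gap.max():.4f}")    # shrinks at each iteration
    points.append(xs[np.argmax(gap)])     # refine where the bound is worst
```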

    Optimization Methods and Algorithms for Classes of Black-Box and Grey-Box Problems

    There are many optimization problems in physics, chemistry, finance, computer science, engineering and operations research for which the analytical expressions of the objective and/or the constraints are unavailable. These are black-box problems, where derivative information is often unavailable or too expensive to approximate numerically. When derivative information is absent, it becomes challenging to optimize and to guarantee optimality of the solution. The objective of this Ph.D. work is to propose methods and algorithms to address some of the challenges of black-box optimization (BBO). A top-down approach is taken: an easier class of black-box problems is addressed first, and the difficulty and complexity of the problems is then gradually increased. In the first part of the dissertation, a class of grey-box problems is considered for which the closed form of the objective and/or constraints is unknown, but it is possible to obtain a global upper bound on the diagonal Hessian elements. This allows the construction of an edge-concave underestimator with a vertex polyhedral solution. This lower bounding technique is implemented within a branch-and-bound framework with guaranteed convergence to global optimality, and is applied to the optimization of problems with an embedded system of ordinary differential equations (ODEs). Time-dependent bounds on the state variables and on the diagonal elements of the Hessian are computed by solving an auxiliary set of ODEs derived using differential inequalities. In the second part of the dissertation, general box-constrained black-box problems are addressed for which only simulations can be performed. A novel optimization method, UNIPOPT (Univariate Projection-based Optimization), based on projection onto a univariate space, is proposed. A special function is identified in this space that also contains the global minima of the original function. Computational experiments suggest that UNIPOPT often has better space exploration features than other approaches. The third part of the dissertation addresses general black-box problems with constraints of both known and unknown algebraic form. An efficient two-phase algorithm based on a trust-region framework is proposed, targeting in particular problems with high function evaluation cost. The performance of the approach is illustrated through computational experiments which evaluate its ability to reduce a merit function and find the optimum.
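    A minimal sketch of the edge-concave lower bounding idea from the first part, assuming an upper bound on each diagonal Hessian element over the box is available: subtracting a separable quadratic with large enough coefficients makes the function concave along every coordinate direction, so its minimum over the box is attained at a vertex and evaluating at the vertices yields a valid lower bound. The names and the particular (x − xL)² shift are illustrative assumptions, not necessarily the dissertation's exact construction.

```python
import numpy as np
from itertools import product

def edge_concave_lower_bound(f, diag_hess_ub, xL, xU):
    """Lower bound on min f over a box via an edge-concave underestimator.

    With theta_i >= 0.5 * (upper bound on the i-th diagonal Hessian element
    over the box), phi(x) = f(x) - sum_i theta_i * (x_i - xL_i)**2 is an
    underestimator of f that is concave along each coordinate, so its
    minimum over the box is attained at a vertex."""
    theta = np.maximum(0.0, 0.5 * np.asarray(diag_hess_ub))
    phi = lambda x: f(x) - np.dot(theta, (x - xL) ** 2)
    vertices = product(*zip(xL, xU))  # all 2^n corners of the box
    return min(phi(np.array(v)) for v in vertices)

# Example: f(x) = x1*x2 on [-1, 1]^2; diagonal Hessian elements are 0,
# so f is already edge-concave and the bound is exact here.
f = lambda x: x[0] * x[1]
print(edge_concave_lower_bound(f, [0.0, 0.0],
                               np.array([-1., -1.]), np.array([1., 1.])))
# -> -1.0
```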