
    Disjunctive Aspects in Generalized Semi-infinite Programming

    In this thesis the close relationship between generalized semi-infinite problems (GSIP) and disjunctive problems (DP) is considered. We start with the description of some optimization problems from the timber industry and illustrate how GSIPs and DPs arise naturally in that field. Three different applications are reviewed. Next, theory and solution methods for both types of problems are examined. We describe a new possibility to model disjunctive optimization problems as generalized semi-infinite programs. Applying existing lower level reformulations to the obtained semi-infinite program, we derive conjunctive nonlinear problems without any logical expressions, which can be solved locally by standard nonlinear solvers. In addition to this local solution procedure, we propose a new branch-and-bound framework for the global optimization of disjunctive programs. In contrast to the widely used reformulation as a mixed-integer program, we compute the lower bounds and evaluate the logical expression in one step. Thus, we reduce the size of the problem and work exclusively with continuous variables, which is computationally advantageous. In contrast to existing methods in disjunctive programming, none of our approaches expects any special formulation of the underlying logical expression. Where applicable, under slightly stronger assumptions, even the use of negations and implications is allowed. Our preliminary numerical results show that both procedures, the reformulation technique as well as the branch-and-bound algorithm, are reasonable methods to solve disjunctive optimization problems locally and globally, respectively. In the last part of this thesis we propose a new branch-and-bound algorithm for the global minimization of box-constrained generalized semi-infinite programs. It treats the inherent disjunctive structure of these problems by tailored lower bounding procedures, of which three different possibilities are examined. The first relies on standard lower bounding procedures from conjunctive global optimization. The second and the third alternative are based on linearization techniques by which we derive linear disjunctive relaxations of the considered sub-problems. Solving these either by mixed-integer linear reformulations or, alternatively, by disjunctive linear programming techniques yields two additional possibilities. Our numerical results on standard test problems with these three lower bounding procedures show the merits of our approach.
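    As a schematic illustration of the modeling idea (a sketch under simplified assumptions, not the thesis' own construction), a generalized semi-infinite program has the form
    \[
    \min_{x}\; f(x) \quad \text{s.t.} \quad g(x,y) \le 0 \;\;\forall\, y \in Y(x),
    \qquad Y(x) = \{\, y : v(x,y) \le 0 \,\},
    \]
    and a two-term disjunctive constraint can be encoded by a variable index set, for instance
    \[
    \big[\, c_2(x) \le 0 \;\;\forall\, y \in Y(x) \,\big]
    \;\Longleftrightarrow\;
    c_1(x) < 0 \;\vee\; c_2(x) \le 0,
    \qquad
    Y(x) := \{\, y \in [0,1] : y \le c_1(x) \,\},
    \]
    which, up to the strictness of the first inequality, captures the disjunction "c_1(x) <= 0 or c_2(x) <= 0". The example also exhibits the disjunctive structure inherent in GSIP feasibility: a point x is feasible if either Y(x) is empty or g(x,y) <= 0 holds for all y in Y(x), which is why such feasible sets need not be closed.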

    Nonlinear Integer Programming

    Research efforts of the past fifty years have led to the development of linear integer programming as a mature discipline of mathematical optimization. Such a level of maturity has not been reached when one considers nonlinear systems subject to integrality requirements for the variables. This chapter is dedicated to this topic. The primary goal is a study of a simple version of general nonlinear integer problems, where all constraints are still linear. Our focus is on the computational complexity of the problem, which varies significantly with the type of nonlinear objective function in combination with the underlying combinatorial structure. Numerous boundary cases of complexity emerge, which sometimes surprisingly lead even to polynomial time algorithms. We also cover recent successful approaches for more general classes of problems. Though no positive theoretical efficiency results are available, nor are they likely to ever be available, these seem to be the currently most successful and interesting approaches for solving practical problems. It is our belief that the study of algorithms motivated by theoretical considerations and those motivated by our desire to solve practical instances should and do inform one another. So it is with this viewpoint that we present the subject, and it is in this direction that we hope to spark further research.
    Comment: 57 pages. To appear in: M. Jünger, T. Liebling, D. Naddef, G. Nemhauser, W. Pulleyblank, G. Reinelt, G. Rinaldi, and L. Wolsey (eds.), 50 Years of Integer Programming 1958-2008: The Early Years and State-of-the-Art Surveys, Springer-Verlag, 2009, ISBN 354068274.
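    In the generic notation of this problem class (symbols chosen here for illustration, not taken from the chapter), the "simple version" with linear constraints and a nonlinear objective reads
    \[
    \min\; f(x) \quad \text{s.t.} \quad Ax \le b, \quad x \in \mathbb{Z}^n,
    \]
    with integer data A and b and a nonlinear objective f; the computational complexity then depends on the class of f (e.g. polynomial, convex, or concave objectives) in combination with the underlying combinatorial structure of the feasible set.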

    A coarse solution of generalized semi-infinite optimization problems via robust analysis of marginal functions and global optimization

    This work is mainly concerned with theoretical investigations for determining coarse starting solutions of generalized semi-infinite optimization problems (GSIP) by methods of global optimization, using fairly discontinuous penalty functions whose discontinuities are characterized through the notions of robust analysis and standard measure theory. In contrast to standard semi-infinite optimization problems (SIP), the index set describing the constraints of a GSIP is not only, as in SIP, typically uncountable, but in addition depends on the problem variables; i.e. the index set is a set-valued map. Such problems have a very complex structure; at the same time, there are large classes of scientific, engineering and economic problems that can be modelled as GSIPs. In general, the feasible set of a GSIP is neither closed nor connected. Closedness of the feasible set is guaranteed by lower semi-continuity of the index map, an assumption made by several authors in order to derive numerical methods for GSIP. This work attempts, for the first time, to do without lower semi-continuity of the index map. Under these weaker assumptions the feasible set need not be closed and the GSIP may have no solution; nevertheless, one may determine a generalized minimizer or a minimizing sequence of the GSIP. For this purpose two penalty approaches are proposed. In the first, mainly conceptual, approach the feasible set of the GSIP is described by an auxiliary parametric (standard) semi-infinite approximation problem (PSIP); the marginal function of this parametric problem acts as an exact, in general discontinuous, penalty function for the feasible set of the GSIP. In the second approach, based on discretization, two penalty functions are introduced: one uses the semi-infinite constraint directly as a "max" penalty term, the other is obtained from the marginal function of the lower level problem of the GSIP. The relationships of these penalty problems to the GSIP are investigated through minimizing sequences. In both approaches one has to deal with discontinuous optimization problems. Their numerical treatment can be carried out with the Integral Global Optimization Method (IGOM), in particular with the software routine BARLO (of Hichert), provided certain robustness properties of the penalty objectives can be verified. It is shown that the resulting penalty functions are upper robust (in general not continuous), so that such stochastic global optimization methods are in principle applicable. Hence, a major contribution of this work is the study of robustness properties of marginal value functions and set-valued maps with given structures, extending the theory of robust analysis of Chew and Zheng. At the same time, an effort is made to establish robustness counterparts of the statements known for semi-continuous functions and set-valued maps, which succeeds with few exceptions. Finally, numerical experiments with the penalty-discretization approach show that the proposed approaches are viable in principle.
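    As a rough sketch of the "max"-type penalty idea (illustrative only; the precise penalty functions and assumptions are those of the thesis), write the semi-infinite constraint of the GSIP as φ(x) ≤ 0 with the marginal (lower level optimal value) function
    \[
    \varphi(x) \;=\; \sup_{y \in Y(x)} g(x,y) \qquad (\varphi(x) := -\infty \ \text{if } Y(x) = \emptyset),
    \]
    and penalize infeasibility of the objective f by
    \[
    P_{\rho}(x) \;=\; f(x) \;+\; \rho\,\max\{\,0,\ \varphi(x)\,\}, \qquad \rho > 0.
    \]
    P_ρ coincides with f on the feasible set, but since the index map Y need not be lower semi-continuous, φ and hence P_ρ may be discontinuous; it is for such discontinuous but (upper) robust objectives that integral global optimization methods like IGOM remain applicable in principle.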

    Equilibrium modeling and solution approaches inspired by nonconvex bilevel programming

    This paper introduces the concept of optimization equilibrium as an equivalently versatile definition of a generalized Nash equilibrium for multi-agent non-cooperative games. Through this modified definition of equilibrium, we draw precise connections between generalized Nash equilibria, feasibility for bilevel programming, the Nikaido-Isoda function, and classic arguments involving Lagrangian duality and social welfare maximization. Significantly, this is all in a general setting without the assumption of convexity. Along the way, we introduce the idea of minimum disequilibrium as a solution concept that reduces to traditional equilibrium when equilibrium exists. The connections with bilevel programming and related semi-infinite programming permit us to adapt global optimization methods for those classes of problems, such as constraint generation or cutting plane methods, to the problem of finding a minimum disequilibrium solution. We show that this method works, both theoretically and with a numerical example, even when the agents are modeled by mixed-integer programs.
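    For reference, the Nikaido-Isoda function mentioned above has the standard form (stated here in the payoff-maximization convention; the paper's own conventions may differ)
    \[
    \Psi(x, y) \;=\; \sum_{i=1}^{N} \big[\, u_i(y_i, x_{-i}) - u_i(x_i, x_{-i}) \,\big],
    \]
    where u_i is the objective of agent i and y_i ranges over the strategies feasible for agent i given the rivals' decisions x_{-i}. A feasible point x* is a generalized Nash equilibrium exactly when no agent can unilaterally improve, i.e. when sup_y Ψ(x*, y) ≤ 0; minimizing this worst-case improvement over x gives one natural reading of the minimum-disequilibrium idea, which reduces to the traditional equilibrium concept whenever an equilibrium exists.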