70 research outputs found
An induction theorem and nonlinear regularity models
A general nonlinear regularity model for a set-valued mapping F : X ⇉ Y, where X and Y are metric spaces, is considered
using special iteration procedures, going back to Banach, Schauder, Lusternik
and Graves. Namely, we revise the induction theorem from Khanh, J. Math. Anal.
Appl., 118 (1986) and employ it to obtain basic estimates for studying
regularity/openness properties. We also show that it can serve as a
substitution of the Ekeland variational principle when establishing other
regularity criteria. Then, we apply the induction theorem and the mentioned
estimates to establish criteria for both global and local versions of
regularity/openness properties for our model and demonstrate how the
definitions and criteria translate into the conventional setting of a
set-valued mapping between metric spaces.
Operator Splitting Methods for Convex and Nonconvex Optimization
This dissertation focuses on a family of optimization methods called operator splitting methods. They solve complicated problems by decomposing the problem structure into simpler pieces and making progress on each piece separately. Over the past two decades, there has been a resurgence of interest in these methods as the demand for solving structured large-scale problems has grown. One of the major challenges for splitting methods is their sensitivity to ill-conditioning, which often makes them struggle to achieve a high order of accuracy. Furthermore, their classical analyses are restricted to the nice settings where solutions exist and everything is convex. Much less is known when either of these assumptions breaks down.
This work aims to address the issues above. Specifically, we propose a novel acceleration technique called inexact preconditioning, which exploits second-order information at relatively low computational cost. We also show that certain splitting methods still work on problems without solutions, in the sense that their iterates provide information on what goes wrong and how to fix it. Finally, for nonconvex problems with saddle points, we show that, almost surely, splitting methods will only converge to local minima under certain assumptions.
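As a concrete illustration of the splitting idea, here is a minimal sketch of forward-backward splitting (proximal gradient), one classical member of this family, applied to a toy lasso problem. The problem data and step rule are illustrative only and are not taken from the dissertation; in particular, the acceleration technique the dissertation proposes is not shown here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, steps=500):
    """Forward-backward splitting for 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the smooth gradient
    for _ in range(steps):
        grad = A.T @ (A @ x - b)                   # forward step: gradient of the smooth piece
        x = soft_threshold(x - grad / L, lam / L)  # backward step: prox of the l1 piece
    return x

# Small illustrative instance (not from the dissertation)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 0.2, 1.2])
x = proximal_gradient(A, b, lam=0.5)
```

Each iteration handles the two pieces of the objective separately: a gradient step on the smooth least-squares term, then a cheap shrinkage step on the nonsmooth l1 term.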
International Conference on Continuous Optimization (ICCOPT) 2019 Conference Book
The Sixth International Conference on Continuous Optimization took place on the campus of the Technical University of Berlin, August 3-8, 2019. The ICCOPT is a flagship conference of the Mathematical Optimization Society (MOS), organized every three years. ICCOPT 2019 was hosted by the Weierstrass Institute for Applied Analysis and Stochastics (WIAS) Berlin. It included a Summer School and a Conference with a series of plenary and semi-plenary talks, organized and contributed sessions, and poster sessions.
This book comprises the full conference program. It contains, in particular, the scientific program both in survey form and in full detail, as well as information on the social program, the venue, special meetings, and more.
Hierarchical Structures in Equilibrium Problems
Department of Probability and Mathematical Statistics, Faculty of Mathematics and Physics
Non-convex myopic electricity markets: the AC transmission network and interdependent reserve types
Electricity markets are particularly complex because they must accommodate the underlying physics that govern the electric power system. These physics introduce non-convexities into the social welfare maximization problem, also called the economic dispatch problem, solved by the Independent System Operator (ISO), which is the social planner in this context. The non-convexity of this problem makes it difficult both to compute the social-welfare-maximizing dispatch and to derive a pricing structure that satisfies certain economic requirements, such as revenue adequacy of the ISO and non-negative operating profits for market participants. This dissertation analyzes two sources of non-convexity that pertain to two separate market changes recently proposed in Texas. Both proposals pertain to the real-time electricity market, which clears every 5 minutes and is myopic in the sense that only the demand at the end of the upcoming 5-minute interval is considered; no future time intervals enter the social welfare maximization problem.
The Electric Reliability Council of Texas (ERCOT) is the ISO in Texas and currently neglects resistive losses along transmission lines when formulating the economic dispatch problem. The first part of this dissertation concerns a proposed market change to incorporate transmission losses into the economic dispatch problem. Two general approaches are considered to accommodate the associated non-convexity. Similar to current practice, the first approach is based on a marginal pricing structure and uses convex approximations that facilitate efficient computation. With these approximations, the aforementioned economic requirements are proven to be satisfied approximately. The second approach is based on an alternative pricing structure in which prices are chosen to explicitly minimize the worst-case violation of these economic requirements; for example, the prices may be chosen to minimize the potential revenue shortfall of the ISO. These alternative prices are termed convex hull prices and can be approximated by use of convex relaxations.
The economic dispatch problem currently used by ERCOT does not endogenously represent operating reserves to handle contingencies that may occur. Instead, operating reserves are currently optimized separately from the electric power generation dispatch. The second part of this dissertation concerns a proposed market change to co-optimize reserve and generation dispatch in a social welfare maximization problem called a co-optimization problem. Implementation of the real-time co-optimization problem is being pursued simultaneously with a new definition of the primary frequency responsive reserve types considered in the market. One of these reserve types is intended to accommodate standard droop control. Another is newly introduced and is intended to facilitate the participation of fast-acting batteries in primary frequency response. This dissertation derives reserve requirements from first principles that capture the coupling of these two reserve types as well as their ramping abilities. The newly proposed non-convex requirements represent limits on the ramp-constrained primary frequency responsive reserve procurement. Placing these non-convex requirements into a co-optimization problem is proven to result in the satisfaction of the aforementioned economic requirements.
Electrical and Computer Engineering
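To make the pricing concepts concrete, the following toy merit-order dispatch computes a uniform marginal clearing price in the idealized convex, lossless, unconstrained case. The generator offers are hypothetical, and the sketch deliberately omits the network, losses, and reserves that the dissertation actually treats.

```python
def merit_order_dispatch(offers, demand):
    """offers: list of (marginal_cost, capacity) pairs; returns (dispatch, clearing price)."""
    dispatch, price, remaining = [], 0.0, demand
    for cost, cap in sorted(offers):   # cheapest offers clear first
        q = min(cap, remaining)
        if q > 0:
            price = cost               # the marginal unit sets the uniform price
        dispatch.append((cost, q))
        remaining -= q
    if remaining > 1e-9:
        raise ValueError("demand exceeds total offered capacity")
    return dispatch, price

# Hypothetical offers in ($/MWh, MW); demand in MW
offers = [(20.0, 50.0), (35.0, 50.0), (60.0, 50.0)]
dispatch, price = merit_order_dispatch(offers, demand=80.0)

# Each dispatched generator earns (price - cost) * q >= 0, and the ISO's
# payments to generators equal its collections from load (revenue adequacy).
profits = [(price - cost) * q for cost, q in dispatch]
```

In this convex baseline, marginal pricing automatically yields non-negative operating profits and revenue adequacy; the dissertation's difficulty is precisely that these guarantees break down once non-convexities enter the dispatch problem.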
Topics in dynamic programming
Dynamic programming is an essential tool lying at the heart of many problems in the modern theory of economic dynamics. Due to its versatility in solving dynamic optimization problems, it can be used to study the decisions of households, firms, governments, and other economic agents with a wide range of applications in macroeconomics and finance. Dynamic programming transforms dynamic optimization problems to a class of functional equations, the Bellman equations, which can be solved via appropriate mathematical tools. One of the most important tools is the contraction mapping theorem, a fixed point theorem that can be used to solve the Bellman equation under the usual discounting assumption for economic agents. However, many recent economic models often make alternative discounting assumptions under which contraction no longer holds. This is the primary motivation for the thesis.
This thesis is a re-examination of the standard discrete-time infinite horizon dynamic programming theory under two different discounting specifications: state-dependent discounting and negative discounting. For the case of state-dependent discounting, the standard discounting condition is generalized to an "eventual discounting" condition, under which the Bellman operator is a contraction in the long run rather than in one step. For negative discounting, the theory of monotone concave operators is used to derive a unique solution to the Bellman equation; no contraction mapping arguments are required. The core results of the standard theory are extended to these two cases, and economic applications are discussed.
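The "eventual discounting" idea can be sketched numerically. In the toy policy-evaluation example below, one state's discount factor exceeds one, so the Bellman operator is not a one-step contraction, yet iteration still converges because the spectral radius of diag(beta) @ P is below one. All numbers are illustrative and not taken from the thesis.

```python
import numpy as np

beta = np.array([1.05, 0.50])       # state-dependent discount factors (note beta[0] > 1)
P = np.array([[0.5, 0.5],           # Markov transition matrix
              [0.5, 0.5]])
r = np.array([1.0, 2.0])            # per-state reward

B = np.diag(beta) @ P
# "Eventual discounting": the long-run contraction condition holds even
# though beta[0] > 1 rules out a one-step contraction in the sup norm.
assert max(abs(np.linalg.eigvals(B))) < 1.0

v = np.zeros(2)
for _ in range(200):
    v = r + B @ v                   # iterate the (policy-evaluation) Bellman operator

v_exact = np.linalg.solve(np.eye(2) - B, r)   # closed-form fixed point
```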
Global optimization algorithms for semi-infinite and generalized semi-infinite programs
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2008. Includes bibliographical references (p. 235-249).
The goals of this thesis are the development of global optimization algorithms for semi-infinite and generalized semi-infinite programs and the application of these algorithms to kinetic model reduction. The outstanding issue with semi-infinite programming (SIP) was a methodology that could provide a certificate of global optimality on finite termination for SIP with nonconvex functions participating. We have developed the first methodology that can generate guaranteed feasible points for SIP and provide ε-global optimality on finite termination. The algorithm has been implemented in a branch-and-bound (B&B) framework and uses discretization coupled with convexification for the lower bounding problem and the interval constrained reformulation for the upper bounding problem. Within the framework of SIP we have also proposed a number of feasible-point methods that all rely on the same basic principle: the relaxation of the lower-level problem causes a restriction of the outer problem and vice versa. All these methodologies were tested using the Watson test set. It was concluded that the concave overestimation of the SIP constraint using McCormick relaxations and a KKT treatment of the resulting expression is the most computationally expensive method but provides tighter bounds than the interval constrained reformulation or a concave overestimator of the SIP constraint followed by linearization. All methods can work very efficiently for small problems (1-3 parameters) but suffer from the drawback that, in order to converge to the global solution value, the parameter set needs to be subdivided. Therefore, for problems with more than 4 parameters, intractable subproblems arise very high in the B&B tree and render global solution of the whole problem infeasible.
The second contribution of the thesis was the development of the first finite procedure that generates guaranteed feasible points and a certificate of ε-global optimality for generalized semi-infinite programs (GSIP) with nonconvex functions participating. The algorithm employs interval extensions on the lower-level inequality constraints and then uses discretization and the interval constrained reformulation for the lower and upper bounding subproblems, respectively. We have demonstrated that our method can handle the irregular behavior of GSIP, such as the non-closedness of the feasible set, the existence of re-entrant corner points, the infimum not being attained and, above all, problems with nonconvex functions participating. Finally, we have proposed an extensive test set consisting of both literature and original examples. Similar to the case of SIP, to guarantee ε-convergence the parameter set needs to be subdivided, and therefore only small examples (1-3 parameters) can be handled in this framework in reasonable computational times (at present).
The final contribution of the thesis was the development of techniques to provide optimal ranges of valid reduction between full and reduced kinetic models. First of all, we demonstrated that kinetic model reduction is a design centering problem and explored alternative optimization formulations such as SIP, GSIP and bilevel programming. Secondly, we showed that our SIP and GSIP techniques are probably not capable of handling large-scale systems, even if kinetic model reduction has a very special structure, because of the need for subdivision, which leads to an explosion in the number of constraints. Finally, we propose alternative ways of estimating feasible regions of valid reduction using interval theory, critical points and line minimization.
by Panayiotis Lemonidis. Ph.D.
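The discretization idea behind the SIP lower-bounding problem can be seen on a one-line toy example: replacing "for all p in P" by finitely many points relaxes the problem, so the relaxation's optimal value is a lower bound that tightens as the grid is refined. The example below is purely illustrative and is not the thesis's B&B algorithm.

```python
# Toy SIP:  minimize x  subject to  x >= p * (1 - p) for all p in [0, 1].
# Its exact optimal value is 1/4, set by the binding constraint at p = 1/2.

def lower_bound(n):
    """Optimal value of the relaxation that enforces the constraint only
    on an n-point uniform grid over [0, 1]."""
    grid = [i / (n - 1) for i in range(n)]
    return max(p * (1 - p) for p in grid)   # optimal x of the relaxed problem

# Refining the grid tightens the lower bound toward the true value 0.25.
bounds = [lower_bound(n) for n in (2, 4, 8)]
```

The thesis's algorithms couple this kind of lower bound with an upper-bounding (restriction) step so that finite termination comes with a guaranteed feasible point.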
DECENTRALIZED ALGORITHMS FOR NASH EQUILIBRIUM PROBLEMS: APPLICATIONS TO MULTI-AGENT NETWORK INTERDICTION GAMES AND BEYOND
Nash equilibrium problems (NEPs) have gained popularity in recent years in the engineering community due to their ready applicability to a wide variety of practical problems, ranging from communication network design to power market analysis. There are strong links between the tools used to analyze NEPs and the classical techniques of nonlinear and combinatorial optimization. However, there remain significant challenges in both the theoretical and algorithmic analysis of NEPs. This dissertation studies certain special classes of NEPs, with the overall purpose of analyzing theoretical properties such as existence and uniqueness, while at the same time proposing decentralized algorithms that provably converge to solutions. The subclasses are motivated by relevant application examples.
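As a minimal illustration of a decentralized algorithm for a NEP, the sketch below runs simultaneous best-response updates on a two-player Cournot game, where each player needs only the other player's last action. The game data are hypothetical, and this simple game is far easier than the network interdiction games the dissertation studies.

```python
# Two-player Cournot game: player i chooses quantity q_i to maximize
# q_i * (a - q_i - q_j) - c * q_i.  Parameters are illustrative.
a, c = 10.0, 1.0

def best_response(q_other):
    """Unconstrained maximizer of a player's profit, clipped at zero."""
    return max((a - c - q_other) / 2.0, 0.0)

q1, q2 = 0.0, 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)   # simultaneous, decentralized updates

q_star = (a - c) / 3.0    # the known Cournot-Nash equilibrium quantity
```

Here the best-response map is a contraction, so the decentralized iteration provably converges; establishing such guarantees for broader classes of NEPs is the kind of question the dissertation addresses.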
Scalarized Preferences in Multi-objective Optimization
Multi-objective optimization problems have no solution that is optimal in every objective function. The difficulty of such problems lies in finding a compromise solution that satisfies the preferences of the decision maker who implements the compromise. Scalarization, the mapping of the vector of objective values to a real number, identifies a single solution as the global preference optimum in order to solve these problems. However, scalarization methods generate no additional information about other compromise solutions that might change the decision maker's preferences regarding the global optimum. To address this problem, this dissertation provides a theoretical and an algorithmic analysis of scalarized preferences. The theoretical analysis consists of developing an ordering framework that characterizes preferences as problem transformations defining preferred subsets of the Pareto front. Scalarization is represented as a transformation of the objective set within this framework. Furthermore, axioms are proposed that capture desirable properties of scalarization functions, and it is shown under which conditions existing scalarization functions satisfy these axioms. The algorithmic analysis characterizes preferences by the result that an optimization algorithm generates. Two new paradigms are identified within this analysis, and for both, algorithms are designed that use scalarized preference information: preference-biased Pareto front approximations distribute points over the entire Pareto front but concentrate more points in regions with better scalarization values; multimodal preference optima are points that represent local scalarization optima in the objective space.
A three-stage algorithm is developed that approximates local scalarization optima, and different methods are evaluated for the individual stages. Two real-world problems are presented that illustrate the usefulness of the two algorithms. The first problem consists of finding schedules for a combined heat and power plant that maximize the generated electricity and heat while minimizing fuel consumption. Preference-biased approximations generate more energy-efficient solutions, among which the decision maker can select a favored solution by weighing the trade-offs between the three objectives. The second problem concerns creating schedules for appliances in a residential building such that energy costs, carbon dioxide emissions, and thermal discomfort are minimized. It is shown that local scalarization optima represent schedules that offer a good balance between the three objectives. The analysis and experiments presented in this work enable decision makers to make better decisions by applying methods that generate more options consistent with the decision makers' preferences.
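Weighted-sum scalarization, the simplest mapping of the objective vector to a real number, can be sketched as follows; sweeping the weight recovers different Pareto-optimal compromises. The bi-objective problem and the grid search are illustrative only and do not reproduce the dissertation's ordering framework or algorithms.

```python
# Bi-objective toy problem: f1(x) = x^2, f2(x) = (x - 2)^2.
# Its Pareto-optimal set is the interval [0, 2].

def scalarized_optimum(w, lo=-1.0, hi=3.0, n=4001):
    """Minimize the weighted sum w*f1 + (1 - w)*f2 over a uniform grid."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=lambda x: w * x ** 2 + (1 - w) * (x - 2) ** 2)

# Each weight singles out one Pareto-optimal compromise solution.
solutions = [round(scalarized_optimum(w), 3) for w in (0.0, 0.5, 1.0)]
```

As the dissertation notes, each scalarization run returns only a single point; the preference-biased approximations and multimodal preference optima it studies are ways of recovering the additional compromise information this single number discards.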