
    A Newton Collocation Method for Solving Dynamic Bargaining Games

    We develop and implement a collocation method to solve for an equilibrium in the dynamic legislative bargaining game of Duggan and Kalandrakis (2008). We formulate the collocation equations in a quasi-discrete version of the model, and we show that the collocation equations are locally Lipschitz continuous and directionally differentiable. In numerical experiments, we successfully implement a globally convergent variant of Broyden's method on a preconditioned version of the collocation equations, and the method reduces computational cost by more than 50% compared to the value iteration method. We rely on a continuity property of the equilibrium set to obtain increasingly precise approximations of solutions to the continuum model. We showcase these techniques with an illustration of the dynamic core convergence theorem of Duggan and Kalandrakis (2008) in a nine-player, two-dimensional model with negative quadratic preferences.
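    The abstract does not spell out the globalized Broyden variant, but a standard way to globalize Broyden's method is to damp each quasi-Newton step with a backtracking line search on the residual norm. The sketch below illustrates that generic scheme on a small piecewise-smooth test system; the test function, tolerances, and step constants are illustrative assumptions, not the authors' preconditioned collocation equations.

    ```python
    import numpy as np

    def broyden_globalized(F, x0, tol=1e-10, max_iter=200):
        """Broyden's method with a backtracking line search on ||F(x)||^2.

        A generic globalization sketch -- NOT the authors' preconditioned
        collocation solver, whose details are not given in the abstract.
        """
        x = x0.astype(float)
        B = np.eye(len(x))                # initial Jacobian approximation
        f = F(x)
        for _ in range(max_iter):
            if np.linalg.norm(f) < tol:
                return x
            dx = np.linalg.solve(B, -f)   # quasi-Newton direction
            t, m0 = 1.0, f @ f
            while t > 1e-12:              # backtrack until the merit decreases
                f_new = F(x + t * dx)
                if f_new @ f_new < (1.0 - 1e-4 * t) * m0:
                    break
                t *= 0.5
            s = t * dx
            y = f_new - f
            B += np.outer(y - B @ s, s) / (s @ s)   # Broyden rank-1 update
            x, f = x + s, f_new
        return x

    # Illustrative piecewise-smooth test system; solution is (0.5, 0.5).
    F = lambda x: np.array([np.maximum(x[0], 0.5 * x[0]) + x[1] - 1.0,
                            x[0] - x[1]])
    print(broyden_globalized(F, np.array([5.0, -3.0])))
    ```

    The line search guarantees a monotone decrease of the residual merit function, which is what "globally convergent" refers to in such schemes: progress from remote starting points, not convergence to a global optimum.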

    A Penalty Method for Correlation Matrix Problems with Prescribed Constraints

    Master's thesis, Master of Science.

    Structured Low Rank Matrix Optimization Problems: A Penalty Approach

    Ph.D. thesis, Doctor of Philosophy.

    An allocation-based modeling and solution framework for location problems with dense demand

    In this thesis we present a unified framework for planar location-allocation problems with dense demand. The emergence of information technologies such as Geographical Information Systems (GIS) has enabled access to detailed demand information. This proliferation of demand data brings about serious computational challenges for traditional approaches, which are based on discrete demand representation. Furthermore, traditional approaches model the problem in the location variable space and decide on the allocation decisions optimally given the locations. This is equivalent to prioritizing location decisions. However, when allocation decisions are more decisive, or the choice of exact locations is a later-stage decision, we need to prioritize allocation decisions. Motivated by these trends and challenges, we herein adopt a modeling and solution approach in the allocation variable space. Our approach has two fundamental characteristics: demand representation in the form of continuous density functions, and allocation decisions in the form of service regions. Accordingly, our framework is based on continuous optimization models and solution methods. On a plane, service regions (allocation decisions) assume different shapes depending on the metric chosen. Hence, this thesis presents separate approaches for two-dimensional Euclidean-metric and Manhattan-metric distance measures. Further, the solution approaches of this thesis can be classified as constructive and improvement-based procedures. We show that the constructive solution approach, namely the shooting algorithm, is an efficient procedure for solving both the single-dimensional n-facility and the planar 2-facility problems. While the constructive solution approach is analogous for both metric cases, the improvement approach differs due to the shapes of the service regions. In the Euclidean-metric case, a pair of service regions is separated by a straight line; in the Manhattan-metric case, however, separation takes place along at most three line segments. For planar 2-facility Euclidean-metric problems, we show that shape-preserving transformations (rotation and translation) of a line allow us to design improvement-based solution approaches. Furthermore, we extend this shape-preserving transformation concept to the n-facility case via a vertex-iteration-based improvement approach and design first-order and second-order solution methods. In the case of planar 2-facility Manhattan-metric problems, we adopt translation as the shape-preserving transformation for each line segment and develop an improvement-based solution approach. For the n-facility case, we provide a hybrid algorithm. Lastly, we provide the results of a computational study and complexity results for our vertex-based algorithm.
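    As a toy illustration of serving continuous demand with regions rather than assigning discrete points, the sketch below alternates between an allocation step and a location step for two facilities on a line: each point of a demand density is assigned to the nearer facility (so the service regions split at the midpoint), and each facility then moves to the weighted median of its region, the 1-D transport-cost minimizer. This is a generic Cooper-style location-allocation heuristic under an assumed density and grid, not the thesis's shooting algorithm.

    ```python
    import numpy as np

    def weighted_median(x, w):
        """Point m minimizing sum_i w_i |x_i - m| (1-D transport cost)."""
        c = np.cumsum(w)
        return x[np.searchsorted(c, 0.5 * c[-1])]

    def alternate_1d(density, grid, x=(0.25, 0.75), iters=100):
        """Alternating location-allocation for two facilities on a line.

        Allocation step: each demand point is served by the nearer facility,
        so the two service regions split at the midpoint. Location step: each
        facility moves to its region's weighted median. A generic heuristic
        for continuous demand -- not the thesis's shooting algorithm.
        """
        w = density(grid)
        a, b = x
        for _ in range(iters):
            left = grid <= 0.5 * (a + b)          # nearest-facility boundary
            a_new = weighted_median(grid[left], w[left])
            b_new = weighted_median(grid[~left], w[~left])
            if abs(a_new - a) + abs(b_new - b) < 1e-9:
                break
            a, b = a_new, b_new
        return a, b

    grid = np.linspace(0.0, 1.0, 2001)
    density = lambda t: 1.0 + 4.0 * t    # illustrative density, rising to the right
    print(alternate_1d(density, grid))
    ```

    Both facilities drift toward the right half of the interval, where the assumed density concentrates demand, which is the qualitative behavior one expects from region-based allocation.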

    Acta Cybernetica: Volume 20, Number 1.


    Recent Experiences in Multidisciplinary Analysis and Optimization, part 2

    The papers presented at the NASA Symposium on Recent Experiences in Multidisciplinary Analysis and Optimization, held at NASA Langley Research Center, Hampton, Virginia, April 24-26, 1984, are given. The purposes of the symposium were to exchange information about the status of the application of optimization and the associated analyses to real-life problems in industry and research laboratories, and to examine the directions of future developments.

    Robust simulation and optimization methods for natural gas liquefaction processes

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Chemical Engineering, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 313-324).
    Natural gas is one of the world's leading sources of fuel in terms of both global production and consumption. The abundance of reserves that may be developed at relatively low cost, paired with escalating societal and regulatory pressures to harness low-carbon fuels, situates natural gas in a position of growing importance to the global energy landscape. However, the nonuniform distribution of readily developable natural gas sources around the world necessitates an international gas market that can serve regions without reasonable access to reserves. International transmission of natural gas via pipeline is generally cost-prohibitive beyond around two thousand miles, so suppliers instead turn to the production of liquefied natural gas (LNG) to yield a tradable commodity. While the production of LNG is by no means a new technology, it has not occupied a dominant role in the gas trade to date. However, significant growth in LNG exports has been observed within the last few years, and this trend is expected to continue as major new liquefaction operations come online worldwide. Liquefaction of natural gas is an energy-intensive process requiring specialized cryogenic equipment, and is therefore expensive in terms of both operating and capital costs. However, optimization of liquefaction processes is greatly complicated by the inherently complex thermodynamic behavior of process streams that simultaneously change phase and exchange heat at closely matched cryogenic temperatures. Optimal conditions determined for one process will also generally not transfer to other LNG plants, as both the specifics of design (e.g., heat exchanger size and configuration) and the operation (e.g., source gas composition) may vary significantly between sites. Rigorous evaluation of process concepts for new production facilities is also challenging to perform, as economic objectives must be optimized in the presence of constraints involving equipment size and safety precautions even in the initial design phase. The absence of reliable and versatile software to perform such tasks was the impetus for this thesis project. To address these challenging problems, the aim of this thesis was to develop new models, methods and algorithms for robust liquefaction process simulation and optimization, and to synthesize these advances into reliable and versatile software. Recent advances in the sensitivity analysis of nondifferentiable functions provided an advantageous foundation for the development of physically informed yet compact process models that could be embedded in established simulation and optimization algorithms with strong convergence properties. Within this framework, a nonsmooth model for the core unit operation in all industrially relevant liquefaction processes, the multistream heat exchanger, was first formulated. The initial multistream heat exchanger model was then augmented to detect and handle internal phase transitions, and an extension of a classic vapor-liquid equilibrium model was proposed to account for the potential existence of solutions in single-phase regimes, all through the use of additional nonsmooth equations.
    While these initial advances enabled the simulation of liquefaction processes under simple, idealized thermodynamic models, it became apparent that these methods could not reliably handle calculations involving nonideal thermophysical property models. To this end, robust nonsmooth extensions of the celebrated inside-out algorithms were developed. These algorithms allow challenging phase equilibrium calculations to be performed successfully even in the absence of knowledge about the phase regime of the solution, as is the case when model parameters are chosen by a simulation or optimization algorithm. However, this was still not enough to equip realistic liquefaction process models with a completely reliable thermodynamics package, and so new nonsmooth algorithms were designed for the reasonable extrapolation of density from an equation of state under conditions where a given phase does not exist. This procedure greatly enhanced the ability of the nonsmooth inside-out algorithms to converge to physical solutions for mixtures at very high temperature and pressure. These models and submodels were then integrated into a flowsheeting framework to perform realistic simulations of natural gas liquefaction processes robustly, efficiently and with extremely high accuracy. A reliable optimization strategy using an interior-point method and the nonsmooth process models was then developed for complex problem formulations that rigorously minimize thermodynamic irreversibilities. This approach significantly outperforms other strategies proposed in the literature or implemented in commercial software in terms of ease of initialization, convergence rate and quality of solutions found. The performance observed and results obtained suggest that modeling and optimizing such processes using nondifferentiable models and appropriate sensitivity analysis techniques is a promising new approach to these challenging problems. Indeed, while liquefaction processes motivated this thesis, the majority of the methods described herein apply in general to processes with complex embedded thermodynamic or heat transfer considerations. It is conceivable that these models and algorithms could therefore inform a new, robust generation of process simulation and optimization software.
    by Harry Alexander James Watson. Ph.D.
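    The nonsmooth treatment of phase regimes can be illustrated compactly. In the spirit of the thesis's vapor-liquid equilibrium extension, embedding the Rachford-Rice equation in a median (mid) function yields a single equation whose root is the vapor fraction: it recovers the two-phase root when one exists and clamps the vapor fraction to 0 or 1 in subcooled or superheated regimes. The fixed K-values and simple bisection solver below are illustrative assumptions; the thesis couples such nonsmooth equations with full thermophysical property models.

    ```python
    import numpy as np

    def mid(a, b, c):
        """Median of three values -- the nonsmooth building block."""
        return np.sort([a, b, c])[1]

    def nonsmooth_flash(z, K, tol=1e-12):
        """Solve 0 = mid(beta, beta - 1, -RR(beta)) for vapor fraction beta.

        RR is the Rachford-Rice function. The mid equation returns the
        two-phase root when one exists and clamps beta to 0 (subcooled)
        or 1 (superheated) otherwise. Fixed K-values are an illustrative
        assumption -- the thesis embeds this idea in full property models.
        """
        z, K = np.asarray(z, float), np.asarray(K, float)
        RR = lambda b: np.sum(z * (K - 1.0) / (1.0 + b * (K - 1.0)))
        g = lambda b: mid(b, b - 1.0, -RR(b))
        lo, hi = 0.0, 1.0
        if g(lo) >= 0.0:      # subcooled liquid: beta = 0 solves the equation
            return lo
        if g(hi) <= 0.0:      # superheated vapor: beta = 1 solves the equation
            return hi
        while hi - lo > tol:  # g is monotone increasing, so bisection is safe
            m = 0.5 * (lo + hi)
            lo, hi = (m, hi) if g(m) < 0.0 else (lo, m)
        return 0.5 * (lo + hi)

    # Two-phase example: equimolar binary feed with K = (2.0, 0.5).
    print(nonsmooth_flash(z=[0.5, 0.5], K=[2.0, 0.5]))
    ```

    With this feed the Rachford-Rice root is beta = 0.5; perturbing K so that RR(0) <= 0 or RR(1) >= 0 makes the same single equation return 0 or 1, with no explicit phase-regime branching in the model.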

    Convex relaxation methods for graphical models: Lagrangian and maximum entropy approaches

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 241-257).
    Graphical models provide compact representations of complex probability distributions over many random variables through a collection of potential functions defined on small subsets of these variables. This representation is defined with respect to a graph in which nodes represent random variables and edges represent the interactions among those random variables. Graphical models provide a powerful and flexible approach to many problems in science and engineering, but also present serious challenges owing to the intractability of optimal inference and estimation over general graphs. In this thesis, we consider convex optimization methods to address two central problems that commonly arise for graphical models. First, we consider the problem of determining the most probable configuration, also known as the maximum a posteriori (MAP) estimate, of all variables in a graphical model, conditioned on (possibly noisy) measurements of some variables. This general problem is intractable, so we consider a Lagrangian relaxation (LR) approach to obtain a tractable dual problem. This involves using the Lagrangian decomposition technique to break up an intractable graph into tractable subgraphs, such as small "blocks" of nodes, embedded trees or thin subgraphs. We develop a distributed, iterative algorithm that minimizes the Lagrangian dual function by block coordinate descent. This results in an iterative marginal-matching procedure that enforces consistency among the subgraphs using an adaptation of the well-known iterative scaling algorithm. This approach is developed both for discrete-variable and Gaussian graphical models. In discrete models, we also introduce a deterministic annealing procedure, which introduces a temperature parameter to define a smoothed dual function and then gradually reduces the temperature to recover the (non-differentiable) Lagrangian dual. When strong duality holds, we recover the optimal MAP estimate. We show that this occurs for a broad class of "convex decomposable" Gaussian graphical models, which generalizes the "pairwise normalizable" condition known to be important for iterative estimation in Gaussian models. In certain "frustrated" discrete models, a duality gap can occur using simple versions of our approach. We consider methods that adaptively enhance the dual formulation, by including more complex subgraphs, so as to reduce the duality gap. In many cases we are able to eliminate the duality gap and obtain the optimal MAP estimate in a tractable manner. We also propose a heuristic method to obtain approximate solutions in cases where there is a duality gap. Second, we consider the problem of learning a graphical model (both the graph and its potential functions) from sample data. We propose the maximum entropy relaxation (MER) method, which is the convex optimization problem of selecting the least informative (maximum entropy) model over an exponential family of graphical models, subject to constraints that small subsets of variables should have marginal distributions close to the distribution of sample data. We use relative entropy to measure the divergence between marginal probability distributions. We find that MER leads naturally to selection of sparse graphical models.
    To identify this sparse graph efficiently, we use a "bootstrap" method that constructs the MER solution by solving a sequence of tractable subproblems defined over thin graphs, including new edges at each step to correct for large marginal divergences that violate the MER constraint. The MER problem on each of these subgraphs is efficiently solved using a primal-dual interior point method (implemented so as to take advantage of efficient inference methods for thin graphical models). We also consider a dual formulation of MER that minimizes a convex function of the potentials of the graphical model. This MER dual problem can be interpreted as a robust version of maximum-likelihood parameter estimation, where the MER constraints specify the uncertainty in the sufficient statistics of the model. This also corresponds to a regularized maximum-likelihood approach, in which an information-geometric regularization term favors selection of sparse potential representations. We develop a relaxed version of the iterative scaling method to solve this MER dual problem.
    by Jason K. Johnson. Ph.D.
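    To make the Lagrangian relaxation idea concrete, here is a small, hypothetical sketch of MAP estimation by dual decomposition: a three-variable binary cycle is split into a chain subproblem and a single-edge subproblem that share two variables, and Lagrange multipliers are adjusted until the shared copies agree. The potentials and the subgradient update are illustrative assumptions; the thesis instead minimizes the dual by block coordinate descent with an iterative-scaling-style marginal-matching step.

    ```python
    import itertools
    import numpy as np

    # A 3-variable binary cycle: unary payoffs th[i] and pairwise payoffs thp[(i,j)].
    th = {0: np.array([0.0, 1.2]), 1: np.array([0.0, -0.3]), 2: np.array([0.0, 0.8])}
    thp = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}   # penalty when x_i = x_j = 1

    def chain_max(lam):
        """MAP of subproblem A: chain 0-1-2 plus multipliers +lam on x0, x2."""
        best, arg = -np.inf, None
        for x in itertools.product([0, 1], repeat=3):
            v = (sum(th[i][x[i]] for i in range(3))
                 + thp[(0, 1)] * (x[0] & x[1]) + thp[(1, 2)] * (x[1] & x[2])
                 + lam[0] * x[0] + lam[1] * x[2])
            if v > best:
                best, arg = v, x
        return best, arg

    def edge_max(lam):
        """MAP of subproblem B: edge 0-2 with multipliers -lam on x0, x2."""
        best, arg = -np.inf, None
        for x0, x2 in itertools.product([0, 1], repeat=2):
            v = thp[(0, 2)] * (x0 & x2) - lam[0] * x0 - lam[1] * x2
            if v > best:
                best, arg = v, (x0, x2)
        return best, arg

    lam = np.zeros(2)
    for t in range(1, 101):                 # subgradient descent on the dual
        vA, xA = chain_max(lam)
        vB, xB = edge_max(lam)
        g = np.array([xA[0] - xB[0], xA[2] - xB[1]])
        if not g.any():                     # shared copies agree: no duality gap
            break
        lam -= (1.0 / t) * g                # diminishing step size
    print("MAP estimate:", xA, "dual value:", vA + vB)
    ```

    When the copies agree, the dual value equals the primal MAP value (here both are 1.2, attained at x = (1, 0, 0)), which is exactly the strong-duality situation the abstract describes; the "frustrated" models mentioned above are those where no multipliers produce such agreement.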