
    A Mean-Risk Mixed Integer Nonlinear Program for Network Protection

    Many of the infrastructure sectors that are considered to be crucial by the Department of Homeland Security include networked systems (physical and temporal) that function to move some commodity like electricity, people, or even communication from one location of importance to another. The costs associated with these flows make up the price of the network's normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. In order to limit the expense of a network's normal demand, we aim to increase the resilience of the system and specifically the resilience of the arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Without being able to determine future factors that cause damage both minor and extreme to the networks, officials must decide how to best allocate the limited funds now so that these essential systems can withstand the heavy weight of society's reliance. We model these resource allocation decisions using a two-stage stochastic program (SP) for the purpose of network protection. Starting with a general form for a basic two-stage SP, we enforce assumptions that specify characteristics key to this type of decision model. The second-stage objective, which represents the price of the network's routine functionality, is nonlinear, as it reflects the increasing marginal cost per unit of additional flow across an arc. After the model has been designed properly to reflect the network protection problem, we are left with a nonconvex, nonlinear, nonseparable risk-neutral program. This research focuses on key reformulation techniques that transform the problematic model into one that is convex, separable, and much more solvable. Our approach focuses on using perspective functions to convexify the feasibility set of the second stage and second-order conic constraints to represent nonlinear constraints in a form that better allows the use of computational solvers. Once these methods have been applied to the risk-neutral model, we introduce a risk measure into the first stage that allows us to control the balance between an efficient, solvable model and the need to hedge against extreme events. Using Benders cuts that exploit linear separability, we give a decomposition and solution algorithm for the general network model. The innovations included in this formulation are then implemented on a transportation network with given flow demand.
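    To make the perspective-function and second-order cone reformulation described above concrete, the following is a generic textbook sketch (not the paper's exact network model): for flow x across an arc with a capacity variable y, the epigraph of the perspective of a quadratic congestion cost is exactly second-order cone representable.

```latex
% Generic sketch of a perspective / second-order cone reformulation (textbook form,
% not the paper's exact model). For flow x, capacity variable y > 0, and epigraph
% variable t, the perspective of f(x) = x^2 gives the congestion-cost constraint
% t >= x^2 / y, which is equivalent to a rotated second-order cone:
\[
t \ge \frac{x^2}{y}, \quad y > 0
\qquad\Longleftrightarrow\qquad
\left\lVert \begin{pmatrix} 2x \\ t - y \end{pmatrix} \right\rVert_2 \le t + y,
\quad t \ge 0, \; y \ge 0,
\]
% which is the kind of conic constraint that off-the-shelf SOCP solvers handle directly.
```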

    Design of a flight control architecture using a non-convex bundle method

    We design a feedback control architecture for longitudinal flight of an aircraft. The multi-level architecture includes the flight control loop to govern the short-term dynamics of the aircraft, and the autopilot to control the long-term modes. Using H∞ performance and robustness criteria, the problem is cast as a non-convex and non-smooth optimization program. We present a non-convex bundle method, prove its convergence, and show that it is apt to solve the longitudinal flight control problem.
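    As background for the kind of program the abstract refers to, a standard statement of structured H∞ synthesis is given below (a generic sketch, not the authors' exact formulation; κ denotes the tunable gains of the fixed control architecture).

```latex
% Generic structured H-infinity synthesis problem (background sketch, not the paper's
% exact program). kappa collects the tunable gains of the flight control loop and
% autopilot; T_{w_i -> z_i}(kappa) are closed-loop performance/robustness channels.
\[
\min_{\kappa \in \mathbb{R}^{n}} \;\;
\max_{i = 1,\dots,m} \;
\bigl\lVert T_{w_i \to z_i}(\kappa) \bigr\rVert_{\infty}
\quad \text{subject to closed-loop internal stability.}
\]
% The objective is non-convex and non-smooth in kappa, which is why a bundle-type
% method (building a cutting-plane working model of the max of channel norms around
% the current iterate) is used rather than a smooth descent method.
```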

    Distributionally Robust Optimization: A Review

    The concepts of risk-aversion, chance-constrained optimization, and robust optimization have developed significantly over the last decade. The statistical learning community has also witnessed rapid theoretical and applied growth by relying on these concepts. A modeling framework, called distributionally robust optimization (DRO), has recently received significant attention in both the operations research and statistical learning communities. This paper surveys the main concepts of and contributions to DRO, and its relationships with robust optimization, risk-aversion, chance-constrained optimization, and function regularization.
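    For readers new to the framework, the generic DRO problem the review is organized around can be written as follows (an illustrative formulation, with a Wasserstein ambiguity set shown as one common choice).

```latex
% Generic distributionally robust optimization problem (illustrative form).
% x is the decision, xi the uncertain parameter, and P an ambiguity set of
% probability distributions; a Wasserstein ball of radius epsilon around the
% empirical distribution \hat{P}_N is one common choice of ambiguity set.
\[
\min_{x \in X} \; \sup_{\mathbb{P} \in \mathcal{P}} \;
\mathbb{E}_{\mathbb{P}}\bigl[ f(x,\xi) \bigr],
\qquad
\mathcal{P} = \bigl\{ \mathbb{P} \;:\; W\bigl(\mathbb{P}, \widehat{\mathbb{P}}_N\bigr) \le \varepsilon \bigr\}.
\]
% Replacing the sup with a coherent risk measure, or requiring a chance constraint to
% hold for every distribution in the ambiguity set, recovers the risk-averse and
% distributionally robust chance-constrained models the survey relates to DRO;
% shrinking the ambiguity set to a single distribution recovers ordinary stochastic
% programming.
```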

    Analyzing Firm Performance in the Insurance Industry Using Frontier Efficiency Methods

    In this introductory chapter to an upcoming book, the authors discuss the two principal types of efficiency frontier methodologies: the econometric (parametric) approach and the mathematical programming (non-parametric) approach. Frontier efficiency methodologies are useful in a variety of contexts: they can be used for testing economic hypotheses; providing guidance to regulators and policymakers; comparing economic performance across countries; and informing management of the effects of procedures and strategies adopted by the firm. The econometric approach requires the specification of a production, cost, revenue, or profit function as well as assumptions about error terms. But this methodology is vulnerable to errors in the specification of the functional form or error term. The mathematical programming or linear programming approach avoids this type of error and measures any departure from the frontier as a relative inefficiency. Because each of these methods has advantages and disadvantages, it is recommended to estimate efficiency using more than one method. An important step in efficiency analysis is the definition of inputs and outputs and their prices. Insurer inputs can be classified into three principal groups: labor, business services and materials, and capital. Three principal approaches have been used to measure outputs in the financial services sector: the asset or intermediation approach, the user-cost approach, and the value-added approach. The asset approach treats firms as pure financial intermediaries and would be inappropriate for insurers because they provide other services. The user-cost method determines whether a financial product is an input or output based on its net contribution to the revenues of the firm. This method requires precise data on products, revenues, and opportunity costs, which are difficult to estimate in insurance. The value-added approach is judged the most appropriate method for studying insurance efficiency. It considers all asset and liability categories to have some output characteristics rather than distinguishing inputs from outputs. In order to measure efficiency in the insurance industry, in which outputs are mostly intangible, measurable services must be defined. The three principal services provided by insurance companies are risk pooling and risk-bearing, "real" financial services relating to insured losses, and intermediation. The authors discuss how these services can be measured as outputs in value-added analysis. They then summarize the existing efficiency literature.
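    To make the mathematical programming (non-parametric) approach concrete, a standard input-oriented DEA envelopment program for a firm o is shown below (a textbook formulation, not tied to the chapter's data), with insurer inputs and value-added outputs defined as in the chapter.

```latex
% Input-oriented DEA (CCR) envelopment model for firm o among n firms (textbook form).
% x_{ij} are inputs (labor, business services and materials, capital), y_{rj} are
% value-added outputs (risk pooling/bearing, real financial services, intermediation),
% theta is firm o's efficiency score (theta = 1 means firm o lies on the frontier),
% and lambda_j are the intensity weights of the peer firms forming its reference point.
\[
\begin{aligned}
\min_{\theta,\ \lambda \ge 0} \quad & \theta \\
\text{s.t.} \quad & \sum_{j=1}^{n} \lambda_j\, x_{ij} \;\le\; \theta\, x_{io}, && i = 1,\dots,m,\\
& \sum_{j=1}^{n} \lambda_j\, y_{rj} \;\ge\; y_{ro}, && r = 1,\dots,s.
\end{aligned}
\]
```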

    Development and implementation of a B-Spline motion planning framework for autonomous mobile robots

    This project fits within the area of robotics. The main idea is to use the properties of B-spline curves to solve motion planning optimization problems. This approach departs from the traditional motion planning algorithms that are usually used. Due to its mathematical nature, it allows the use of results such as the Separating Hyperplane Theorem for obstacle avoidance. An important aspect to note is that this project will be integrated with projects developed by other students in order to participate in "The Autonomous Ship Challenge" competition, to be held in Norway.
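    As a small illustration of why the B-spline parameterization is convenient: a B-spline lies in the convex hull of its control points, so a hyperplane separating the control points from a convex obstacle certifies that the whole curve is collision-free. The sketch below solves that separation problem as a convex feasibility program (an illustrative stand-alone check, not the project's planner; cvxpy and the example point sets are assumptions).

```python
# Sketch of the convex-hull / separating-hyperplane idea behind B-spline obstacle
# avoidance (illustrative only; cvxpy is an assumed dependency and the point sets
# in the example are made up).
import numpy as np
import cvxpy as cp

def separating_hyperplane(ctrl_pts: np.ndarray, obstacle_verts: np.ndarray, margin: float = 0.1):
    """Find (a, b) with a.x >= b + margin on the spline's control points and
    a.x <= b - margin on the obstacle's vertices. Because a B-spline stays inside
    the convex hull of its control points, such a hyperplane certifies that the
    whole curve avoids the (convex) obstacle."""
    a = cp.Variable(2)
    b = cp.Variable()
    constraints = [
        ctrl_pts @ a >= b + margin,        # control points on one side
        obstacle_verts @ a <= b - margin,  # obstacle vertices on the other side
        cp.norm(a, 2) <= 1.0,              # normalize the separating normal
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
        return a.value, b.value
    return None  # no separating hyperplane: the convex hulls overlap

if __name__ == "__main__":
    ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.5], [3.0, 1.0]])    # spline control points
    obst = np.array([[2.5, -1.0], [4.0, -1.0], [4.0, 0.2], [2.5, 0.2]])  # convex obstacle
    print(separating_hyperplane(ctrl, obst))
```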

    Multi-Period Natural Gas Market Modeling - Applications, Stochastic Extensions and Solution Approaches

    This dissertation develops deterministic and stochastic multi-period mixed complementarity problems (MCP) for the global natural gas market, as well as solution approaches for large-scale stochastic MCP. The deterministic model is unique in its combination of the level of detail of the actors in the natural gas markets and the transport options, the detailed regional and global coverage, the multi-period approach with endogenous capacity expansions for transportation and storage infrastructure, the seasonal variation in demand, and the representation of market power according to Nash-Cournot theory. The model is applied to several scenarios for the natural gas market that cover the formation of a cartel by the members of the Gas Exporting Countries Forum, a low availability of unconventional gas in the United States, and cost reductions in long-distance gas transportation. The results provide insights into how different regions are affected by various developments, in terms of production, consumption, traded volumes, prices, and profits of market participants. The stochastic MCP is developed and applied to a global natural gas market problem with four scenarios for a time horizon until 2050, with nineteen regions and 78,768 variables. The scenarios vary in the possibility of a gas market cartel formation and in the depletion rates of gas reserves in the major gas importing regions. Outcomes for the hedging decisions of market participants show some significant shifts in the timing and location of infrastructure investments, thereby affecting local market situations. A first application of Benders decomposition (BD) is presented to solve a large-scale stochastic MCP for the global gas market with many hundreds of first-stage capacity expansion variables and market players exerting various levels of market power. The largest problem solved successfully using BD contained 47,373 variables, of which 763 were first-stage variables; however, using BD did not result in shorter solution times relative to solving the extensive form. Larger problems, up to 117,481 variables, were solved in extensive form, but not when applying BD, due to numerical issues. It is discussed how BD could significantly reduce the solution time of large-scale stochastic models, but various challenges remain and more research is needed to assess the potential of Benders decomposition for solving large-scale stochastic MCP.
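    For readers unfamiliar with the MCP format, Nash-Cournot market power is typically encoded through complementarity conditions of the textbook form below (a generic single-market illustration, omitting the dissertation's multi-period, transport, storage, and scenario structure).

```latex
% Generic Nash-Cournot complementarity condition for producer i (textbook form).
% q_i is i's sales, Q = sum_j q_j total sales, P(Q) the inverse demand curve,
% c_i the production cost, and "perp" means at least one side holds with equality.
\[
0 \;\le\; q_i \;\perp\; c_i'(q_i) \;-\; P(Q) \;-\; \theta_i\, q_i\, P'(Q) \;\ge\; 0,
\qquad \theta_i \in [0,1],
\]
% where theta_i = 1 corresponds to full Cournot market power and theta_i = 0 to
% price-taking behaviour; stacking conditions of this kind for all players, markets,
% periods, and scenarios yields the (stochastic) MCP the dissertation solves.
```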

    Split rank of triangle and quadrilateral inequalities

    A simple relaxation of two rows of a simplex tableau is a mixed integer set consisting of two equations with two free integer variables and non-negative continuous variables. Recently, Andersen et al. [2] and Cornuéjols and Margot [13] showed that the facet-defining inequalities of this set are either split cuts or intersection cuts obtained from lattice-free triangles and quadrilaterals. Through a result by Cook et al. [12], it is known that one particular class of facet-defining triangle inequality does not have a finite split rank. In this paper, we show that all other facet-defining triangle and quadrilateral inequalities have finite split rank. The proof is constructive: given a facet-defining triangle or quadrilateral inequality, we present an explicit sequence of split inequalities that can be used to generate it.
    Keywords: mixed integer programs, split rank, group relaxations
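    For context, the split cuts and split rank referred to above are defined as follows (standard definitions, consistent with the cited literature).

```latex
% Standard definition of a split cut (background for the result above).
% P is the linear relaxation of a mixed integer set with integer variables x and
% continuous variables y; (pi, pi_0) in Z^n x Z defines a split disjunction.
\[
P^{(\pi,\pi_0)} \;=\; \operatorname{conv}\Bigl(
 \bigl\{ (x,y) \in P : \pi^{\top} x \le \pi_0 \bigr\}
 \;\cup\;
 \bigl\{ (x,y) \in P : \pi^{\top} x \ge \pi_0 + 1 \bigr\} \Bigr).
\]
% Any inequality valid for P^{(pi,pi_0)} is a split cut for P. The split closure
% intersects these sets over all (pi, pi_0), and the split rank of an inequality is
% the number of rounds of the split closure needed before the inequality becomes valid.
```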

    Efficiency in Machine Learning with Focus on Deep Learning and Recommender Systems

    Machine learning algorithms have opened up countless doors for scientists tackling problems that had previously been inaccessible, and the applications of these algorithms are far from exhausted. However, as the complexity of the learning problem grows, so does the computational and memory cost of the appropriate learning algorithm. As a result, the training process for computationally heavy algorithms can take weeks or even months to reach a good result, which can be prohibitively expensive. The general inefficiency of machine learning algorithms is a significant bottleneck slowing progress in the application sciences. This thesis introduces three new methods of improving the efficiency of machine learning algorithms, focusing on expensive algorithms such as neural networks and recommender systems. The first method discussed makes structured reductions of fully connected layers in neural networks, which speeds up training and decreases the amount of storage required. The second method presented is an accelerated gradient descent method called Predictor-Corrector Gradient Descent (PCGD) that combines predictor-corrector techniques with stochastic gradient descent. The final technique introduced generates Artificial Core Users (ACUs) from the Core Users of a recommendation dataset. Core Users condense the number of users in a recommendation dataset without significant loss of information; Artificial Core Users improve the recommendation accuracy of Core Users yet still mimic real user data.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/162928/1/anesky_1.pd
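    As an illustration of the first idea in the abstract above, one common way to structurally reduce a fully connected layer is to replace its dense weight matrix with a low-rank factorization; the sketch below shows that reduction in PyTorch under that assumption (an illustrative stand-in, not necessarily the thesis's exact construction).

```python
# Sketch of one way to "structurally reduce" a fully connected layer: replace a dense
# d_in x d_out weight matrix with a rank-r factorization, cutting parameters and FLOPs
# from roughly d_in*d_out to r*(d_in + d_out). Low rank is an assumption made for this
# illustration, not a claim about the thesis's specific reduction.
import torch
import torch.nn as nn

def reduced_linear(d_in: int, d_out: int, rank: int) -> nn.Module:
    """Rank-r replacement for nn.Linear(d_in, d_out)."""
    return nn.Sequential(
        nn.Linear(d_in, rank, bias=False),  # project to a small intermediate space
        nn.Linear(rank, d_out),             # expand back to the original width
    )

if __name__ == "__main__":
    dense = nn.Linear(1024, 1024)                  # ~1.05M parameters
    reduced = reduced_linear(1024, 1024, rank=64)  # ~0.13M parameters
    x = torch.randn(8, 1024)
    print(dense(x).shape, reduced(x).shape)        # both: torch.Size([8, 1024])
    n_params = lambda m: sum(p.numel() for p in m.parameters())
    print(n_params(dense), n_params(reduced))
```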