    Data-Driven Estimation in Equilibrium Using Inverse Optimization

    Equilibrium modeling is common in a variety of fields such as game theory and transportation science. The inputs for these models, however, are often difficult to estimate, while their outputs, i.e., the equilibria they are meant to describe, are often directly observable. By combining ideas from inverse optimization with the theory of variational inequalities, we develop an efficient, data-driven technique for estimating the parameters of these models from observed equilibria. We use this technique to estimate the utility functions of players in a game from their observed actions and to estimate the congestion function on a road network from traffic count data. A distinguishing feature of our approach is that it supports both parametric and nonparametric estimation by leveraging ideas from statistical learning (kernel methods and regularization operators). In computational experiments involving Nash and Wardrop equilibria in a nonparametric setting, we find that a) we effectively estimate the unknown demand or congestion function, respectively, and b) our proposed regularization technique substantially improves the out-of-sample performance of our estimators.
    Comment: 36 pages, 5 figures. Additional theorems for generalization guarantees and statistical analysis added.
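
    A minimal sketch of the estimation idea on a toy two-route Wardrop network (not the paper's method, just its flavor): assume linear route travel times t_i(x) = a_i + theta*x with known free-flow times a_i and an unknown shared slope theta, observe noisy equilibrium flows, and recover theta by ridge-regularized least squares on the equilibrium conditions. The linear parametric form, the synthetic data, and the penalty weight are all illustrative assumptions; the paper itself also supports nonparametric (kernel-based) estimation.

```python
import numpy as np

# Toy Wardrop setting (illustrative assumptions): two parallel routes with
# travel times t_i(x_i) = a_i + theta * x_i, known free-flow times a_i,
# unknown congestion slope theta shared by both routes.
rng = np.random.default_rng(0)
a1, a2 = 10.0, 14.0          # known free-flow travel times
theta_true = 0.5             # "unknown" slope we try to recover

# At an equilibrium where both routes carry flow, a1 + theta*x1 = a2 + theta*x2,
# so x1 - x2 = (a2 - a1) / theta.  Generate noisy equilibrium observations.
demands = rng.uniform(20.0, 60.0, size=25)
x1 = (demands + (a2 - a1) / theta_true) / 2 + rng.normal(0.0, 0.5, size=25)
x2 = demands - x1

# Ridge-regularized least squares on the equilibrium residuals
#   sum_k (a1 + theta*x1_k - a2 - theta*x2_k)^2 + lam * theta^2,
# which has the closed-form minimizer below.
lam = 1.0
d = x1 - x2
c = a1 - a2
theta_hat = -c * d.sum() / (d @ d + lam)
print(f"true theta = {theta_true:.3f}, estimated theta = {theta_hat:.3f}")
```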

    Comparison of the formulations for a hub-and-spoke network design problem under congestion

    In this paper, we study the hub location problem with a power-law congestion cost and propose an exact solution approach. We formulate the problem in conic quadratic form and strengthen it with valid inequalities based on perspective cuts from mixed-integer nonlinear programming. In a numerical study, we compare two well-known formulations of the hub location problem, solved with different branch-and-cut strategies. The strengths and weaknesses of the formulations are summarized based on an extensive numerical study of the CAB data set.
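
    As a small, self-contained illustration of the conic quadratic idea (with an assumed congestion exponent of 1.5 and an arbitrary flow value, not the paper's full hub location model): a power-law term t >= f**1.5 can be written with two rotated second-order cone constraints, f**2 <= s*t and s**2 <= f, which the sketch below checks numerically using cvxpy.

```python
import cvxpy as cp

# Conic encoding of the power-law congestion term g(f) = f**1.5 (exponent
# assumed for illustration).  For f, s, t >= 0:
#     t >= f**1.5   iff   f**2 <= s*t  and  s**2 <= f  for some s.
f_obs = 7.0                       # hypothetical total flow through a hub

f = cp.Variable(nonneg=True)
t = cp.Variable(nonneg=True)      # epigraph variable for the congestion cost
s = cp.Variable(nonneg=True)      # auxiliary variable of the conic lifting

constraints = [
    f == f_obs,
    cp.quad_over_lin(f, s) <= t,  # f**2 <= s * t   (rotated cone)
    cp.square(s) <= f,            # s**2 <= f
]
cp.Problem(cp.Minimize(t), constraints).solve()

print(f"conic value of t : {t.value:.4f}")
print(f"direct f ** 1.5  : {f_obs ** 1.5:.4f}")
```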

    Convexification of Queueing Formulas by Mixed-Integer Second-Order Cone Programming: An Application to a Discrete Location Problem with Congestion

    Mixed-Integer Second-Order Cone Programs (MISOCPs) form a class of mixed-integer convex programs that can be solved very efficiently thanks to recent advances in optimization solvers. Our paper bridges the gap between modeling a class of optimization problems and using MISOCP solvers. It is shown how various performance metrics of M/G/1 queues can be modeled by different MISOCPs. To motivate our method practically, it is first applied to a challenging stochastic location problem with congestion, which is widely used to design socially optimal service networks. Four different MISOCPs are developed and compared on sets of benchmark test problems. The new formulations efficiently solve large test instances that cannot be solved by the best existing method. Then, the general applicability of our method is shown for similar optimization problems that use queueing-theoretic performance measures to address customer satisfaction and service quality.
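
    A minimal sketch of one such constraint, under simplifying assumptions: for the exponential-service special case of M/G/1 (i.e., M/M/1), the mean number in system lam/(mu - lam) can be pushed into an epigraph variable through the hyperbolic constraint lam <= t*(mu - lam), which is a rotated second-order cone. The arrival rate and cost coefficients below are made-up data, and this is a single continuous facility rather than the paper's discrete location model.

```python
import cvxpy as cp
import numpy as np

# Size the service rate mu of a single M/M/1-style facility, trading off
# capacity cost against congestion cost (all numbers are assumptions).
lam = 12.0            # arrival rate
cap_cost = 2.0        # cost per unit of installed service rate
cong_cost = 5.0       # cost per customer in the system

mu = cp.Variable()    # service rate to install
t = cp.Variable()     # epigraph variable, t >= lam / (mu - lam)

constraints = [
    mu >= lam + 1e-6,                                # stability: rho < 1
    cp.quad_over_lin(np.sqrt(lam), mu - lam) <= t,   # lam/(mu - lam) <= t
]
prob = cp.Problem(cp.Minimize(cap_cost * mu + cong_cost * t), constraints)
prob.solve()
print(f"service rate mu = {mu.value:.3f}, mean number in system = {t.value:.3f}")
```

    In a location setting, the queueing constraint keeps this form while discrete open/close and assignment decisions are added around it, which is what turns the overall model into a MISOCP.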

    A Mean-Risk Mixed Integer Nonlinear Program for Network Protection

    Many of the infrastructure sectors deemed critical by the Department of Homeland Security include networked systems (physical and temporal) that move some commodity, such as electricity, people, or even communication, from one location of importance to another. The costs associated with these flows make up the price of the network's normal functionality. These networks have limited capacities, which cause the marginal cost of a unit of flow across an edge to increase as congestion builds. To limit the expense of meeting a network's normal demand, we aim to increase the resilience of the system, and specifically the resilience of its arc capacities. Divisions of critical infrastructure have faced difficulties in recent years as inadequate resources have been available for needed upgrades and repairs. Because the future events that cause damage, both minor and extreme, to these networks cannot be known in advance, officials must decide how best to allocate limited funds now so that these essential systems can withstand the heavy weight of society's reliance. We model these resource allocation decisions using a two-stage stochastic program (SP) for network protection. Starting with a general form of a basic two-stage SP, we impose assumptions that capture the characteristics key to this type of decision model. The second-stage objective, which represents the price of the network's routine functionality, is nonlinear, as it reflects the increasing marginal cost per unit of additional flow across an arc. Once the model has been designed to reflect the network protection problem, we are left with a nonconvex, nonlinear, nonseparable risk-neutral program. This research focuses on key reformulation techniques that transform this problematic model into one that is convex, separable, and far more tractable. Our approach uses perspective functions to convexify the second-stage feasible set and second-order cone constraints to express the nonlinear constraints in a form better suited to computational solvers. Once these methods have been applied to the risk-neutral model, we introduce a risk measure into the first stage that lets us balance an efficient, solvable model against the need to hedge against extreme events. Using Benders cuts that exploit linear separability, we give a decomposition and solution algorithm for the general network model. The innovations in this formulation are then implemented on a transportation network with given flow demand.
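
    A toy numerical sketch of the perspective convexification step (made-up numbers, a single arc, and a continuous relaxation rather than the full two-stage model): for a quadratic congestion cost on an arc that carries flow only if it is kept open (indicator z), the perspective epigraph t*z >= x**2 is a rotated second-order cone and gives a strictly tighter continuous relaxation than the naive epigraph t >= x**2, which is the tightening effect that perspective-function reformulations exploit.

```python
import cvxpy as cp

# One arc with capacity u, a fixed cost for keeping it open (indicator z),
# and quadratic congestion cost x**2 on its flow x.  All numbers are assumed.
u = 10.0        # arc capacity
fixed = 10.0    # fixed cost of keeping the arc open
x_req = 3.0     # flow the arc must carry in this toy scenario


def relaxation_bound(perspective: bool) -> float:
    """Optimal value of the continuous relaxation (z in [0, 1])."""
    x, z, t = cp.Variable(), cp.Variable(), cp.Variable()
    cons = [x == x_req, x <= u * z, z >= 0, z <= 1, t >= 0]
    if perspective:
        cons.append(cp.quad_over_lin(x, z) <= t)  # perspective: x**2 <= t * z
    else:
        cons.append(cp.square(x) <= t)            # naive:       x**2 <= t
    prob = cp.Problem(cp.Minimize(t + fixed * z), cons)
    prob.solve()
    return prob.value


# With z binary the true cost is x = 3, z = 1: 3**2 + 10 = 19.  The perspective
# relaxation bound is far closer to 19 than the naive bound.
print(f"naive relaxation bound      : {relaxation_bound(False):.3f}")
print(f"perspective relaxation bound: {relaxation_bound(True):.3f}")
```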