
    Approximation Algorithms for 2-Stage Stochastic Optimization Problems

    Stochastic optimization is a leading approach to modeling optimization problems in which there is uncertainty in the input data, whether from measurement noise or an inability to know the future. In this survey, we outline some recent progress in the design of polynomial-time algorithms with performance guarantees on the quality of the solutions found for an important class of stochastic programming problems: 2-stage problems with recourse. In particular, we show that for a number of concrete problems, algorithmic approaches that have been applied to their deterministic analogues are also effective in this more challenging domain. More specifically, this work highlights the role of tools from linear programming, of rounding techniques and primal-dual algorithms, and, more generally, of randomization.
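    For concreteness, the 2-stage recourse model surveyed above is usually formalized as follows (a standard formulation in our own notation, not quoted from the survey): first-stage actions x are chosen, a scenario A is then drawn from the known distribution p, and second-stage recourse actions are bought to restore feasibility, giving the objective

        \min_{x} \; c(x) + \mathbb{E}_{A \sim p}\bigl[ f_A(x) \bigr],

    where c(x) is the first-stage cost and f_A(x) is the minimum cost of recourse actions that, together with x, are feasible for scenario A. Recourse costs are typically inflated relative to first-stage costs, which is what makes hedging in the first stage worthwhile.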

    Approximation Algorithms for Distributionally Robust Stochastic Optimization

    Two-stage stochastic optimization is a widely used framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we take first-stage actions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A common criticism levied at this model is that the underlying probability distribution is itself often imprecise. To address this, an approach that is quite versatile and has gained popularity in the stochastic-optimization literature is the two-stage distributionally robust stochastic model: given a collection D of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in D.

    There has been, however, almost no prior work on developing approximation algorithms for distributionally robust problems where the underlying scenario collection is discrete, as is the case with discrete-optimization problems. We provide frameworks for designing approximation algorithms in such settings when the collection D is a ball around a central distribution, defined relative to two notions of distance between probability distributions: Wasserstein metrics (which include the L_1 metric) and the L_infinity metric. Our frameworks yield efficient algorithms even in settings with an exponential number of scenarios, where the central distribution may only be accessed via a sampling oracle.

    For distributionally robust optimization under a Wasserstein ball, we first show that one can utilize the sample average approximation (SAA) method (solve the distributionally robust problem with an empirical estimate of the central distribution) to reduce the problem to the case where the central distribution has a polynomial-size support and is represented explicitly. This follows because we argue that a distributionally robust problem can be reduced in a novel way to a standard two-stage stochastic problem with bounded inflation factor, which enables one to use the SAA machinery developed for two-stage stochastic problems. Complementing this, we show how to approximately solve a fractional relaxation of the SAA problem (i.e., the distributionally robust problem obtained by replacing the original central distribution with its empirical estimate). Unlike in two-stage {stochastic, robust} optimization with polynomially many scenarios, this turns out to be quite challenging. We utilize a variant of the ellipsoid method for convex optimization, in conjunction with several new ideas, to show that the SAA problem can be approximately solved provided we have an (approximation) algorithm for a certain max-min problem that is akin to, and generalizes, the k-max-min problem (find the worst-case scenario consisting of at most k elements) encountered in two-stage robust optimization.
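    In symbols, the distributionally robust two-stage objective and the ball-shaped ambiguity set used above take the form (our notation, not the paper's)

        \min_{x} \; \max_{p \in \mathcal{D}} \; \mathbb{E}_{A \sim p}\bigl[ c(x) + f_A(x) \bigr],
        \qquad \mathcal{D} = \{\, p : \ell(p, \hat{p}) \le r \,\},

    where \hat{p} is the central distribution, r the ball radius, and \ell either a Wasserstein metric or the L_infinity metric; taking r = 0 recovers the standard two-stage stochastic problem, while larger r gives increasingly conservative, robust-style objectives.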
    We obtain such an algorithm for various discrete-optimization problems; by complementing this via rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems, including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1)-factors of the guarantees known for the deterministic version of the problem.

    For distributionally robust optimization under an L_infinity ball, we consider a fractional relaxation of the problem and replace its objective function with a proxy function that is pointwise close to the true objective function (within a factor of 2). We then show that we can efficiently compute approximate subgradients of the proxy function, provided that we have an algorithm for the problem of computing the t worst scenarios under a given first-stage decision, for a given integer t. We can then approximately minimize the proxy function via a variant of the ellipsoid method, and thus obtain an approximate solution for the fractional relaxation of the distributionally robust problem. Complementing this via rounding algorithms with local guarantees, we obtain approximation algorithms for distributionally robust versions of various covering problems, including set cover, vertex cover, edge cover, and facility location, with guarantees that are within O(1)-factors of the guarantees known for their deterministic versions.
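    Two pieces of terminology above admit a direct reading (our formalization, not verbatim from the paper). The L_infinity ball is \mathcal{D} = \{\, p : \|p - \hat{p}\|_\infty \le r \,\}, i.e., every scenario's probability may deviate from its central value by at most r. And if h denotes the true fractional objective and g the proxy, "pointwise close within a factor of 2" means

        g(x) \le h(x) \le 2\, g(x) \quad \text{for every first-stage decision } x,

    so an \alpha-approximate minimizer of g is a 2\alpha-approximate minimizer of h.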

    Approximation Algorithms for Distributionally Robust Stochastic Optimization with Black-Box Distributions

    Two-stage stochastic optimization is a framework for modeling uncertainty, where we have a probability distribution over possible realizations of the data, called scenarios, and decisions are taken in two stages: we make first-stage decisions knowing only the underlying distribution and before a scenario is realized, and may take additional second-stage recourse actions after a scenario is realized. The goal is typically to minimize the total expected cost. A criticism of this model is that the underlying probability distribution is itself often imprecise. To address this, a versatile approach that has been proposed is the distributionally robust 2-stage model: given a collection of probability distributions, our goal now is to minimize the maximum expected total cost with respect to a distribution in this collection.

    We provide a framework for designing approximation algorithms in such settings when the collection is a ball around a central distribution and the central distribution is accessed only via a sampling black box. We first show that one can utilize the sample average approximation (SAA) method to reduce the problem to the case where the central distribution has polynomial-size support. We then show how to approximately solve a fractional relaxation of the SAA (i.e., polynomial-scenario central-distribution) problem. By complementing this via LP-rounding algorithms that provide local (i.e., per-scenario) approximation guarantees, we obtain the first approximation algorithms for the distributionally robust versions of a variety of discrete-optimization problems, including set cover, vertex cover, edge cover, facility location, and Steiner tree, with guarantees that are, except for set cover, within O(1)-factors of the guarantees known for the deterministic version of the problem.
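    The SAA step above has a simple operational reading: draw polynomially many samples from the black-box central distribution and replace it with the resulting empirical distribution. A minimal sketch, assuming only a zero-argument sampler (the function name and the use of collections.Counter are illustrative, not from the paper):

        from collections import Counter

        def empirical_central_distribution(sample_scenario, num_samples):
            """Sample-average approximation of a black-box distribution.

            sample_scenario: zero-argument callable returning one scenario
                (any hashable object) drawn from the black-box distribution.
            num_samples: number of i.i.d. draws; the SAA results described
                above show that a polynomially bounded number suffices.

            Returns a dict {scenario: empirical probability} whose support
            has size at most num_samples; this explicit, polynomial-size
            central distribution is then used in place of the true one.
            """
            counts = Counter(sample_scenario() for _ in range(num_samples))
            return {s: c / num_samples for s, c in counts.items()}

    The hard part, as the abstract notes, is then solving the resulting polynomial-scenario distributionally robust problem, which is where the fractional relaxation and LP-rounding come in.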

    Towards Distributed Two-Stage Stochastic Optimization

    The weighted vertex cover problem is concerned with selecting a subset of the vertices that covers a target set of edges with the objective of minimizing the total cost of the selected vertices. We consider a variant of this classic combinatorial optimization problem where the target edge set is not fully known; rather, it is characterized by a probability distribution. Adhering to the model of two-stage stochastic optimization, the execution is divided into two stages so that in the first stage, the decision maker selects some of the vertices based on the probabilistic forecast of the target edge set. Then, in the second stage, the edges in the target set are revealed and, in order to cover them, the decision maker can augment the vertex subset selected in the first stage with additional vertices. However, in the second stage, the vertex cost increases by some inflation factor, so the second-stage selection becomes more expensive. The current paper studies the two-stage stochastic vertex cover problem in the realm of distributed graph algorithms, where the decision-making process (in both stages) is distributed among the vertices of the graph. By combining the stochastic optimization toolbox with recent advances in distributed algorithms for weighted vertex cover, we develop an algorithm that runs in time O(log Δ / ε), sends O(m) messages in total, and guarantees to approximate the optimal solution within a (3 + ε)-ratio, where m is the number of edges in the graph, Δ is its maximum degree, and 0 < ε < 1 is a performance parameter.
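    In formulas, the two-stage stochastic vertex cover objective sketched above can be written as follows (standard notation, not quoted from the paper; c(v) are the vertex costs, \lambda the second-stage inflation factor, and E' the random target edge set):

        \min_{S_1 \subseteq V} \; \sum_{v \in S_1} c(v) \;+\; \mathbb{E}_{E'}\Bigl[ \min \Bigl\{ \lambda \sum_{v \in S_2} c(v) \;:\; S_1 \cup S_2 \text{ covers } E' \Bigr\} \Bigr].

    The distributed twist is that both S_1 and, after the scenario is revealed, S_2 must be computed by the vertices themselves using local communication.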

    Approximation algorithms for stochastic and risk-averse optimization

    We present improved approximation algorithms in stochastic optimization. We prove that the multi-stage stochastic versions of covering integer programs (such as set cover and vertex cover) admit essentially the same approximation algorithms as their standard (non-stochastic) counterparts; this improves upon work of Swamy & Shmoys, which shows an approximability that depends multiplicatively on the number of stages. We also present approximation algorithms for facility location and some of its variants in the 2-stage recourse model, improving on previous approximation guarantees. We give a 2.2975-approximation algorithm in the standard polynomial-scenario model and an algorithm with an expected per-scenario 2.4957-approximation guarantee, which is applicable to the more general black-box distribution model. (Comment: extension of a SODA'07 paper; to appear in SIAM Journal on Discrete Mathematics.)

    Prepare for the Expected Worst: Algorithms for Reconfigurable Resources Under Uncertainty

    In this paper we study how to optimally balance cheap inflexible resources with more expensive, reconfigurable resources despite uncertainty in the input problem. Specifically, we introduce the MinEMax model to study "build versus rent" problems. In our model, different scenarios appear independently. Before knowing which scenarios appear, we may build rigid resources that cannot be changed for different scenarios. Once we know which scenarios appear, we are allowed to rent reconfigurable but expensive resources to use across scenarios. Although computing the objective in our model might seem to require enumerating exponentially many possibilities, we show it is well estimated by a surrogate objective which is representable by a polynomial-size LP. In this surrogate objective we pay for each scenario only to the extent that it exceeds a certain threshold. Using this objective, we design algorithms that approximately optimally balance inflexible and reconfigurable resources for several NP-hard covering problems. For example, we study variants of minimum spanning and Steiner trees, minimum cuts, and facility location. Up to constants, our approximation guarantees match those of previously studied algorithms for demand-robust and stochastic two-stage models. Lastly, we demonstrate that our problem is sufficiently general to smoothly interpolate between previous demand-robust and stochastic two-stage problems.
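    The thresholded surrogate described above admits a compact form (our notation; the paper's exact normalization may differ): if scenario S appears with probability q_S and incurs cost \mathrm{cost}_S(x) under the built resources x, the surrogate charges

        \min_{\tau \ge 0} \; \Bigl( \tau + \sum_{S} q_S \, \bigl( \mathrm{cost}_S(x) - \tau \bigr)^{+} \Bigr),

    where (z)^+ = \max(z, 0); each scenario is paid for only to the extent that its cost exceeds the threshold \tau, and with polynomially many scenarios this is the kind of polynomial-size LP the abstract refers to.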