
    Prepare for the Expected Worst: Algorithms for Reconfigurable Resources Under Uncertainty

    In this paper we study how to optimally balance cheap, inflexible resources against more expensive, reconfigurable resources despite uncertainty in the input problem. Specifically, we introduce the MinEMax model to study "build versus rent" problems. In our model, different scenarios appear independently. Before knowing which scenarios appear, we may build rigid resources that cannot be changed across scenarios. Once the realized scenarios are known, we may rent reconfigurable but expensive resources to use across them. Although computing the objective in our model might seem to require enumerating exponentially many possibilities, we show it is well estimated by a surrogate objective that is representable by a polynomial-size LP. In this surrogate objective we pay for each scenario only to the extent that it exceeds a certain threshold. Using this objective we design algorithms that approximately optimally balance inflexible and reconfigurable resources for several NP-hard covering problems; for example, we study variants of minimum spanning and Steiner trees, minimum cuts, and facility location. Up to constants, our approximation guarantees match those of previously studied algorithms for demand-robust and stochastic two-stage models. Lastly, we demonstrate that our problem is sufficiently general to smoothly interpolate between previous demand-robust and stochastic two-stage problems.
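    The threshold-truncated surrogate described above can be illustrated with a minimal sketch. The function below is an assumption about the general shape of such an objective (pay a threshold, plus the expected excess of each scenario's cost over it, CVaR-style); the scenario costs, probabilities, and grid search are hypothetical and not taken from the paper.

```python
# Hedged sketch of a threshold-truncated surrogate objective:
# pay theta, plus the expected amount by which each scenario's
# cost exceeds theta. Inputs are illustrative, not from the paper.
def truncated_surrogate(costs, probs, theta):
    """theta plus expected excess of scenario costs over theta."""
    return theta + sum(p * max(0.0, c - theta) for c, p in zip(costs, probs))

# Hypothetical scenario costs and independent appearance probabilities.
costs = [10.0, 40.0, 25.0]
probs = [0.5, 0.2, 0.3]

# Minimizing over theta on a coarse grid (illustration only);
# the paper instead encodes this in a polynomial-size LP.
best_value, best_theta = min(
    (truncated_surrogate(costs, probs, float(t)), float(t)) for t in range(51)
)
```

    Because the surrogate is convex and piecewise-linear in the threshold, it drops into an LP directly, which is what makes the polynomial-size representation possible.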

    Regret Models and Preprocessing Techniques for Combinatorial Optimization under Uncertainty

    Ph.D. (Doctor of Philosophy) thesis.

    A study of distributionally robust mixed-integer programming with Wasserstein metric: on the value of incomplete data

    This study addresses a class of mixed-integer linear programming (MILP) problems that involve uncertainty in the objective function parameters. The parameters are assumed to form a random vector whose probability distribution can only be observed through a finite training data set. Unlike most related studies in the literature, we also consider uncertainty in the underlying data set. The data uncertainty is described by a set of linear constraints for each random sample, and the uncertainty in the distribution (for a fixed realization of the data) is defined using a type-1 Wasserstein ball centered at the empirical distribution of the data. The overall problem is formulated as a three-level distributionally robust optimization (DRO) problem. First, we prove that the three-level problem admits a single-level MILP reformulation if the class of loss functions is restricted to biaffine functions. Second, we show that for several particular forms of data uncertainty, the outlined problem can be solved reasonably fast by leveraging the nominal MILP problem. Finally, we conduct a computational study in which the out-of-sample performance of our model and the computational complexity of the proposed MILP reformulation are explored numerically for several application domains.
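    To give a feel for type-1 Wasserstein ambiguity, the sketch below uses a well-known special case rather than the biaffine class studied in the abstract: for a purely linear loss c·xi over a Wasserstein-1 ball of radius eps (Euclidean ground metric), the worst-case expectation has the closed form "empirical mean + eps times the dual norm of c". The data and radius are hypothetical; the paper's biaffine setting requires the full MILP reformulation instead.

```python
import math

# Hedged toy: worst-case expectation of a *linear* loss c.xi over a
# type-1 Wasserstein ball around the empirical distribution. This is a
# textbook special case, not the paper's biaffine MILP reformulation.
def worst_case_linear(c, samples, eps):
    n = len(samples)
    empirical = sum(sum(ci * xi for ci, xi in zip(c, s)) for s in samples) / n
    dual_norm = math.sqrt(sum(ci * ci for ci in c))  # Euclidean norm is self-dual
    return empirical + eps * dual_norm
```

    Setting eps = 0 recovers the plain sample-average (stochastic) objective, which is why the radius directly controls the degree of conservatism.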

    Target-based Distributionally Robust Minimum Spanning Tree Problem

    Owing to its broad practical applications, the minimum spanning tree (MST) problem and its many variants have been studied extensively over the past decades, and a host of efficient exact and heuristic algorithms have been proposed. Motivated by realistic applications, the minimum spanning tree problem in stochastic networks has also attracted considerable attention, and stochastic and robust spanning tree models and related algorithms continue to be developed. However, existing approaches are either too restrictive about the distributions of the edge-weight random variables or computationally intractable, especially in large-scale networks. In this paper, we introduce a target-based distributionally robust optimization framework for the minimum spanning tree problem in stochastic graphs where the probability distribution of the edge weights is unknown, but some statistical information can be utilized to keep the optimal solution from being too conservative. We propose two exact algorithms to solve it, based on the Benders decomposition framework and on a modification of a classical greedy MST algorithm (Prim's algorithm), respectively. Compared with the NP-hard stochastic and robust spanning tree problems, the proposed target-based distributionally robust minimum spanning tree problem enjoys more favorable algorithmic properties and robustness when faced with uncertainty in the input data.
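    For reference, the classical greedy algorithm that the second method modifies is Prim's algorithm, sketched below. The adjacency-dict graph format and deterministic weights are assumptions for illustration; the paper's target-based robust weighting is not reproduced here.

```python
import heapq

# Hedged sketch of classical Prim's algorithm (the algorithm the paper
# modifies). Graph format is a hypothetical adjacency dict:
# {u: [(weight, v), ...], ...} with deterministic weights.
def prim_mst(graph, root):
    visited = {root}
    frontier = list(graph[root])     # candidate edges out of the tree
    heapq.heapify(frontier)
    total, edges = 0.0, []
    while frontier and len(visited) < len(graph):
        w, v = heapq.heappop(frontier)
        if v in visited:             # stale edge, endpoint already in tree
            continue
        visited.add(v)
        total += w
        edges.append((w, v))
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(frontier, edge)
    return total, edges
```

    A target-based variant would replace the deterministic weight comparisons with a robust criterion derived from the available statistical information, while keeping this greedy tree-growing structure intact.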

    Data-driven Distributionally Robust Optimization over Time

    Stochastic Optimization (SO) is a classical approach for optimization under uncertainty that typically requires knowledge of the probability distribution of the uncertain parameters. As the latter is often unknown, Distributionally Robust Optimization (DRO) provides a strong alternative that determines the best guaranteed solution over a set of distributions (the ambiguity set). In this work, we present an approach for DRO over time that uses online learning and scenario observations arriving as a data stream to learn more about the uncertainty. Our robust solutions adapt over time and reduce the cost of protection as the ambiguity shrinks. For various kinds of ambiguity sets, the robust solutions converge to the SO solution. Our algorithm achieves the optimization and learning goals without solving the DRO problem exactly at any step. We also provide a regret bound for the quality of the online strategy, which converges at a rate of $\mathcal{O}(\log T / \sqrt{T})$, where $T$ is the number of iterations. Furthermore, we illustrate the effectiveness of our procedure with numerical experiments on mixed-integer optimization instances from popular benchmark libraries and give practical examples stemming from telecommunications and routing. Our algorithm is able to solve the DRO-over-time problem significantly faster than standard reformulations.
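    The "shrinking ambiguity" idea can be illustrated with a deliberately simple toy, not the paper's actual algorithm: as scenario observations stream in, an ambiguity radius decays (here via a hypothetical eps_t = eps0 / sqrt(t) schedule), so the robust decision drifts toward the plain sample-average (SO) decision. The 1-D over-provisioning rule below is an assumption chosen only to make the convergence visible.

```python
import math

# Hedged toy of DRO over time: a 1-D decision that hedges the empirical
# mean by the current ambiguity radius. The decay schedule and the
# hedging rule are illustrative assumptions, not the paper's method.
def robust_decision(samples, eps):
    """Over-provision the empirical mean by the ambiguity radius eps."""
    mean = sum(samples) / len(samples)
    return mean + eps

def stream(observations, eps0=5.0):
    decisions, seen = [], []
    for t, obs in enumerate(observations, start=1):
        seen.append(obs)
        decisions.append(robust_decision(seen, eps0 / math.sqrt(t)))
    return decisions
```

    With a stationary stream, the decisions decrease monotonically toward the empirical mean, mirroring the abstract's claim that the cost of protection falls as the ambiguity set shrinks.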