
    Convexification of Queueing Formulas by Mixed-Integer Second-Order Cone Programming: An Application to a Discrete Location Problem with Congestion

    Mixed-integer second-order cone programs (MISOCPs) form a tractable class of mixed-integer convex programs that can be solved very efficiently thanks to recent advances in optimization solvers. Our paper bridges the gap between modeling a class of optimization problems and using MISOCP solvers: we show how various performance metrics of M/G/1 queues can be modeled by different MISOCPs. To motivate the method practically, it is first applied to a challenging stochastic location problem with congestion, a model broadly used to design socially optimal service networks. Four different MISOCP formulations are developed and compared on sets of benchmark test problems. The new formulations efficiently solve large test instances that cannot be solved by the best existing method. We then show the general applicability of our method to similar optimization problems that use queue-theoretic performance measures to address customer satisfaction and service quality.
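    To make the convexification idea concrete, here is a minimal sketch (using cvxpy) of how a steady-state congestion term can enter a discrete location model as a second-order cone constraint. It uses the simpler M/M/1 queue-length bound w >= lambda/(mu - lambda), not the paper's M/G/1 metrics or its four formulations; all data, variable names, and the solver note are illustrative assumptions. The key step is that w*(mu - lambda) >= lambda rewrites as (mu - lambda)*(w + 1) >= mu, a hyperbolic constraint with a standard second-order cone representation.

```python
# Illustrative sketch only, not the paper's formulations.
import cvxpy as cp
import numpy as np

np.random.seed(0)
n_customers, n_sites = 8, 3
demand = np.random.uniform(1.0, 3.0, n_customers)  # customer arrival rates
mu = np.array([15.0, 12.0, 18.0])                  # service rate per site

x = cp.Variable((n_customers, n_sites), boolean=True)  # assignments
y = cp.Variable(n_sites, boolean=True)                 # open/close sites
w = cp.Variable(n_sites, nonneg=True)                  # queue-length bound

cons = [cp.sum(x, axis=1) == 1,             # every customer assigned once
        x <= cp.vstack([y] * n_customers)]  # only to open sites
for j in range(n_sites):
    lam_j = demand @ x[:, j]    # arrival rate aggregated at site j
    u_j = mu[j] - lam_j         # slack capacity, must stay positive
    cons.append(u_j >= 1e-3)
    # w_j >= lam_j / (mu_j - lam_j)  <=>  u_j * (w_j + 1) >= mu_j,
    # a hyperbolic constraint with the SOC representation below:
    cons.append(cp.norm(cp.hstack([2 * np.sqrt(mu[j]),
                                   u_j - w[j] - 1])) <= u_j + w[j] + 1)

prob = cp.Problem(cp.Minimize(cp.sum(w) + cp.sum(y)), cons)
prob.solve()  # needs a MISOCP-capable solver, e.g. MOSEK or SCIP
```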

    Prescriptive PCA: Dimensionality Reduction for Two-stage Stochastic Optimization

    In this paper, we consider the alignment between an upstream dimensionality reduction task of learning a low-dimensional representation of a set of high-dimensional data and a downstream optimization task of solving a stochastic program parameterized by said representation. In this setting, standard dimensionality reduction methods (e.g., principal component analysis) may not perform well: they aim to maximize the amount of information retained in the representation and do not generally reflect the importance of that information in the downstream optimization problem. To address this problem, we develop a prescriptive dimensionality reduction framework that aims to minimize the degree of suboptimality in the optimization phase. For the case where the downstream stochastic optimization problem has an expected-value objective, we show that prescriptive dimensionality reduction can be performed by solving a distributionally robust optimization problem, which admits a semidefinite programming relaxation. Computational experiments based on a warehouse transshipment problem and a vehicle repositioning problem show that our approach significantly outperforms principal component analysis on both real and synthetic data sets.
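    The misalignment the abstract points to is easy to reproduce. The toy sketch below (invented data and a plain plug-in decision rule, not the paper's distributionally robust SDP method) builds a two-dimensional demand signal in which the high-variance coordinate is irrelevant to a downstream newsvendor decision; projecting on the top principal component then costs far more than projecting on the decision-relevant direction.

```python
# Toy demonstration only: why variance-maximizing projections can hurt
# a downstream decision. All data and costs are invented for the demo.
import numpy as np

rng = np.random.default_rng(1)
Z = rng.normal(size=(5000, 2)) * np.array([10.0, 1.0])  # var: dim0 >> dim1
Z[:, 1] += 3.0   # dim1 is the (low-variance) decision-relevant demand

def downstream_cost(w):
    """Project on unit vector w, then make a newsvendor decision
    (underage cost 5, overage cost 1 -> 5/6 quantile) from the
    reconstructed demand coordinate."""
    rep = Z @ w                  # 1-D representation
    demand_hat = rep * w[1]      # reconstruct the demand coordinate
    q = np.quantile(demand_hat, 5 / 6)
    under = np.maximum(Z[:, 1] - q, 0)
    over = np.maximum(q - Z[:, 1], 0)
    return (5 * under + 1 * over).mean()

pca_dir = np.array([1.0, 0.0])           # top principal component
prescriptive_dir = np.array([0.0, 1.0])  # decision-aware direction
print(downstream_cost(pca_dir), downstream_cost(prescriptive_dir))
```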

    Quality vs. Quantity of Data in Contextual Decision-Making: Exact Analysis under Newsvendor Loss

    When building datasets, one must invest time, money, and energy either to aggregate more data or to improve their quality. The most common practice favors quantity over quality without necessarily quantifying the resulting trade-off. In this work, we study data-driven contextual decision-making and the performance implications of the quality and quantity of data. We focus on contextual decision-making with a newsvendor loss; this is the loss of a central capacity planning problem in operations research, and it is also the loss associated with quantile regression. We consider a model in which outcomes observed in similar contexts have similar distributions and analyze the performance of a classical class of kernel policies that weight data according to their similarity in a contextual space. We develop a series of results that lead to an exact characterization of the worst-case expected regret of these policies, valid for any sample size and any observed contexts. The model is flexible and captures the case of partially observed contexts. This exact analysis unveils new structural insights into the learning behavior of uniform kernel methods: (i) the specialized analysis yields much sharper performance quantification than state-of-the-art general-purpose bounds; (ii) we show an important non-monotonicity of performance as a function of data size that previous bounds do not capture; and (iii) we show that in some regimes a small increase in data quality can dramatically reduce the number of samples required to reach a performance target. All in all, our work demonstrates that it is possible to precisely quantify the interplay of data quality, data quantity, and performance in a central problem class. It also highlights the need for problem-specific bounds to understand the trade-offs at play.
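    For concreteness, here is a minimal sketch of the kind of uniform-kernel policy the analysis covers: past observations whose contexts fall within a bandwidth of the query context are pooled, and the order quantity is the critical-ratio quantile of that local sample. The costs, bandwidth, and data below are assumptions for illustration, not the paper's settings.

```python
# Minimal uniform-kernel newsvendor policy; parameters are assumptions.
import numpy as np

def kernel_newsvendor(x_query, X, D, b=4.0, h=1.0, bandwidth=0.5):
    """Order quantity for context x_query given past contexts X and
    demands D, using a uniform kernel of the given bandwidth."""
    mask = np.linalg.norm(X - x_query, axis=1) <= bandwidth
    if not mask.any():                # no comparable data: fall back
        mask = np.ones(len(D), bool)  # to the unweighted sample
    # newsvendor optimum = b/(b+h) quantile of the local demand sample
    return np.quantile(D[mask], b / (b + h))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))            # observed contexts
D = 10 * X.sum(axis=1) + rng.normal(0, 1, 200)  # context-driven demand
print(kernel_newsvendor(np.array([0.5, 0.5]), X, D))
```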

    A Machine Learning Approach to Two-Stage Adaptive Robust Optimization

    We propose a machine learning approach to solving two-stage linear adaptive robust optimization (ARO) problems with binary here-and-now variables and polyhedral uncertainty sets. We encode the optimal here-and-now decisions, the worst-case scenarios associated with them, and the optimal wait-and-see decisions into what we call the strategy. We solve multiple similar ARO instances in advance using the column-and-constraint generation algorithm and extract the optimal strategies to generate a training set. We then train a machine learning model that predicts high-quality strategies for the here-and-now decisions, the associated worst-case scenarios, and the wait-and-see decisions. We also introduce an algorithm to reduce the number of distinct target classes the machine learning model must be trained on. We apply the proposed approach to facility location, multi-item inventory control, and unit commitment problems. Our approach solves ARO problems drastically faster than state-of-the-art algorithms while maintaining high accuracy.
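    A schematic of the offline/online split described above, with the exact ARO solver (e.g. column-and-constraint generation) left as a placeholder `solve_aro_exactly` and a nearest-neighbor classifier standing in for the paper's machine learning model; all names and the encoding are hypothetical, not the paper's implementation.

```python
# Conceptual sketch of the offline training / online prediction split.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_strategy_model(instances, solve_aro_exactly):
    """Offline: solve instances exactly, deduplicate the optimal
    (here-and-now, worst-case-scenario, wait-and-see) strategies
    (returned here as tuples) into classes, and fit a classifier
    from instance parameters to strategy ids."""
    strategies, labels = [], []
    for theta in instances:
        strat = solve_aro_exactly(theta)   # expensive exact solve
        if strat not in strategies:
            strategies.append(strat)
        labels.append(strategies.index(strat))
    model = KNeighborsClassifier(n_neighbors=5).fit(
        np.asarray(instances), labels)
    return model, strategies

def predict_strategy(model, strategies, theta):
    """Online: a fast prediction replaces the exact solve; the predicted
    here-and-now decision can then be checked or repaired cheaply."""
    return strategies[model.predict(np.asarray(theta)[None, :])[0]]
```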

    A Robust Optimization of Capacity Allocation Policies in the Third-Party Warehouse

    We study the capacity allocation policies of a third-party warehouse center that offers several service levels at different prices under a fixed capacity, from a revenue management perspective. For the single-period setting, we use three robust methods, the absolute robust, deviation robust, and relative robust methods, to maximize total revenue, and we give numerical examples to verify their practical applicability. For the multiperiod setting, where demand is uncertain, we propose a stochastic model for the warehouse's multiperiod revenue management problem and apply a novel robust optimization technique to maximize total revenue. Again, numerical examples verify the practical applicability of our method.
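    To illustrate the three single-period criteria, the sketch below picks among candidate allocation policies given a matrix of scenario revenues (the numbers are invented): the absolute robust policy maximizes worst-case revenue, the deviation robust policy minimizes maximum regret against the scenario-wise best, and the relative robust policy minimizes that regret as a fraction of the best.

```python
# The three classical robust criteria on an invented revenue table.
import numpy as np

# revenue[p, s]: revenue of allocation policy p under demand scenario s
revenue = np.array([[ 90., 120., 150.],
                    [110., 115., 120.],
                    [ 60., 130., 180.]])
best_per_scenario = revenue.max(axis=0)
regret = best_per_scenario - revenue

absolute = revenue.min(axis=1).argmax()   # max worst-case revenue
deviation = regret.max(axis=1).argmin()   # min max regret
relative = (regret / best_per_scenario).max(axis=1).argmin()
print(absolute, deviation, relative)      # -> 1 0 0 for this table
```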