    A Relaxed FPTAS for Chance-Constrained Knapsack

    The stochastic knapsack problem is a stochastic version of the well-known deterministic knapsack problem, in which some of the input values are random variables. There are several variants of the stochastic problem. In this paper we concentrate on the chance-constrained variant, where item values are deterministic and item sizes are stochastic. The goal is to find a maximum-value allocation subject to the constraint that the overflow probability is at most a given value. Previous work showed a PTAS for the problem for various distributions (Poisson, Exponential, Bernoulli and Normal). Some of these algorithms strictly respect the constraint, while others relax it by a factor of (1+epsilon). All of them use Omega(n^{1/epsilon}) time. A very recent work showed an "almost FPTAS" algorithm for Bernoulli distributions with O(poly(n) * quasipoly(1/epsilon)) time. In this paper we present an FPTAS for normal distributions whose solution satisfies the chance constraint in a relaxed sense. The normal distribution is particularly important because, by the Berry-Esseen theorem, an algorithm solving the normal case also solves, under mild conditions, the case of arbitrary independent distributions. To the best of our knowledge, this is the first (relaxed or non-relaxed) FPTAS for the problem. In fact, our algorithm runs in poly(n/epsilon) time. We achieve the FPTAS by a delicate combination of previous techniques plus a new alternative solution for the non-heavy elements, based on a non-convex program with a simple structure and an O(n^2 log(n/epsilon)) running time. We believe this part is also interesting in its own right.
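
    The normal case is tractable precisely because the chance constraint collapses to a deterministic one: for independent item sizes X_i ~ N(mu_i, sigma_i^2), the total size of a set S is N(sum mu_i, sum sigma_i^2), so the overflow probability is at most p exactly when sum mu_i + Phi^{-1}(1-p) * sqrt(sum sigma_i^2) <= capacity. A minimal Python sketch of this reduction, paired with a brute-force reference solver for tiny instances (this is not the paper's FPTAS, and all names are illustrative):

        from itertools import combinations
        from math import sqrt
        from statistics import NormalDist

        def feasible(S, mu, sigma2, capacity, p):
            # Chance constraint for independent normal sizes: the total size of S
            # is N(sum mu_i, sum sigma_i^2), so overflow probability <= p iff
            # mean + z * stddev <= capacity, with z = Phi^{-1}(1 - p).
            z = NormalDist().inv_cdf(1 - p)
            return sum(mu[i] for i in S) + z * sqrt(sum(sigma2[i] for i in S)) <= capacity

        def brute_force(values, mu, sigma2, capacity, p):
            # Exponential-time reference solver; only for sanity checks on tiny n.
            n = len(values)
            candidates = (S for r in range(n + 1) for S in combinations(range(n), r))
            feas = [S for S in candidates if feasible(S, mu, sigma2, capacity, p)]
            return max(feas, key=lambda S: sum(values[i] for i in S))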

    Algorithm Engineering in Robust Optimization

    Robust optimization is a young and emerging field of research that has received considerably increased interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits the field of robust optimization very well and yields a rewarding new perspective on both the current state of research and open research directions. To this end, we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on comparing concepts is only just beginning. What is still missing in many papers on robustness is the feedback loop that carries experimental results back into the design phase.

    Distributionally robust views on queues and related stochastic models

    This dissertation explores distribution-free methods for stochastic models. Traditional approaches operate on the premise of complete knowledge about the probability distributions of the underlying random variables that govern these models. In contrast, this work adopts a distribution-free perspective, assuming only partial knowledge of these distributions, often limited to generalized moment information. Distributionally robust analysis seeks to determine the worst-case model performance. It involves optimization over the set of probability distributions that comply with this partial information, a task tantamount to solving a semi-infinite linear program. To address such an optimization problem, a solution approach based on the concept of weak duality is used. Through the proposed weak-duality argument, distribution-free bounds are derived for a wide range of stochastic models. Further, these bounds are applied to various distributionally robust stochastic programs and used to analyze extremal queueing models, central themes in applied probability and mathematical optimization.
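
    A canonical instance of this weak-duality argument (a textbook example chosen here for illustration; the dissertation treats far more general moment constraints) is the worst-case tail probability given only a mean and a variance:

        \[
          \sup_{\mathbb{P}} \ \mathbb{P}(X \ge t)
          \quad\text{s.t.}\quad
          \mathbb{E}[X] = \mu, \qquad \mathbb{E}[X^2] = \mu^2 + \sigma^2 .
        \]

    Any quadratic $q(x) = a + bx + cx^2$ that dominates the indicator $\mathbf{1}\{x \ge t\}$ pointwise certifies, by taking expectations, the upper bound $a + b\mu + c(\mu^2 + \sigma^2)$; minimizing over such dual certificates recovers Cantelli's inequality

        \[
          \mathbb{P}(X \ge t) \le \frac{\sigma^2}{\sigma^2 + (t - \mu)^2}, \qquad t > \mu,
        \]

    which is attained by a two-point distribution, so the bound is tight.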

    Models, algorithms and performance analysis for adaptive operating room scheduling

    The complex optimisation problems arising in the scheduling of operating rooms have received considerable attention in recent scientific literature because of their impact on costs, revenues and patient health. To an important extent, the complexity stems from the stochastic nature of the problem. In practice, this stochastic nature often leads to schedule adaptations on the day of schedule execution. While operating room performance is thus strongly affected by such adaptations, decision-making on adaptations is hardly addressed in the scientific literature. Building on previous literature on adaptive scheduling, we develop adaptive operating room scheduling models and problems, and analyse the performance of the corresponding adaptive scheduling policies. As previously proposed (fully) adaptive scheduling models and policies are infeasible in operating room scheduling practice, we extend adaptive scheduling theory by introducing the novel concept of committing. Moreover, the core of the proposed adaptive policies with committing is formed by a new, exact, pseudo-polynomial algorithm to solve a general class of stochastic knapsack problems. Using these theoretical …
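
    For context on "exact, pseudo-polynomial": the classic dynamic program for the deterministic 0-1 knapsack is the standard template that such stochastic-knapsack algorithms extend with distributional state. The sketch below is that textbook baseline, not the dissertation's algorithm:

        def knapsack_dp(values, weights, capacity):
            # Classic O(n * capacity) dynamic program for 0-1 knapsack.
            # Pseudo-polynomial: the running time scales with the numeric
            # value of `capacity`, not with its bit-length.
            best = [0] * (capacity + 1)        # best[c] = max value within budget c
            for v, w in zip(values, weights):
                for c in range(capacity, w - 1, -1):  # descending: each item used once
                    best[c] = max(best[c], best[c - w] + v)
            return best[capacity]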

    Decision-Focused Learning: Foundations, State of the Art, Benchmark and Future Opportunities

    Decision-focused learning (DFL) is an emerging paradigm in machine learning which trains a model to optimize decisions, integrating prediction and optimization in an end-to-end system. This paradigm holds the promise of revolutionizing decision-making in many real-world applications that operate under uncertainty, where the estimation of unknown parameters within decision models often becomes a substantial roadblock. This paper presents a comprehensive review of DFL. It provides an in-depth analysis of the various techniques devised to integrate machine learning and optimization models, introduces a taxonomy of DFL methods distinguished by their unique characteristics, and conducts an extensive empirical evaluation of these methods, proposing suitable benchmark datasets and tasks for DFL. Finally, the study provides valuable insights into current and potential future avenues in DFL research.
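
    The integration the survey describes can be made concrete through decision regret: a prediction is judged by the extra true cost of the decision it induces, not by its prediction error. A minimal sketch under our own toy assumptions (an enumerable feasible set and a linear objective); regret is piecewise constant in the predictions, which is exactly why DFL methods introduce differentiable surrogates for training:

        def solve(costs, feasible_sets):
            # Decision oracle: pick the feasible set with minimum total cost.
            return min(feasible_sets, key=lambda S: sum(costs[i] for i in S))

        def regret(pred_costs, true_costs, feasible_sets):
            # Extra true cost paid for deciding on predictions rather than truth.
            s_hat = solve(pred_costs, feasible_sets)
            s_star = solve(true_costs, feasible_sets)
            return (sum(true_costs[i] for i in s_hat)
                    - sum(true_costs[i] for i in s_star))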

    Integer Programming Approaches for Some Non-convex and Stochastic Optimization Problems

    In this dissertation we study several non-convex and stochastic optimization problems. The common theme is the use of mixed-integer programming (MIP) techniques, including valid inequalities and reformulation, to solve these problems. We first study a strategic capacity planning model which captures the trade-off between the incentive to delay capacity installation to wait for improved technology and the need for some capacity to be installed to meet current demands. This problem is naturally formulated as a MIP with a bilinear objective. We develop several linear MIP formulations, along with classes of strong valid inequalities. We also present a specialized branch-and-cut algorithm to solve a compact concave formulation. Computational results indicate that these formulations can be used to solve large-scale instances. We next study methods for optimization with joint probabilistic constraints. These problems are challenging because evaluating solution feasibility requires multidimensional integration and the feasible region is not convex. We propose and analyze a Monte Carlo sampling scheme to simplify the probabilistic structure of such problems. Computational tests of the approach indicate that it can yield good feasible solutions and reasonable bounds on their quality. Next, we study a MIP formulation of the non-convex sample approximation problem. We obtain two strengthened formulations. As a byproduct of this analysis, we obtain new results for the previously studied mixing set, subject to an additional knapsack inequality. Computational results indicate that large-scale instances can be solved using the strengthened formulations. Finally, we study optimization problems with stochastic dominance constraints. A stochastic dominance constraint states that a random outcome which depends on the decision variables should stochastically dominate a given random variable. We present new formulations for both first- and second-order stochastic dominance which are significantly more compact than existing formulations. Computational tests illustrate the benefits of the new formulations.
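
    The sampling idea for joint probabilistic constraints can be stated concretely: declare x feasible when the constraint system holds in all but a fraction alpha of N sampled scenarios, replacing the multidimensional integral by an empirical count. A minimal numpy sketch under an assumed toy scenario model (our illustration of the general scheme, not the dissertation's exact formulation):

        import numpy as np

        def sample_approx_feasible(x, b, alpha, n_scenarios=10_000, seed=0):
            # Empirical version of the joint chance constraint
            # P(A(xi) @ x <= b) >= 1 - alpha: a scenario counts as violated
            # if ANY row of the sampled system fails (hence "joint").
            rng = np.random.default_rng(seed)
            violations = 0
            for _ in range(n_scenarios):
                A = rng.normal(1.0, 0.2, size=(len(b), len(x)))  # toy scenario model
                if np.any(A @ x > b):
                    violations += 1
            return violations / n_scenarios <= alpha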