
    Online Multistage Subset Maximization Problems

    Numerous combinatorial optimization problems (knapsack, maximum-weight matching, etc.) can be expressed as subset maximization problems: One is given a ground set $N=\{1,\dots,n\}$, a collection $F \subseteq 2^N$ of subsets thereof such that the empty set is in $F$, and an objective (profit) function $p: F \to \mathbb{R}_+$. The task is to choose a set $S \in F$ that maximizes $p(S)$. We consider the multistage version (Eisenstat et al., Gupta et al., both ICALP 2014) of such problems: The profit function $p_t$ (and possibly the set of feasible solutions $F_t$) may change over time. Since in many applications changing the solution is costly, the task becomes to find a sequence of solutions that optimizes the trade-off between good per-time solutions and stable solutions, taking into account an additional similarity bonus. As the similarity measure for two consecutive solutions, we consider either the size of their intersection or $n$ minus the Hamming distance between their characteristic vectors. We study multistage subset maximization problems in the online setting, that is, $p_t$ (along with possibly $F_t$) arrives online, one stage at a time, and upon such an arrival the online algorithm has to output the corresponding solution without knowledge of the future. We develop general techniques for online multistage subset maximization and thereby characterize those models (given by the type of data evolution and the type of similarity measure) that admit a constant-competitive online algorithm. When no constant competitive ratio is possible, we employ lookahead to circumvent this issue. When a constant competitive ratio is possible, we provide almost matching lower and upper bounds on the best achievable one.
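    As a rough illustration of the objective described above, the following Python sketch evaluates a candidate sequence of solutions under either similarity measure. The function names (multistage_value, similarity), the toy data, and the uniform weighting of the similarity bonus against the per-stage profits are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (illustrative, not from the paper): evaluating the multistage
# objective for a sequence of solutions S_1, ..., S_T over ground set {0, ..., n-1}.
# The equal weighting of profit and similarity bonus is a simplifying assumption.

def similarity(prev, curr, n, measure="intersection"):
    """Similarity bonus between two consecutive solutions (sets of elements)."""
    if measure == "intersection":
        return len(prev & curr)
    # "hamming": n minus the Hamming distance of the characteristic vectors,
    # i.e. the number of positions on which the two solutions agree.
    return n - len(prev ^ curr)

def multistage_value(solutions, profits, n, measure="intersection"):
    """Sum of per-stage profits p_t(S_t) plus similarity bonuses between consecutive stages.

    solutions: list of feasible sets S_t (each a Python set of elements)
    profits:   list of profit functions p_t, one per stage
    """
    value = sum(p(S) for p, S in zip(profits, solutions))
    value += sum(
        similarity(solutions[t - 1], solutions[t], n, measure)
        for t in range(1, len(solutions))
    )
    return value

# Toy usage: n = 3, two stages, profit = sum of fixed per-stage element weights.
weights = [[5, 1, 2], [1, 4, 2]]
profits = [lambda S, w=w: sum(w[i] for i in S) for w in weights]
print(multistage_value([{0, 2}, {1, 2}], profits, n=3))  # 7 + 6 + 1 = 14
```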

    A Practical Guide to Robust Optimization

    Robust optimization is a young and active research field that has been mainly developed in the last 15 years. Robust optimization is very useful for practice, since it is tailored to the information at hand, and it leads to computationally tractable formulations. It is therefore remarkable that real-life applications of robust optimization are still lagging behind; there is much more potential for real-life applications than has been exploited hitherto. The aim of this paper is to help practitioners to understand robust optimization and to successfully apply it in practice. We provide a brief introduction to robust optimization, and also describe important do's and don'ts for using it in practice. We use many small examples to illustrate our discussion.
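    As a toy illustration (not taken from the paper), the sketch below shows one of the simplest robust counterparts: a single linear constraint $a^\top x \le b$ whose coefficients lie in a box; with nonnegative variables, the worst case is attained at the upper end of each box, so robust feasibility reduces to one deterministic inequality. All names and the numerical data are illustrative assumptions.

```python
# Minimal sketch (illustrative, not from the paper): robust counterpart of a
# single linear constraint a^T x <= b when each coefficient a_i lies in the box
# [abar_i - delta_i, abar_i + delta_i] and x >= 0.  With nonnegative x the worst
# case is a_i = abar_i + delta_i, so the robust constraint is (abar + delta)^T x <= b.
import numpy as np

def robust_box_feasible(x, abar, delta, b):
    """Check a^T x <= b for every a in the box uncertainty set (assumes x >= 0)."""
    x = np.asarray(x, dtype=float)
    worst_case_lhs = (np.asarray(abar) + np.asarray(delta)) @ x
    return worst_case_lhs <= b

# A solution feasible for the nominal data can fail once perturbations are allowed:
x = [1.0, 1.0]
print(robust_box_feasible(x, abar=[1.0, 2.0], delta=[0.0, 0.0], b=3.0))  # True (nominal only)
print(robust_box_feasible(x, abar=[1.0, 2.0], delta=[0.2, 0.3], b=3.0))  # False (robust check)
```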

    Changing Bases: Multistage Optimization for Matroids and Matchings

    This paper is motivated by the fact that many systems need to be maintained continually while the underlying costs change over time. The challenge is to continually maintain near-optimal solutions to the underlying optimization problems, without creating too much churn in the solution itself. We model this as a multistage combinatorial optimization problem where the input is a sequence of cost functions (one for each time step); while we can change the solution from step to step, we incur an additional cost for every such change. We study the multistage matroid maintenance problem, where we need to maintain a base of a matroid in each time step under the changing cost functions and acquisition costs for adding new elements. The online version of this problem generalizes online paging. E.g., given a graph, we need to maintain a spanning tree $T_t$ at each step: we pay $c_t(T_t)$ for the cost of the tree at time $t$, and also $|T_t \setminus T_{t-1}|$ for the number of edges changed at this step. Our main result is an $O(\log m \log r)$-approximation, where $m$ is the number of elements/edges and $r$ is the rank of the matroid. We also give an $O(\log m)$ approximation for the offline version of the problem. These bounds hold when the acquisition costs are non-uniform, in which case both these results are the best possible unless P=NP. We also study the perfect matching version of the problem, where we must maintain a perfect matching at each step under changing cost functions and costs for adding new elements. Surprisingly, the hardness drastically increases: for any constant $\epsilon>0$, there is no $O(n^{1-\epsilon})$-approximation to the multistage matching maintenance problem, even in the offline case.
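    The following sketch (illustrative only, not the paper's algorithm) evaluates the multistage spanning-tree objective for a given sequence of trees, making the trade-off between per-step tree cost and churn concrete. Uniform acquisition costs of 1 per newly added edge are a simplifying assumption here; the paper also treats non-uniform acquisition costs.

```python
# Minimal sketch (illustrative): total multistage cost of a tree sequence.
# Each tree is a set of edges (frozensets of endpoints); cost_fns[t] maps an
# edge to its cost at step t.  Acquisition cost is taken as uniform (assumption).

def multistage_tree_cost(trees, cost_fns, acquisition_cost=1.0):
    """Sum of per-step tree costs c_t(T_t) plus acquisition costs for edges in T_t but not T_{t-1}."""
    total = 0.0
    prev = None
    for T, c in zip(trees, cost_fns):
        total += sum(c(e) for e in T)                   # c_t(T_t)
        if prev is not None:
            total += acquisition_cost * len(T - prev)   # newly acquired edges
        prev = T
    return total

# Toy usage on a triangle graph with two time steps and shifting edge costs.
e01, e12, e02 = frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})
costs = [{e01: 1, e12: 1, e02: 5}, {e01: 5, e12: 1, e02: 1}]
cost_fns = [lambda e, c=c: c[e] for c in costs]
stable = [{e01, e12}, {e01, e12}]   # keep the same tree: 2 + 6 + 0 = 8
churny = [{e01, e12}, {e12, e02}]   # re-optimize each step: 2 + 2 + 1 = 5
print(multistage_tree_cost(stable, cost_fns), multistage_tree_cost(churny, cost_fns))
```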

    K-Adaptability in Two-Stage Distributionally Robust Binary Programming

    We propose to approximate two-stage distributionally robust programs with binary recourse decisions by their associated K-adaptability problems, which pre-select K candidate second-stage policies here-and-now and implement the best of these policies once the uncertain parameters have been observed. We analyze the approximation quality and the computational complexity of the K-adaptability problem, and we derive explicit mixed-integer linear programming reformulations. We also provide efficient procedures for bounding the probabilities with which each of the K second-stage policies is selected.
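    A toy sketch of the resulting decision rule may help: with K policies fixed here-and-now, the cheapest of the K is implemented for each observed scenario, and the distributionally robust objective is the worst expected recourse cost over an ambiguity set of distributions. The finite ambiguity set, the omission of a first-stage cost, and all names below are illustrative assumptions; this is not the paper's mixed-integer reformulation.

```python
# Minimal sketch (illustrative assumptions throughout): evaluating a fixed
# K-adaptability solution.  First-stage costs are omitted for brevity.

def k_adaptability_objective(policies, scenarios, recourse_cost, distributions):
    """Worst-case (over distributions) expectation of the best-of-K recourse cost.

    policies:      list of K candidate second-stage decisions
    scenarios:     list of scenario realizations
    recourse_cost: function (policy, scenario) -> cost
    distributions: iterable of probability vectors over the scenarios (ambiguity set)
    """
    # For each scenario, the decision maker implements the cheapest of the K policies.
    best_cost = [min(recourse_cost(y, xi) for y in policies) for xi in scenarios]
    # Worst expected cost over the ambiguity set.
    return max(sum(p * c for p, c in zip(dist, best_cost)) for dist in distributions)

# Toy usage: 2 binary policies, 2 scenarios, 2 candidate distributions.
policies = [(1, 0), (0, 1)]
scenarios = [(3.0, 1.0), (2.0, 4.0)]
cost = lambda y, xi: sum(yi * ci for yi, ci in zip(y, xi))
print(k_adaptability_objective(policies, scenarios, cost, [(0.5, 0.5), (0.2, 0.8)]))  # 1.8
```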