
    Randomized Strategies for Robust Combinatorial Optimization

    In this paper, we study the following robust optimization problem. Given an independence system and a set of candidate objective functions, we choose an independent set, and then an adversary chooses one objective function, knowing our choice. Our goal is to find a randomized strategy (i.e., a probability distribution over the independent sets) that maximizes the expected objective value. To solve the problem, we propose two types of schemes for designing approximation algorithms. One scheme is for the case when the objective functions are linear. It first finds an approximately optimal aggregated strategy and then retrieves a desired solution with little loss of objective value. The approximation ratio depends on a relaxation of the independence system polytope. As applications, we provide approximation algorithms for a knapsack constraint and for matroid intersection by developing appropriate relaxations and retrievals. The other scheme is based on the multiplicative weights update method. A key technique is to introduce a new concept called (η, γ)-reductions for objective functions with parameters η and γ. We show that our scheme outputs a nearly α-approximate solution if there exists an α-approximation algorithm for a subproblem defined by (η, γ)-reductions. This improves the approximation ratios of previous results. Using our result, we provide approximation algorithms for the case when the objective functions are submodular or correspond to the cardinality robustness for the knapsack problem.
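
    The max-min objective described here (choose a distribution over independent sets, then the adversary picks the worst objective) can be solved exactly on tiny instances as a linear program. The sketch below is only meant to illustrate that objective; the instance (three independent sets, two linear objectives) is invented and this is not the paper's approximation scheme.

```python
# Toy illustration of the max-min randomized strategy: choose a probability
# distribution p over independent sets so that the worst-case expected
# objective value is maximized.  Brute-force LP for intuition only; this is
# not the approximation scheme described in the abstract.
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: rows = independent sets, columns = objective functions.
# V[s, f] is the value of objective f on independent set s.
V = np.array([
    [4.0, 1.0],   # set {a}
    [1.0, 4.0],   # set {b}
    [2.5, 2.5],   # set {a, c}
])
num_sets, num_objs = V.shape

# Variables: p_0..p_{S-1} (distribution) and t (worst-case expected value).
# Maximize t  <=>  minimize -t, subject to  V^T p >= t,  sum(p) = 1,  p >= 0.
c = np.zeros(num_sets + 1)
c[-1] = -1.0                                      # minimize -t
A_ub = np.hstack([-V.T, np.ones((num_objs, 1))])  # t - V^T p <= 0
b_ub = np.zeros(num_objs)
A_eq = np.hstack([np.ones((1, num_sets)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * num_sets + [(None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, t = res.x[:num_sets], res.x[-1]
print("optimal distribution:", np.round(p, 3))
print("worst-case expected value:", round(t, 3))
```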

    Algorithm Engineering in Robust Optimization

    Robust optimization is a young and emerging field of research that has received a considerable increase of interest over the last decade. In this paper, we argue that the algorithm engineering methodology fits very well to the field of robust optimization and yields a rewarding new perspective on both the current state of research and open research directions. To this end, we go through the algorithm engineering cycle of design and analysis of concepts, development and implementation of algorithms, and theoretical and experimental evaluation. We show that many ideas of algorithm engineering have already been applied in publications on robust optimization. Most work on robust optimization is devoted to the analysis of concepts and the development of algorithms, some papers deal with the evaluation of a particular concept in case studies, and work on comparing concepts is only just beginning. What is still missing in many papers on robustness is the link that feeds the results of the experiments back into the design.

    Robust and MaxMin Optimization under Matroid and Knapsack Uncertainty Sets

    Consider the following problem: given a set system (U, I) and an edge-weighted graph G = (U, E) on the same universe U, find the set A in I such that the Steiner tree cost with terminals A is as large as possible: "which set in I is the most difficult to connect up?" This is an example of a max-min problem: find the set A in I such that the value of some minimization (covering) problem is as large as possible. In this paper, we show that for certain covering problems which admit good deterministic online algorithms, we can give good algorithms for max-min optimization when the set system I is given by a p-system or q-knapsacks or both. This result is similar to results for constrained maximization of submodular functions. Although many natural covering problems are not even approximately submodular, we show that one can use properties of the online algorithm as a surrogate for submodularity. Moreover, we give stronger connections between max-min optimization and two-stage robust optimization, and hence give improved algorithms for robust versions of various covering problems, for cases where the uncertainty sets are given by p-systems and q-knapsacks.
    Comment: 17 pages. Preliminary version combining this paper and http://arxiv.org/abs/0912.1045 appeared in ICALP 201
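
    To make the max-min question quoted above concrete ("which set in I is the most difficult to connect up?"), the sketch below brute-forces it on a small invented graph, using networkx's approximate steiner_tree routine as the covering oracle and a simple cardinality constraint as the independence system. It only illustrates the objective, not the algorithms developed in the paper.

```python
# Brute-force illustration of the max-min Steiner objective: among all
# feasible terminal sets A, which one makes the (approximate) Steiner tree
# most expensive?  Not the paper's algorithm -- just the objective.
from itertools import combinations
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Hypothetical edge-weighted graph on universe U.
G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 1.0),
    ("a", "d", 4.0), ("b", "d", 2.5), ("a", "c", 3.0),
])

U = list(G.nodes)
k = 3  # independence system: all subsets of size <= k (a uniform matroid)

def steiner_cost(terminals):
    """Weight of an approximate Steiner tree connecting the terminals."""
    T = steiner_tree(G, list(terminals), weight="weight")
    return T.size(weight="weight")

best_set, best_cost = None, -1.0
for A in combinations(U, k):
    cost = steiner_cost(A)
    if cost > best_cost:
        best_set, best_cost = set(A), cost

print("hardest-to-connect terminal set:", best_set)
print("approximate Steiner cost:", best_cost)
```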

    Complexity results and exact algorithms for robust knapsack problems.

    This paper studies the robust knapsack problem, for which solutions are, up to a certain point, immune to data uncertainty. We complement the works found in the literature, where uncertainty affects only the profits or only the weights of the items, by studying the complexity and approximation of the general setting with uncertainty in both the profits and the weights, for three different objective functions. Furthermore, we develop a scenario-relaxation algorithm for solving the general problem and present computational results.
    Keywords: Knapsack problem; Robustness; Scenario-relaxation algorithm; NP-hard; Approximation
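
    The scenario-relaxation idea mentioned in this abstract can be sketched on a toy min-max robust knapsack with a finite scenario set: solve the problem over a few scenarios, check the solution against all of them, add a worst or violated scenario, and repeat. The sketch below uses invented data and a brute-force master solver, so it only illustrates that relax-and-add loop, not the authors' exact algorithm.

```python
# Sketch of a scenario-relaxation loop for a toy min-max robust knapsack.
# Data and the brute-force master solver are illustrative only.
from itertools import chain, combinations

# Hypothetical instance: 4 items, 3 scenarios of (profits, weights), capacity 10.
scenarios = [
    {"profit": [6, 5, 4, 3], "weight": [4, 3, 5, 2]},
    {"profit": [3, 6, 5, 4], "weight": [5, 4, 3, 3]},
    {"profit": [5, 3, 6, 5], "weight": [3, 5, 4, 4]},
]
capacity = 10
items = range(4)

def powerset(it):
    s = list(it)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def feasible(x, scen):
    return sum(scen["weight"][i] for i in x) <= capacity

def profit(x, scen):
    return sum(scen["profit"][i] for i in x)

def solve_master(active):
    """Brute-force max-min knapsack over the active scenario subset."""
    best, best_val = (), float("-inf")
    for x in powerset(items):
        if all(feasible(x, s) for s in active):
            val = min(profit(x, s) for s in active)
            if val > best_val:
                best, best_val = x, val
    return best, best_val

active = [scenarios[0]]                      # relax: start with one scenario
while True:
    x, relaxed_val = solve_master(active)
    violated = [s for s in scenarios if not feasible(x, s)]
    true_val = min(profit(x, s) for s in scenarios)
    if not violated and true_val >= relaxed_val:
        break                                # relaxed solution is robust-optimal
    # Add a scenario that exposes the weakness and re-solve the master problem.
    active.append(violated[0] if violated else min(scenarios, key=lambda s: profit(x, s)))

print("robust solution (item indices):", x, "guaranteed profit:", relaxed_val)
```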

    Adversarially Robust Submodular Maximization under Knapsack Constraints

    We propose the first adversarially robust algorithm for monotone submodular maximization under single and multiple knapsack constraints with scalable implementations in distributed and streaming settings. For a single knapsack constraint, our algorithm outputs a robust summary of almost optimal (up to polylogarithmic factors) size, from which a constant-factor approximation to the optimal solution can be constructed. For multiple knapsack constraints, our approximation is within a constant-factor of the best known non-robust solution. We evaluate the performance of our algorithms by comparison to natural robustifications of existing non-robust algorithms under two objectives: 1) dominating set for large social network graphs from Facebook and Twitter collected by the Stanford Network Analysis Project (SNAP), 2) movie recommendations on a dataset from MovieLens. Experimental results show that our algorithms give the best objective for a majority of the inputs and show strong performance even compared to offline algorithms that are given the set of removals in advance.Comment: To appear in KDD 201
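
    For context, the sketch below shows the kind of plain, non-robust baseline such methods are typically compared against: a cost-benefit greedy for monotone submodular maximization under a single knapsack constraint, on an invented coverage instance. It is not the KDD algorithm, and the classical guarantee for this greedy also requires comparing against the best single feasible element, which is omitted here for brevity.

```python
# Minimal non-robust baseline: cost-benefit greedy for monotone submodular
# maximization under one knapsack constraint.  Illustrative data only.

# Hypothetical instance: each element covers a set of nodes and has a cost.
coverage = {
    "u1": ({"a", "b", "c"}, 3.0),
    "u2": ({"c", "d"},      1.0),
    "u3": ({"d", "e", "f"}, 2.5),
    "u4": ({"a", "f"},      1.5),
}
budget = 4.0

def f(S):
    """Monotone submodular objective: number of nodes covered by S."""
    covered = set()
    for u in S:
        covered |= coverage[u][0]
    return len(covered)

def greedy_knapsack(budget):
    S, spent = [], 0.0
    remaining = set(coverage)
    while remaining:
        # Among elements that still fit, pick the best marginal-gain / cost ratio.
        candidates = [u for u in remaining if spent + coverage[u][1] <= budget]
        if not candidates:
            break
        best = max(candidates, key=lambda u: (f(S + [u]) - f(S)) / coverage[u][1])
        if f(S + [best]) - f(S) <= 0:
            break                            # no further gain is possible
        S.append(best)
        spent += coverage[best][1]
        remaining.remove(best)
    return S, f(S), spent

S, value, cost = greedy_knapsack(budget)
print("greedy summary:", S, "covers", value, "nodes at cost", cost)
```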