
    Non-Smooth, Hölder-Smooth, and Robust Submodular Maximization

    We study the problem of maximizing a continuous DR-submodular function that is not necessarily smooth. We prove that the continuous greedy algorithm achieves a (1 - 1/e)OPT - ε guarantee when the function is monotone and Hölder-smooth, meaning that it admits a Hölder-continuous gradient. For functions that are non-differentiable or non-smooth, we propose a variant of the mirror-prox algorithm that attains a (1/2)OPT - ε guarantee. We apply our algorithmic frameworks to robust submodular maximization and to distributionally robust submodular maximization under Wasserstein ambiguity. In particular, the mirror-prox method applies to robust submodular maximization, yielding a single feasible solution whose value is at least (1/2)OPT - ε. For distributionally robust maximization under Wasserstein ambiguity, we derive a submodular-convex maximin reformulation whose objective function is Hölder-smooth, to which both the continuous greedy and the mirror-prox algorithms apply.
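
    As a concrete illustration of the continuous greedy step described above, here is a minimal Python sketch for monotone DR-submodular maximization over a down-closed polytope accessed through a linear maximization oracle. The names (grad_f, lmo_budget) and the budget-polytope example are illustrative assumptions, not the paper's exact setup; the (1 - 1/e)OPT - ε guarantee is approached as the number of iterations T grows and the gradient error shrinks.

```python
import numpy as np

def continuous_greedy(grad_f, lmo, n, T=200):
    """Continuous greedy sketch for a monotone DR-submodular F.

    grad_f : callable returning (an estimate of) the gradient of F at x
    lmo    : linear maximization oracle for the feasible polytope P,
             returning argmax over v in P of <g, v>
    The iterate is a convex combination of T points of P, so it stays
    feasible whenever P is convex and contains the origin.
    """
    x = np.zeros(n)
    for _ in range(T):
        g = grad_f(x)   # Hölder-continuous gradient in the paper's setting
        v = lmo(g)      # best feasible direction under the linearization
        x = x + v / T   # step of size 1/T toward v
    return x

def lmo_budget(g, k):
    """Illustrative LMO for the budget polytope {x in [0,1]^n : sum(x) <= k}:
    take the k largest coordinates of g (nonnegative for monotone F)."""
    v = np.zeros_like(g)
    v[np.argsort(g)[-k:]] = 1.0
    return v
```

    The sketch covers only the smooth, monotone case; the paper's mirror-prox variant for the non-smooth case is an extragradient-style method that evaluates the gradient at a preliminary half-step before committing to the update.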

    Submodular Maximization with Matroid and Packing Constraints in Parallel

    We consider the problem of maximizing the multilinear extension of a submodular function subject to a single matroid constraint or multiple packing constraints, using a small number of adaptive rounds of evaluation queries. We obtain the first algorithms with low adaptivity for submodular maximization with a matroid constraint. Our algorithms achieve a 1 - 1/e - ε approximation for monotone functions and a 1/e - ε approximation for non-monotone functions, which nearly matches the best guarantees known in the fully adaptive setting. The number of rounds of adaptivity is O(log² n / ε³), an exponential speedup over existing algorithms. We also obtain the first parallel algorithm for non-monotone submodular maximization subject to packing constraints. It achieves a 1/e - ε approximation using O(log(n/ε) log(1/ε) log(n + m) / ε²) parallel rounds, again an exponential speedup in parallel time over existing algorithms. For monotone functions, we obtain a 1 - 1/e - ε approximation in O(log(n/ε) log(m) / ε²) parallel rounds. The number of parallel rounds of our algorithm matches that of the state-of-the-art algorithm for solving packing LPs with a linear objective. Our results apply more generally to the problem of maximizing a diminishing-returns submodular (DR-submodular) function.
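
    The adaptivity model here counts rounds of evaluation queries to the multilinear extension F(x) = E[f(R_x)], where the random set R_x contains each element i independently with probability x_i; all queries issued within one round are independent and can run in parallel. Below is a minimal Monte-Carlo sketch of one such query (the coverage function f_cov and all names are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def multilinear_extension(f, x, samples=1000, seed=None):
    """Monte-Carlo estimate of F(x) = E[f(R_x)], where R_x includes
    element i independently with probability x[i].

    f : set function taking a boolean inclusion mask over the ground set
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    # Every sample is independent, so one estimate (and many estimates at
    # different points x) parallelizes trivially within an adaptive round.
    estimates = [f(rng.random(n) < x) for _ in range(samples)]
    return float(np.mean(estimates))

# Illustrative monotone submodular function: coverage of a small universe.
sets = [{0, 1}, {1, 2}, {2, 3}]

def f_cov(mask):
    covered = set()
    for s, chosen in zip(sets, mask):
        if chosen:
            covered |= s
    return len(covered)

print(multilinear_extension(f_cov, np.array([0.5, 0.5, 0.5])))
```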