Submodular Maximization with Matroid and Packing Constraints in Parallel
We consider the problem of maximizing the multilinear extension of a
submodular function subject to a single matroid constraint or multiple
packing constraints, using a small number of adaptive rounds of evaluation
queries. We obtain the first algorithms with low adaptivity for submodular
maximization with a matroid constraint. Our algorithms achieve approximation
guarantees for both monotone and non-monotone functions that nearly match
the best guarantees known in the fully adaptive setting, while using
exponentially fewer rounds of adaptivity than the existing algorithms.
We obtain the first parallel algorithm for non-monotone submodular
maximization subject to packing constraints. Our algorithm achieves a
constant-factor approximation using exponentially fewer parallel rounds
than the existing algorithms. For monotone functions, we obtain a
near-optimal approximation whose number of parallel rounds matches that of
the state-of-the-art algorithm for solving packing LPs with a linear
objective.
Our results apply more generally to the problem of maximizing a
diminishing-returns submodular (DR-submodular) function.
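The multilinear extension is defined as F(x) = E[f(R(x))], where R(x) is a random set containing each element i independently with probability x_i; in practice it is evaluated by Monte Carlo sampling. A minimal sketch, using a coverage function as an illustrative submodular f (all names here are hypothetical, not from the paper):

```python
import random

def coverage(sets, chosen):
    """Submodular coverage function: number of points covered by the chosen sets."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def multilinear_extension(f, n, x, samples=2000, seed=0):
    """Monte Carlo estimate of F(x) = E[f(R(x))], where the random set R(x)
    contains each element i independently with probability x[i]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = [i for i in range(n) if rng.random() < x[i]]
        total += f(R)
    return total / samples

# Example: three sets over the ground points {0, ..., 4}
sets = [{0, 1, 2}, {2, 3}, {3, 4}]
f = lambda S: coverage(sets, S)
est = multilinear_extension(f, 3, [0.5, 0.5, 0.5])
```

At x = (0.5, 0.5, 0.5) the exact value is 3.0 (summing the coverage probability of each point), so the estimate should land close to that.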
Fast Adaptive Non-Monotone Submodular Maximization Subject to a Knapsack Constraint
Constrained submodular maximization problems encompass a wide variety of applications, including personalized recommendation, team formation, and revenue maximization via viral marketing. The massive instances occurring in modern-day applications can render existing algorithms prohibitively slow. Moreover, those instances are frequently also inherently stochastic. Focusing on these challenges, we revisit the classic problem of maximizing a (possibly non-monotone) submodular function subject to a knapsack constraint. We present a simple randomized greedy algorithm that achieves a 5.83-approximation and runs in O(n log n) time, i.e., at least a factor of n faster than other state-of-the-art algorithms. The robustness of our approach allows us to further transfer it to a stochastic version of the problem. There, we obtain a 9-approximation to the best adaptive policy, which is the first constant approximation for non-monotone objectives. Experimental evaluation of our algorithms showcases their improved performance on real and synthetic data.
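To illustrate the greedy paradigm underlying such algorithms, here is a plain density-greedy sketch for the knapsack-constrained problem: repeatedly add the feasible element with the best marginal gain per unit cost. This is a textbook baseline, not the paper's randomized 5.83-approximation algorithm, and the coverage objective and all names are illustrative assumptions:

```python
def coverage(sets, chosen):
    """Submodular coverage function: number of points covered by the chosen sets."""
    covered = set()
    for i in chosen:
        covered |= sets[i]
    return len(covered)

def density_greedy(f, costs, budget):
    """Density greedy for submodular maximization under a knapsack constraint:
    repeatedly add the budget-feasible element maximizing marginal gain per
    unit cost. Illustrative baseline only; handling non-monotone objectives
    requires randomization, as in the paper's algorithm."""
    n = len(costs)
    S, spent = [], 0.0
    remaining = set(range(n))
    while remaining:
        base = f(S)
        best, best_density = None, 0.0
        for i in sorted(remaining):  # sorted for deterministic tie-breaking
            if spent + costs[i] > budget:
                continue
            density = (f(S + [i]) - base) / costs[i]
            if density > best_density:
                best, best_density = i, density
        if best is None:
            break
        S.append(best)
        spent += costs[best]
        remaining.remove(best)
    return S

# Example: pick sets covering the most points within a budget of 4
sets = [{0, 1, 2}, {2, 3}, {3, 4}, {4}]
costs = [3, 2, 2, 1]
solution = density_greedy(lambda S: coverage(sets, S), costs, 4)
```

Each greedy step costs one oracle evaluation per candidate, giving O(n^2) evaluations overall; the O(n log n) running time claimed in the abstract comes from a more careful sampling-based design.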