Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity
Submodular maximization is a general optimization problem with a wide range
of applications in machine learning (e.g., active learning, clustering, and
feature selection). In large-scale optimization, the parallel running time of
an algorithm is governed by its adaptivity, which measures the number of
sequential rounds needed if the algorithm can execute polynomially-many
independent oracle queries in parallel. While low adaptivity is ideal, it is
not sufficient for an algorithm to be efficient in practice---there are many
applications of distributed submodular optimization where the number of
function evaluations becomes prohibitively expensive. Motivated by these
applications, we study the adaptivity and query complexity of submodular
maximization. In this paper, we give the first constant-factor approximation
algorithm for maximizing a non-monotone submodular function subject to a
cardinality constraint that runs in O(log n) adaptive rounds and makes
O(n log k) oracle queries in expectation. In our empirical study, we use
three real-world applications to compare our algorithm with several benchmarks
for non-monotone submodular maximization. The results demonstrate that our
algorithm finds competitive solutions using significantly fewer rounds and
queries.

Comment: 12 pages, 8 figures
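To make the problem setting concrete, here is a minimal illustrative sketch (not the paper's algorithm): a coverage objective, which is submodular, maximized under a cardinality constraint by the classic sequential greedy. It shows why adaptivity matters: the queries within one greedy round are independent and parallelizable, but the rounds themselves are sequential, so plain greedy has adaptivity k. The `universe_sets` data and all function names are invented for illustration, and greedy's approximation guarantee holds only for monotone functions, not the non-monotone case the paper targets.

```python
# A coverage function f(S) = |union of sets indexed by S| is submodular:
# the marginal gain of adding an element can only shrink as S grows.
universe_sets = {
    0: {1, 2, 3},
    1: {3, 4},
    2: {4, 5, 6},
    3: {1, 6},
}

def coverage(S):
    """Submodular objective: size of the union of the chosen sets."""
    covered = set()
    for i in S:
        covered |= universe_sets[i]
    return len(covered)

def greedy(f, ground_set, k):
    """Sequential greedy for max f(S) s.t. |S| <= k.

    Each iteration is one adaptive round: the marginal-gain queries
    inside a round could all run in parallel, but round i+1 depends on
    the element chosen in round i, so the adaptivity is k.
    """
    S, queries = set(), 0
    for _ in range(k):
        best, best_gain = None, 0
        for e in ground_set - S:
            gain = f(S | {e}) - f(S)  # one marginal-gain oracle query
            queries += 1
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break
        S.add(best)
    return S, queries

S, queries = greedy(coverage, set(universe_sets), k=2)
print(S, coverage(S), queries)
```

Low-adaptivity algorithms like the one in the abstract replace these k dependent rounds with only O(log n) rounds while keeping the total query count near-linear.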
Nearly Linear-Time, Parallelizable Algorithms for Non-Monotone Submodular Maximization
We study parallelizable algorithms for maximization of a submodular function,
not necessarily monotone, with respect to a cardinality constraint k. We
improve the best approximation factor achieved by an algorithm that has
optimal adaptivity and query complexity, up to logarithmic factors in the
size of the ground set, from 0.039 − ε to 0.193 − ε. We provide two
algorithms; the first has approximation ratio 1/6 − ε, adaptivity O(log n),
and query complexity O(n log k), while the second has approximation ratio
0.193 − ε, adaptivity O(log^2 n), and query complexity O(n log k). Heuristic
versions of our algorithms are empirically
validated to use a low number of adaptive rounds and total queries while
obtaining solutions with high objective value in comparison with highly
adaptive approximation algorithms.

Comment: 24 pages, 2 figures
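The key idea behind low-adaptivity algorithms like those above can be sketched in a simplified, hypothetical form: instead of picking one element per round, query the marginal gain of every remaining element in parallel (one adaptive round) and then commit to a whole batch of high-gain elements at once. The helper names, the toy data, and the fixed threshold are all invented for illustration; real threshold-sampling methods add a carefully chosen random subset of the high-gain elements and decrease the threshold geometrically to retain an approximation guarantee, which this sketch does not.

```python
from concurrent.futures import ThreadPoolExecutor

def coverage_factory():
    """Toy submodular coverage objective over ground set {0,...,4}."""
    sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {4, 5}, 4: {6}}
    def f(S):
        covered = set()
        for i in S:
            covered |= sets[i]
        return len(covered)
    return f, set(sets)

def low_adaptivity_threshold(f, ground, k, threshold):
    """Illustrative batch-selection sketch (NOT the papers' algorithms).

    Per round: evaluate all marginal gains in parallel -- they are
    independent queries, so this counts as a single adaptive round --
    then add up to k - |S| elements whose gain meets the threshold.
    """
    S, rounds = set(), 0
    while len(S) < k:
        rounds += 1
        rest = list(ground - S)
        with ThreadPoolExecutor() as pool:  # one parallel round of queries
            gains = list(pool.map(lambda e: f(S | {e}) - f(S), rest))
        batch = [e for e, g in zip(rest, gains) if g >= threshold]
        if not batch:
            break
        for e in batch[: k - len(S)]:
            S.add(e)
    return S, rounds

f, ground = coverage_factory()
S, rounds = low_adaptivity_threshold(f, ground, k=3, threshold=1)
print(S, rounds)
```

Here a solution of size k = 3 is built in a single adaptive round, whereas sequential greedy would need three.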
Submodular Maximization with Matroid and Packing Constraints in Parallel
We consider the problem of maximizing the multilinear extension of a
submodular function subject to a single matroid constraint or multiple packing
constraints with a small number of adaptive rounds of evaluation queries.
We obtain the first algorithms with low adaptivity for submodular
maximization with a matroid constraint. Our algorithms achieve a
(1 − 1/e − ε) approximation for monotone functions and a (1/e − ε)
approximation for non-monotone functions, which nearly matches the best
guarantees known in the fully adaptive setting. The number of rounds of
adaptivity is polylogarithmic in the size of the ground set, which is an
exponential speedup over the existing algorithms.
We obtain the first parallel algorithm for non-monotone submodular
maximization subject to packing constraints. Our algorithm achieves a
(1/e − ε) approximation using polylogarithmically many parallel rounds, which
is again an exponential speedup in parallel time over the existing
algorithms. For monotone functions, we obtain a (1 − 1/e − ε) approximation
in polylogarithmically many parallel rounds. The number of parallel rounds of
our algorithm matches that of the state-of-the-art algorithm for solving
packing LPs with a linear objective.
Our results apply more generally to the problem of maximizing a
diminishing-returns submodular (DR-submodular) function.
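Since the abstract above optimizes the multilinear extension, a small sketch may help: the multilinear extension F of a set function f maps a fractional point x to the expected value of f on a random set that includes each element e independently with probability x[e]. Continuous methods for matroid and packing constraints optimize F and then round the fractional solution. The estimator below is a standard Monte Carlo sketch with invented names; it is checked against a modular (linear) f, for which F(x) is exactly the weighted sum of the coordinates.

```python
import random

def multilinear_extension(f, ground, x, samples=2000, seed=0):
    """Monte Carlo estimate of F(x) = E[f(R)], where R contains each
    element e of the ground set independently with probability x[e]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        R = {e for e in ground if rng.random() < x[e]}
        total += f(R)
    return total / samples

# Sanity check with a modular f: here F(x) = sum_e x[e] * w[e] exactly,
# so the estimate at x = (0.5, 0.5, 0.5) should be close to 3.0.
w = {0: 1.0, 1: 2.0, 2: 3.0}
f = lambda S: sum(w[e] for e in S)
x = {0: 0.5, 1: 0.5, 2: 0.5}
est = multilinear_extension(f, set(w), x)
print(est)
```

For a general submodular f the expectation has no closed form, which is why these parallel algorithms measure cost in evaluation queries: each gradient or value estimate of F consumes many samples of f.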