Submodular optimization generalizes many classic problems in combinatorial
optimization and has recently found a wide range of applications in machine
learning (e.g., feature engineering and active learning). For many large-scale
optimization problems, we are often concerned with the adaptivity complexity of
an algorithm, which quantifies the number of sequential rounds where
polynomially-many independent function evaluations can be executed in parallel.
While low adaptivity is ideal, it is not sufficient for a distributed algorithm
to be efficient, since in many practical applications of submodular
optimization the number of function evaluations becomes prohibitively
expensive. Motivated by these applications, we study the adaptivity and query
complexity of adaptive submodular optimization.
Our main result is a distributed algorithm for maximizing a monotone
submodular function with cardinality constraint k that achieves a
(1 − 1/e − ε)-approximation in expectation. This algorithm runs in
O(log(n)) adaptive rounds and makes O(n) calls to the function evaluation
oracle in expectation. The approximation guarantee and query complexity are
optimal, and the adaptivity is nearly optimal. Moreover, the number of queries
is substantially less than in previous works. Last, we extend our results to
the submodular cover problem to demonstrate the generality of our algorithm and
techniques.

Comment: 30 pages. Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2019).
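For context, the result above can be contrasted with the classic sequential greedy algorithm for monotone submodular maximization under a cardinality constraint, which achieves a (1 − 1/e)-approximation but uses k adaptive rounds and O(nk) oracle queries. The sketch below is illustrative only and uses a small hypothetical coverage instance (the `sets` dictionary is invented for the example); it is not the low-adaptivity algorithm from the paper.

```python
def greedy_max(f, ground, k):
    """Classic greedy for monotone submodular maximization under a
    cardinality constraint: k adaptive rounds, O(nk) oracle queries.
    Each round sequentially picks the element with the largest marginal gain."""
    S = set()
    for _ in range(k):
        best, gain = None, 0.0
        for e in ground - S:
            delta = f(S | {e}) - f(S)  # marginal gain: one oracle query
            if delta > gain:
                best, gain = e, delta
        if best is None:  # no element has positive marginal gain
            break
        S.add(best)
    return S

# Hypothetical instance: a coverage function, a standard example of a
# monotone submodular function.
sets = {
    'a': {1, 2, 3},
    'b': {3, 4},
    'c': {4, 5, 6},
    'd': {1, 6},
}

def cover(S):
    """Number of ground elements covered by the chosen sets."""
    return len(set().union(*(sets[e] for e in S))) if S else 0

S = greedy_max(cover, set(sets), k=2)
print(sorted(S), cover(S))  # → ['a', 'c'] 6
```

The paper's contribution is to match greedy's approximation quality (up to ε) while reducing the k sequential rounds to O(log n) and the total query count to O(n) in expectation.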