Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity
Submodular maximization is a general optimization problem with a wide range
of applications in machine learning (e.g., active learning, clustering, and
feature selection). In large-scale optimization, the parallel running time of
an algorithm is governed by its adaptivity, which measures the number of
sequential rounds needed if the algorithm can execute polynomially-many
independent oracle queries in parallel. While low adaptivity is ideal, it is
not sufficient for an algorithm to be efficient in practice---there are many
applications of distributed submodular optimization where the number of
function evaluations becomes prohibitively expensive. Motivated by these
applications, we study the adaptivity and query complexity of submodular
maximization. In this paper, we give the first constant-factor approximation
algorithm for maximizing a non-monotone submodular function subject to a
cardinality constraint $k$ that runs in $O(\log(n))$ adaptive rounds and makes $O(n \log(k))$ oracle queries in expectation. In our empirical study, we use
three real-world applications to compare our algorithm with several benchmarks
for non-monotone submodular maximization. The results demonstrate that our
algorithm finds competitive solutions using significantly fewer rounds and
queries. Comment: 12 pages, 8 figures
Practical Parallel Algorithms for Non-Monotone Submodular Maximization
Submodular maximization has found extensive applications in various domains
within the field of artificial intelligence, including but not limited to
machine learning, computer vision, and natural language processing. With the
increasing size of datasets in these domains, there is a pressing need to
develop efficient and parallelizable algorithms for submodular maximization.
One measure of the parallelizability of a submodular maximization algorithm is
its adaptive complexity, which indicates the number of sequential rounds where
a polynomial number of queries to the objective function can be executed in
parallel. In this paper, we study the problem of non-monotone submodular
maximization subject to a knapsack constraint, and propose the first
combinatorial algorithm achieving an $(8+\epsilon)$-approximation under $\mathcal{O}(\log n)$ adaptive complexity, which is \textit{optimal} up to a factor of $\mathcal{O}(\log\log n)$. Moreover, we also propose the first
algorithm with both provable approximation ratio and sublinear adaptive
complexity for the problem of non-monotone submodular maximization subject to a
$k$-system constraint. As a by-product, we show that our two algorithms can
also be applied to the special case of submodular maximization subject to a
cardinality constraint, and achieve performance bounds comparable with those of
state-of-the-art algorithms. Finally, the effectiveness of our approach is
demonstrated by extensive experiments on real-world applications. Comment: Part of the contribution appears in AAAI-2023
Nearly Linear-Time, Parallelizable Algorithms for Non-Monotone Submodular Maximization
We study parallelizable algorithms for maximization of a submodular function,
not necessarily monotone, with respect to a cardinality constraint $k$. We
improve the best approximation factor achieved by an algorithm that has optimal
adaptivity and query complexity, up to logarithmic factors in the size of
the ground set, from $0.039 - \epsilon$ to $0.193 - \epsilon$. We provide two algorithms; the first has approximation ratio $1/6 - \epsilon$, adaptivity $O(\log n)$, and query complexity $O(n \log k)$, while the second has approximation ratio $0.193 - \epsilon$, adaptivity $O(\log^2 n)$, and query complexity $O(n \log k)$. Heuristic versions of our algorithms are empirically
validated to use a low number of adaptive rounds and total queries while
obtaining solutions with high objective value in comparison with highly
adaptive approximation algorithms. Comment: 24 pages, 2 figures
Interactive Submodular Set Cover
We introduce a natural generalization of submodular set cover and exact
active learning with a finite hypothesis class (query learning). We call this
new problem interactive submodular set cover. Applications include advertising
in social networks with hidden information. We give an approximation guarantee
for a novel greedy algorithm and give a hardness of approximation result which
matches this guarantee up to constant factors. We also discuss negative results for simpler
approaches and present encouraging early experimental results. Comment: 15 pages, 1 figure
Submodular Maximization with Nearly Optimal Approximation, Adaptivity and Query Complexity
Submodular optimization generalizes many classic problems in combinatorial
optimization and has recently found a wide range of applications in machine
learning (e.g., feature engineering and active learning). For many large-scale
optimization problems, we are often concerned with the adaptivity complexity of
an algorithm, which quantifies the number of sequential rounds where
polynomially-many independent function evaluations can be executed in parallel.
While low adaptivity is ideal, it is not sufficient for a distributed algorithm
to be efficient, since in many practical applications of submodular
optimization the number of function evaluations becomes prohibitively
expensive. Motivated by these applications, we study the adaptivity and query
complexity of adaptive submodular optimization.
Our main result is a distributed algorithm for maximizing a monotone
submodular function with cardinality constraint $k$ that achieves a $(1 - 1/e - \epsilon)$-approximation in expectation. This algorithm runs in $O(\log(n))$ adaptive rounds and makes $O(n)$ calls to the function evaluation
oracle in expectation. The approximation guarantee and query complexity are
optimal, and the adaptivity is nearly optimal. Moreover, the number of queries
is substantially less than in previous works. Last, we extend our results to
the submodular cover problem to demonstrate the generality of our algorithm and
techniques. Comment: 30 pages, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2019)