119 research outputs found

    Nearly Linear-Time, Parallelizable Algorithms for Non-Monotone Submodular Maximization

    We study parallelizable algorithms for maximization of a submodular function, not necessarily monotone, with respect to a cardinality constraint $k$. We improve the best approximation factor achieved by an algorithm that has optimal adaptivity and query complexity, up to logarithmic factors in the size $n$ of the ground set, from $0.039 - \epsilon$ to $0.193 - \epsilon$. We provide two algorithms; the first has approximation ratio $1/6 - \epsilon$, adaptivity $O(\log n)$, and query complexity $O(n \log k)$, while the second has approximation ratio $0.193 - \epsilon$, adaptivity $O(\log^2 n)$, and query complexity $O(n \log k)$. Heuristic versions of our algorithms are empirically validated to use a low number of adaptive rounds and total queries while obtaining solutions with high objective value in comparison with highly adaptive approximation algorithms. Comment: 24 pages, 2 figures
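    The graph cut function is a standard example of a non-monotone submodular objective. The sketch below is a sequential random-greedy baseline in the style of Buchbinder et al., not the parallelizable algorithms from the abstract; the function names and the toy graph are illustrative assumptions.

```python
# Illustrative sketch only: the graph cut function is a canonical non-monotone
# submodular objective, and random greedy is a simple *sequential* baseline;
# it is NOT the parallelizable algorithms described in the abstract above.
import random

def cut_value(edges, S):
    """f(S) = number of edges with exactly one endpoint in S (non-monotone submodular)."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def random_greedy(edges, ground_set, k, seed=0):
    """At each of the k steps, pick uniformly at random among the (up to) k
    elements of largest marginal gain.  This uses k adaptive rounds and O(nk)
    oracle queries, in contrast to the O(log n) / O(n log k) bounds above."""
    rng = random.Random(seed)
    S = set()
    for _ in range(k):
        gains = sorted(((cut_value(edges, S | {e}) - cut_value(edges, S), e)
                        for e in ground_set if e not in S), reverse=True)
        if not gains:
            break
        top = [e for g, e in gains[:k]]
        S.add(rng.choice(top))
    return S

# Toy usage: a 4-cycle with one chord, select at most k = 2 vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(random_greedy(edges, ground_set=range(4), k=2))
```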

    Submodular Maximization with Nearly Optimal Approximation, Adaptivity and Query Complexity

    Submodular optimization generalizes many classic problems in combinatorial optimization and has recently found a wide range of applications in machine learning (e.g., feature engineering and active learning). For many large-scale optimization problems, we are often concerned with the adaptivity complexity of an algorithm, which quantifies the number of sequential rounds where polynomially many independent function evaluations can be executed in parallel. While low adaptivity is ideal, it is not sufficient for a distributed algorithm to be efficient, since in many practical applications of submodular optimization the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of adaptive submodular optimization. Our main result is a distributed algorithm for maximizing a monotone submodular function with cardinality constraint $k$ that achieves a $(1 - 1/e - \varepsilon)$-approximation in expectation. This algorithm runs in $O(\log(n))$ adaptive rounds and makes $O(n)$ calls to the function evaluation oracle in expectation. The approximation guarantee and query complexity are optimal, and the adaptivity is nearly optimal. Moreover, the number of queries is substantially less than in previous works. Last, we extend our results to the submodular cover problem to demonstrate the generality of our algorithm and techniques. Comment: 30 pages, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2019)
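    To make the adaptivity notion concrete, here is a minimal sketch in which every pass issues one batch of mutually independent marginal-gain queries; the number of passes then plays the role of adaptive rounds, while the total number of oracle calls is the query complexity. The geometrically decreasing threshold, the stopping rule, and all names are illustrative assumptions; this is not the $(1 - 1/e - \varepsilon)$ algorithm from the abstract.

```python
# Sketch of the *adaptivity* idea only: each while-pass issues one batch of
# marginal-gain queries measured against the same set S, so those queries are
# mutually independent and could run in parallel.  The geometric threshold
# decrease bounds the number of passes by O(log(n)/eps); no approximation
# guarantee is claimed for this simplified selection rule.
def batched_threshold_select(f, ground_set, k, eps=0.1):
    V = list(ground_set)
    S = []
    d = max(f([e]) for e in V)          # one adaptive round of singleton queries
    tau = d
    while tau > (eps / len(V)) * d and len(S) < k:
        base = f(S)
        # One adaptive round: all marginal gains are relative to the same S.
        gains = {e: f(S + [e]) - base for e in V if e not in S}
        for e, g in gains.items():
            if g >= tau and len(S) < k:
                S.append(e)
        tau *= 1 - eps                  # geometric decrease => O(log n) passes
    return S

# Example with a simple coverage objective f(S) = |union of covered items|.
sets = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(sets[i] for i in S)) if S else set())
print(batched_threshold_select(f, sets.keys(), k=2))
```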

    Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity

    Submodular maximization is a general optimization problem with a wide range of applications in machine learning (e.g., active learning, clustering, and feature selection). In large-scale optimization, the parallel running time of an algorithm is governed by its adaptivity, which measures the number of sequential rounds needed if the algorithm can execute polynomially many independent oracle queries in parallel. While low adaptivity is ideal, it is not sufficient for an algorithm to be efficient in practice: there are many applications of distributed submodular optimization where the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of submodular maximization. In this paper, we give the first constant-factor approximation algorithm for maximizing a non-monotone submodular function subject to a cardinality constraint $k$ that runs in $O(\log(n))$ adaptive rounds and makes $O(n \log(k))$ oracle queries in expectation. In our empirical study, we use three real-world applications to compare our algorithm with several benchmarks for non-monotone submodular maximization. The results demonstrate that our algorithm finds competitive solutions using significantly fewer rounds and queries. Comment: 12 pages, 8 figures
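    For empirical comparisons like the one described above, it helps to instrument the value oracle so that any algorithm under test reports its total queries and its adaptive rounds (batches of parallelizable queries). The wrapper below is a hypothetical benchmarking helper, not code from the paper.

```python
# Hypothetical instrumentation for the kind of empirical comparison the abstract
# describes: wrap the value oracle so total queries and adaptive rounds can be
# reported for any algorithm under test.  All names here are illustrative.
class CountingOracle:
    def __init__(self, f):
        self._f = f
        self.queries = 0        # query complexity: total oracle calls
        self.rounds = 0         # adaptivity: number of sequential batches

    def __call__(self, S):
        self.queries += 1
        return self._f(S)

    def batch(self, sets):
        """Evaluate a batch of independent queries; counts as one adaptive round."""
        self.rounds += 1
        self.queries += len(sets)
        return [self._f(S) for S in sets]

# Usage sketch: an algorithm that calls oracle.batch(...) once per round lets us
# report, e.g., "12 rounds, 3,400 queries" alongside the objective value.
oracle = CountingOracle(lambda S: len(set(S)))
print(oracle.batch([[1, 2], [2, 3], [3]]), oracle.rounds, oracle.queries)
```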

    Interactive Submodular Set Cover

    We introduce a natural generalization of submodular set cover and exact active learning with a finite hypothesis class (query learning). We call this new problem interactive submodular set cover. Applications include advertising in social networks with hidden information. We give an approximation guarantee for a novel greedy algorithm and a hardness-of-approximation result that matches it up to constant factors. We also discuss negative results for simpler approaches and present encouraging early experimental results. Comment: 15 pages, 1 figure
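    For context, the classical (non-interactive) submodular set cover problem is solved greedily: keep adding the element of largest marginal gain until the coverage target $Q$ is met, which Wolsey showed gives a logarithmic approximation for integer-valued objectives. The sketch below shows only that baseline; it is not the interactive algorithm from the abstract, and the toy instance is an assumption.

```python
# Classical (non-interactive) greedy for submodular set cover, included only as
# the baseline that the interactive setting above generalizes: keep adding the
# element with the largest marginal gain until the target coverage Q is reached.
def greedy_set_cover(f, ground_set, Q):
    S = []
    remaining = set(ground_set)
    while f(S) < Q and remaining:
        best = max(remaining, key=lambda e: f(S + [e]) - f(S))
        if f(S + [best]) - f(S) <= 0:
            break                       # no element makes further progress
        S.append(best)
        remaining.remove(best)
    return S

# Toy coverage instance: cover at least Q = 5 items.
sets = {0: {1, 2}, 1: {2, 3, 4}, 2: {5, 6}, 3: {1}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_set_cover(f, sets.keys(), Q=5))
```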

    Practical Parallel Algorithms for Non-Monotone Submodular Maximization

    Submodular maximization has found extensive applications in various domains within the field of artificial intelligence, including but not limited to machine learning, computer vision, and natural language processing. With the increasing size of datasets in these domains, there is a pressing need to develop efficient and parallelizable algorithms for submodular maximization. One measure of the parallelizability of a submodular maximization algorithm is its adaptive complexity, which indicates the number of sequential rounds where a polynomial number of queries to the objective function can be executed in parallel. In this paper, we study the problem of non-monotone submodular maximization subject to a knapsack constraint, and propose the first combinatorial algorithm achieving an $(8+\epsilon)$-approximation under $\mathcal{O}(\log n)$ adaptive complexity, which is \textit{optimal} up to a factor of $\mathcal{O}(\log\log n)$. Moreover, we also propose the first algorithm with both a provable approximation ratio and sublinear adaptive complexity for the problem of non-monotone submodular maximization subject to a $k$-system constraint. As a by-product, we show that our two algorithms can also be applied to the special case of submodular maximization subject to a cardinality constraint, and achieve performance bounds comparable with those of state-of-the-art algorithms. Finally, the effectiveness of our approach is demonstrated by extensive experiments on real-world applications. Comment: Part of the contribution appears in AAAI-202
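    To make the knapsack constraint concrete, the sketch below shows a plain sequential density-greedy baseline (marginal gain per unit cost, subject to a budget). It is neither the $(8+\epsilon)$-approximation nor the low-adaptivity algorithm from the abstract, and by itself this rule has no constant-factor guarantee unless it is also compared against the best single feasible element; all names and the toy instance are illustrative.

```python
# Simple sequential baseline for knapsack-constrained submodular maximization:
# greedily pick the affordable element with the best gain-to-cost ratio.  Shown
# only to make the constraint concrete; not the algorithm from the abstract.
def density_greedy(f, costs, budget):
    S, spent = [], 0.0
    remaining = set(costs)
    while remaining:
        base = f(S)
        candidates = [(e, (f(S + [e]) - base) / costs[e])
                      for e in remaining if spent + costs[e] <= budget]
        candidates = [(e, r) for e, r in candidates if r > 0]
        if not candidates:
            break
        e, _ = max(candidates, key=lambda t: t[1])
        S.append(e)
        spent += costs[e]
        remaining.remove(e)
    return S

# Toy instance: coverage objective, element costs, budget 3.
sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {5}, 3: {1, 2, 3, 4, 5}}
costs = {0: 1.0, 1: 1.0, 2: 1.0, 3: 2.5}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(density_greedy(f, costs, budget=3.0))
```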