Adversarially Robust Submodular Maximization under Knapsack Constraints
We propose the first adversarially robust algorithm for monotone submodular
maximization under single and multiple knapsack constraints with scalable
implementations in distributed and streaming settings. For a single knapsack
constraint, our algorithm outputs a robust summary of almost optimal (up to
polylogarithmic factors) size, from which a constant-factor approximation to
the optimal solution can be constructed. For multiple knapsack constraints, our
approximation is within a constant-factor of the best known non-robust
solution.
We evaluate the performance of our algorithms by comparison to natural
robustifications of existing non-robust algorithms under two objectives: 1)
dominating set for large social network graphs from Facebook and Twitter
collected by the Stanford Network Analysis Project (SNAP), 2) movie
recommendations on a dataset from MovieLens. Experimental results show that our
algorithms give the best objective value on a majority of the inputs and
perform strongly even when compared to offline algorithms that are given the
set of removals in advance.

Comment: To appear in KDD 2019.
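The abstract above compares against "natural robustifications of existing non-robust algorithms". As a point of reference, here is a minimal sketch of the classic cost-benefit greedy for monotone submodular maximization under a single knapsack constraint, one such standard non-robust baseline. This is not the paper's robust algorithm; the dominating-set-style objective and the tiny graph below are illustrative stand-ins.

```python
# Cost-benefit greedy: repeatedly add the element with the best
# marginal-gain / cost ratio that still fits the remaining budget.
# Standard non-robust baseline, NOT the paper's robust algorithm.

def cost_benefit_greedy(elements, cost, f, budget):
    chosen, spent = set(), 0.0
    remaining = set(elements)
    while remaining:
        best, best_ratio = None, 0.0
        base = f(chosen)
        for e in remaining:
            if spent + cost[e] > budget:
                continue  # element no longer affordable
            ratio = (f(chosen | {e}) - base) / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:  # nothing affordable with positive gain
            break
        chosen.add(best)
        spent += cost[best]
        remaining.discard(best)
    return chosen

# Toy dominating-set-style objective: f(S) = vertices dominated by S.
neighbors = {
    "a": {"a", "b", "c"},
    "b": {"b", "d"},
    "c": {"c", "e"},
    "d": {"d"},
}
cost = {"a": 2.0, "b": 1.0, "c": 1.0, "d": 1.0}

def coverage(S):
    return len(set().union(*(neighbors[e] for e in S))) if S else 0

picked = cost_benefit_greedy(neighbors.keys(), cost, coverage, budget=3.0)
print(picked, coverage(picked))
```

A robustification would additionally have to guarantee value after an adversary deletes some chosen elements, which is exactly what plain greedy lacks: its value can be concentrated in a single deleted element.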
Submodular Maximization with Nearly Optimal Approximation, Adaptivity and Query Complexity
Submodular optimization generalizes many classic problems in combinatorial
optimization and has recently found a wide range of applications in machine
learning (e.g., feature engineering and active learning). For many large-scale
optimization problems, we are often concerned with the adaptivity complexity of
an algorithm, which quantifies the number of sequential rounds where
polynomially-many independent function evaluations can be executed in parallel.
While low adaptivity is ideal, it is not sufficient for a distributed algorithm
to be efficient, since in many practical applications of submodular
optimization the number of function evaluations becomes prohibitively
expensive. Motivated by these applications, we study the adaptivity and query
complexity of adaptive submodular optimization.
Our main result is a distributed algorithm for maximizing a monotone
submodular function subject to a cardinality constraint k that achieves a
(1 - 1/e - ε)-approximation in expectation. This algorithm runs in O(log n)
adaptive rounds and makes O(n) calls to the function evaluation
oracle in expectation. The approximation guarantee and query complexity are
optimal, and the adaptivity is nearly optimal. Moreover, the number of queries
is substantially less than in previous works. Last, we extend our results to
the submodular cover problem to demonstrate the generality of our algorithm and
techniques.

Comment: 30 pages, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on
Discrete Algorithms (SODA 2019).
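The adaptivity model above can be made concrete with a little bookkeeping: all function evaluations issued in one batch count as a single adaptive round, since they are independent and could run in parallel. The sketch below shows only this accounting with plain greedy, which needs one round per selected element; the paper's contribution is that logarithmically many rounds suffice for a near-optimal guarantee. The oracle wrapper and coverage objective are illustrative assumptions, not the paper's algorithm.

```python
# Adaptivity bookkeeping: one batch of independent queries = one round.

class BatchOracle:
    def __init__(self, f):
        self.f = f
        self.rounds = 0   # adaptive rounds used
        self.queries = 0  # total function evaluations

    def batch(self, sets):
        """Evaluate many sets 'in parallel': counts as ONE round."""
        self.rounds += 1
        self.queries += len(sets)
        return [self.f(s) for s in sets]

def batched_greedy(oracle, ground, k):
    """Plain greedy with batched marginal-gain queries: k rounds."""
    S = set()
    for _ in range(k):
        cand = [e for e in ground if e not in S]
        out = oracle.batch([S] + [S | {e} for e in cand])
        base, marg = out[0], [v - out[0] for v in out[1:]]
        i = max(range(len(cand)), key=marg.__getitem__)
        if marg[i] <= 0:  # no element improves the solution
            break
        S.add(cand[i])
    return S

# Toy coverage objective over four candidate sets.
cover = {1: {1, 2, 3}, 2: {3, 4}, 3: {5}, 4: {1, 2}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0

oracle = BatchOracle(f)
sol = batched_greedy(oracle, sorted(cover), k=2)
print(sol, f(sol), oracle.rounds, oracle.queries)
```

Here greedy uses k adaptive rounds; low-adaptivity algorithms instead add many elements per round after a single batch of evaluations, shrinking rounds to polylogarithmic while controlling the loss in the guarantee.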
Submodular Maximization with Matroid and Packing Constraints in Parallel
We consider the problem of maximizing the multilinear extension of a
submodular function subject to a single matroid constraint or multiple packing
constraints with a small number of adaptive rounds of evaluation queries.
We obtain the first algorithms with low adaptivity for submodular
maximization with a matroid constraint. Our algorithms achieve a
(1 - 1/e - ε) approximation for monotone functions and a (1/e - ε)
approximation for non-monotone functions, which nearly matches the best
guarantees known in the fully adaptive setting. The number of rounds of
adaptivity is polylogarithmic in n, which is an exponential speedup over
the existing algorithms.
We obtain the first parallel algorithm for non-monotone submodular
maximization subject to packing constraints. Our algorithm achieves a
(1/e - ε) approximation using polylogarithmically many parallel rounds,
which is again an exponential speedup in parallel time over the existing
algorithms. For monotone functions, we obtain a (1 - 1/e - ε) approximation
in polylogarithmically many parallel rounds. The number of parallel
rounds of our algorithm matches that of the state-of-the-art algorithm for
solving packing LPs with a linear objective.
Our results apply more generally to the problem of maximizing a diminishing
returns submodular (DR-submodular) function.
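The multilinear extension maximized above evaluates a set function on a random set that includes each element e independently with probability x_e. Algorithms in this line of work typically access it through Monte Carlo estimation; the sketch below compares exact enumeration with a sampled estimate on a toy coverage function. All names and the objective are illustrative assumptions.

```python
# Multilinear extension F(x) = E[f(R)], where R contains each element e
# independently with probability x_e. Exact enumeration is exponential;
# sampling is the standard workaround.
import itertools
import random

def multilinear_exact(f, x):
    """Sum f(S) * P[R = S] over all subsets S (exponential; toy sizes only)."""
    elems = list(x)
    total = 0.0
    for r in range(len(elems) + 1):
        for S in itertools.combinations(elems, r):
            p = 1.0
            for e in elems:
                p *= x[e] if e in S else 1 - x[e]
            total += p * f(set(S))
    return total

def multilinear_sampled(f, x, samples=20000, seed=0):
    """Monte Carlo estimate of F(x) from independent random sets."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        S = {e for e, p in x.items() if rng.random() < p}
        acc += f(S)
    return acc / samples

cover = {1: {1, 2}, 2: {2, 3}, 3: {4}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
x = {1: 0.5, 2: 0.5, 3: 0.5}

exact = multilinear_exact(f, x)
approx = multilinear_sampled(f, x)
print(exact, approx)
```

Continuous methods ascend F over the matroid or packing polytope and then round the fractional point; the sampling error above is why query complexity, and not just adaptivity, matters in these algorithms.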
Approximability of Monotone Submodular Function Maximization under Cardinality and Matroid Constraints in the Streaming Model
Maximizing a monotone submodular function under various constraints is a
classical and intensively studied problem. However, in the single-pass
streaming model, where the elements arrive one by one and an algorithm can
store only a small fraction of the input elements, there are still large
gaps in our knowledge, even though several approximation algorithms have
been proposed in the literature.
In this work, we present the first space lower bounds for single-pass
streaming algorithms for cardinality and matroid constraints whose
approximation ratios beat 1/2. Let n be the number of elements in the
stream. We prove that any (randomized) streaming algorithm for a
cardinality constraint whose approximation ratio exceeds a certain constant
greater than 1/2 requires space polynomial in n, where K is the size limit
of the output set. We also prove an analogous space lower bound for any
(randomized) streaming algorithm for a (partition) matroid constraint,
where k is the rank of the given matroid.
In addition, we give streaming algorithms for the setting where we only
have a weak oracle, with which the function can be evaluated only on
feasible sets. Specifically, we show weak-oracle streaming algorithms for
cardinality and matroid constraints whose space complexity is exponential
in K but independent of n. The approximation ratio of the former exactly
matches the known inapproximability result for a cardinality constraint in
the weak oracle model, and that of the latter almost matches our lower
bound for a matroid constraint, which almost settles the approximation
ratio for a matroid constraint that can be obtained by a streaming
algorithm whose space complexity is independent of n.
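A concrete example of the kind of single-pass algorithm these lower bounds constrain is the threshold rule of sieve-streaming (Badanidiyuru et al.): given a guess v of the optimum, an arriving element is kept when its marginal gain is at least (v/2 - f(S)) / (K - |S|), which with v = OPT yields a 1/2-approximation using O(K) memory. Note that f is only ever evaluated on feasible sets (size at most K), so the rule also fits the weak-oracle model above. The objective and the value of v below are toy assumptions, and this is a classical baseline rather than this paper's algorithm.

```python
# Single-pass threshold rule in the spirit of sieve-streaming.
# Keeps at most K elements; evaluates f only on feasible sets.

def sieve_pass(stream, f, K, v):
    S = set()
    for e in stream:
        if len(S) >= K:
            break
        gain = f(S | {e}) - f(S)
        # The threshold can go negative once f(S) >= v/2, after which
        # any element is accepted until the budget K is filled.
        if gain >= (v / 2 - f(S)) / (K - len(S)):
            S.add(e)
    return S

# Toy coverage objective; v is set to the true optimum for illustration.
cover = {1: {1}, 2: {1, 2, 3}, 3: {4, 5}, 4: {6}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0

sol = sieve_pass([1, 2, 3, 4], f, K=2, v=5)
print(sol, f(sol))
```

The lower bounds above say that breaking past this 1/2 barrier is not merely a matter of a cleverer threshold: any single-pass algorithm doing substantially better must use memory that grows with the length of the stream.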