2,152 research outputs found
Non-monotone Submodular Maximization with Nearly Optimal Adaptivity and Query Complexity
Submodular maximization is a general optimization problem with a wide range
of applications in machine learning (e.g., active learning, clustering, and
feature selection). In large-scale optimization, the parallel running time of
an algorithm is governed by its adaptivity, which measures the number of
sequential rounds needed if the algorithm can execute polynomially-many
independent oracle queries in parallel. While low adaptivity is ideal, it is
not sufficient for an algorithm to be efficient in practice---there are many
applications of distributed submodular optimization where the number of
function evaluations becomes prohibitively large. Motivated by these
applications, we study the adaptivity and query complexity of submodular
maximization. In this paper, we give the first constant-factor approximation
algorithm for maximizing a non-monotone submodular function subject to a
cardinality constraint that runs in a nearly optimal number of adaptive rounds
and makes a nearly optimal number of oracle queries in expectation. In our
empirical study, we use
three real-world applications to compare our algorithm with several benchmarks
for non-monotone submodular maximization. The results demonstrate that our
algorithm finds competitive solutions using significantly fewer rounds and
queries.
Comment: 12 pages, 8 figures
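To make the notion of adaptivity above concrete, here is a minimal Python sketch, an illustration rather than the paper's algorithm: a standard sequential greedy needs one adaptive round per element it selects, while a single low-adaptivity round batches all marginal-gain queries, which are mutually independent and could be issued in parallel, and then adds a random element whose gain clears a threshold. The coverage objective f, the toy universe, and the threshold value are assumptions made for this example.

```python
import random

def f(S, sets):
    """Toy submodular objective: how many ground elements the chosen sets cover."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def sequential_greedy(sets, k):
    """k adaptive rounds: each round's queries depend on the previous choice."""
    S = set()
    for _ in range(k):
        gains = {i: f(S | {i}, sets) - f(S, sets) for i in sets if i not in S}
        S.add(max(gains, key=gains.get))
    return S

def one_low_adaptivity_round(sets, S, threshold):
    """One adaptive round: the marginal-gain queries below are independent of one
    another, so they could be evaluated in parallel; a random element whose gain
    clears the threshold is then added (the sampling idea behind low adaptivity)."""
    gains = {i: f(S | {i}, sets) - f(S, sets) for i in sets if i not in S}
    candidates = [i for i, gain in gains.items() if gain >= threshold]
    return S | {random.choice(candidates)} if candidates else S

if __name__ == "__main__":
    random.seed(0)
    universe = {i: set(random.sample(range(50), 8)) for i in range(20)}
    print(f(sequential_greedy(universe, 5), universe))                # 5 adaptive rounds
    print(f(one_low_adaptivity_round(universe, set(), 6), universe))  # 1 adaptive round
```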
Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
We investigate two new optimization problems -- minimizing a submodular
function subject to a submodular lower bound constraint (submodular cover) and
maximizing a submodular function subject to a submodular upper bound constraint
(submodular knapsack). We are motivated by a number of real-world applications
in machine learning including sensor placement and data subset selection, which
require maximizing a certain submodular function (like coverage or diversity)
while simultaneously minimizing another (like cooperative cost). These problems
are often posed as minimizing the difference between submodular functions [14,
35], which is inapproximable in the worst case. We show, however, that by
phrasing these problems as constrained optimization, which is more natural for
many applications, we achieve a number of bounded approximation guarantees. We
also show that these two problems are closely related, and that an approximation
algorithm for one can be used to obtain an approximation guarantee for the
other. We provide hardness results for both problems, showing that our
approximation factors are tight up to log-factors. Finally, we empirically
demonstrate the performance and scalability of our algorithms.
Comment: 23 pages. A short version of this appeared in Advances in NIPS-201
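As a concrete illustration of the submodular cover problem described above, minimizing one function subject to a lower bound on another submodular function, the sketch below runs a simple cost-benefit greedy on a toy sensor-placement instance. The ratio-greedy rule, the coverage function g, and the modular cost f are assumptions for exposition; they are not the algorithms or guarantees from the paper.

```python
def greedy_submodular_cover(ground, f, g, target):
    """Repeatedly add the element with the best coverage-gain-to-cost-gain ratio
    until the submodular lower-bound constraint g(S) >= target is satisfied."""
    S = set()
    while g(S) < target:
        best, best_ratio = None, 0.0
        for e in ground - S:
            cov_gain = g(S | {e}) - g(S)
            cost_gain = f(S | {e}) - f(S)
            ratio = cov_gain / max(cost_gain, 1e-9)
            if cov_gain > 0 and ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:          # the coverage constraint cannot be met
            break
        S.add(best)
    return S

if __name__ == "__main__":
    # Toy instance: coverage g counts the regions seen by the chosen sensors,
    # cost f is modular (sum of per-sensor costs).
    regions = {1: {0, 1}, 2: {1, 2, 3}, 3: {3, 4}, 4: {0, 4}}
    costs = {1: 1.0, 2: 3.0, 3: 1.5, 4: 1.0}
    g = lambda S: len(set().union(*(regions[e] for e in S))) if S else 0
    f = lambda S: sum(costs[e] for e in S)
    print(greedy_submodular_cover(set(regions), f, g, target=5))
```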
Budget-Feasible Mechanism Design for Non-Monotone Submodular Objectives: Offline and Online
The framework of budget-feasible mechanism design studies procurement
auctions where the auctioneer (buyer) aims to maximize his valuation function
subject to a hard budget constraint. We study the problem of designing truthful
mechanisms that have good approximation guarantees and never pay the
participating agents (sellers) more than the budget. We focus on the case of
general (non-monotone) submodular valuation functions and derive the first
truthful, budget-feasible approximation mechanisms that run in
polynomial time in the value query model, for both offline and online auctions.
Prior to our work, the only approximation mechanism known for
non-monotone submodular objectives required an exponential number of value
queries.
At the heart of our approach lies a novel greedy algorithm for non-monotone
submodular maximization under a knapsack constraint. Our algorithm builds two
candidate solutions simultaneously (to achieve a good approximation), yet
ensures that agents cannot jump from one solution to the other (to implicitly
enforce truthfulness). Ours is the first mechanism for the problem
where---crucially---the agents are not ordered with respect to their marginal
value per cost. This allows us to appropriately adapt these ideas to the online
setting as well.
To further illustrate the applicability of our approach, we also consider the
case where additional feasibility constraints are present. We obtain
approximation mechanisms for both monotone and non-monotone submodular
objectives, when the feasible solutions are independent sets of a p-system.
With the exception of additive valuation functions, no mechanisms were known
for this setting prior to our work. Finally, we provide lower bounds suggesting
that, when one cares about non-trivial approximation guarantees in polynomial
time, our results are asymptotically best possible.
Comment: Accepted to EC 201
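The sketch below is a schematic rendering of the "two candidate solutions" idea mentioned in the abstract, applied to submodular maximization under a knapsack constraint: items are scanned in their given order (not sorted by marginal value per cost), each item joins at most one of two disjoint candidates, and the better candidate is returned. The payment rule and the truthfulness machinery of the actual mechanism are omitted, and the toy non-monotone objective is an assumption.

```python
def two_candidate_greedy(items, costs, f, budget):
    """Schematic one-pass greedy for submodular maximization under a knapsack
    constraint: build two disjoint candidate solutions and keep the better one.
    Items are scanned in the order given, not sorted by marginal value per cost."""
    candidates = [set(), set()]
    spent = [0.0, 0.0]
    for e in items:
        for j in (0, 1):                      # try the first candidate, then the second
            S = candidates[j]
            if spent[j] + costs[e] <= budget and f(S | {e}) - f(S) > 0:
                S.add(e)
                spent[j] += costs[e]
                break                         # each item joins at most one candidate
    return max(candidates, key=f)

if __name__ == "__main__":
    # Toy non-monotone submodular objective: coverage minus solution size.
    cover = {"a": {1, 2}, "b": {2, 3, 4}, "c": {4}, "d": {5, 6}}
    f = lambda S: (len(set().union(*(cover[e] for e in S))) if S else 0) - len(S)
    costs = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
    print(two_candidate_greedy(list(cover), costs, f, budget=5.0))
```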
Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications
We extend the work of Narasimhan and Bilmes [30] on minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much less than that of [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial-time computable lower bound on the minima. Finally, we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features.
Comment: 17 pages, 8 figures. A shorter version of this appeared in Proc.
Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 201
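As a minimal illustration of the difference-of-submodular objective discussed in this abstract, the sketch below minimizes d(S) = g(S) - h(S) by single-element local search, which, like the procedures in the paper, only ever decreases the objective. It is a baseline for exposition, not the algorithms analyzed in the paper, and the toy feature-selection instance (modular cost g, coverage h) is an assumption.

```python
def local_search_ds(ground, g, h, S=None):
    """Single-element local search for minimizing d(S) = g(S) - h(S), where g and
    h are submodular; every accepted flip strictly reduces the objective."""
    d = lambda T: g(T) - h(T)
    S = set() if S is None else set(S)
    improved = True
    while improved:
        improved = False
        for e in ground:
            T = S - {e} if e in S else S | {e}   # flip one element in or out
            if d(T) < d(S):
                S, improved = T, True
    return S

if __name__ == "__main__":
    # Toy feature-selection flavour: modular feature cost g minus coverage h.
    feats = {"x": {1, 2}, "y": {2, 3}, "z": {4}}
    cost = {"x": 1.0, "y": 2.5, "z": 0.5}
    g = lambda S: sum(cost[e] for e in S)
    h = lambda S: len(set().union(*(feats[e] for e in S))) if S else 0
    print(local_search_ds(set(feats), g, h))     # picks cheap, high-coverage features
```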