Lazier Than Lazy Greedy
Is it possible to maximize a monotone submodular function faster than the
widely used lazy greedy algorithm (also known as accelerated greedy), both in
theory and practice? In this paper, we develop the first linear-time algorithm
for maximizing a general monotone submodular function subject to a cardinality
constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, achieves a (1 − 1/e − ε) approximation guarantee, in expectation, to the optimum solution, in time linear in the size of the data and independent of the
cardinality constraint. We empirically demonstrate the effectiveness of our
algorithm on submodular functions arising in data summarization, including
training large-scale kernel methods, exemplar-based clustering, and sensor
placement. We observe that STOCHASTIC-GREEDY practically achieves the same utility value as lazy greedy but runs much faster. More surprisingly, we observe that in many practical scenarios STOCHASTIC-GREEDY does not evaluate the whole set of data points even once and still achieves results indistinguishable from lazy greedy.
Comment: In Proc. Conference on Artificial Intelligence (AAAI), 2015
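The abstract's core idea is to replace lazy greedy's priority queue with a single uniform random sample per round. The following is a minimal Python sketch of that sampling scheme, not the authors' code; the oracle signature f(S) -> float, the eps default, and the variable names are illustrative assumptions:

```python
import math
import random

def stochastic_greedy(f, ground_set, k, eps=0.1):
    """Sketch of the STOCHASTIC-GREEDY idea for maximizing a monotone
    submodular set function f under a cardinality constraint k
    (f is an assumed oracle mapping a set to a float)."""
    n = len(ground_set)
    # Per the abstract's linear-time claim, each round samples about
    # (n/k) * log(1/eps) elements, for O(n log(1/eps)) evaluations total.
    sample_size = min(n, max(1, math.ceil((n / k) * math.log(1.0 / eps))))
    S, remaining = set(), set(ground_set)
    value = f(S)
    for _ in range(min(k, n)):
        sample = random.sample(list(remaining), min(sample_size, len(remaining)))
        # Add the sampled element with the largest marginal gain.
        best = max(sample, key=lambda e: f(S | {e}) - value)
        S.add(best)
        remaining.remove(best)
        value = f(S)
    return S
```

Because each round touches only the sample rather than the full ground set, the total number of function evaluations is independent of k, which matches the observation that many data points may never be evaluated at all.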
Algorithms for Approximate Minimization of the Difference Between Submodular Functions, with Applications
We extend the work of Narasimhan and Bilmes [30] for minimizing set functions
representable as a difference between submodular functions. Similar to [30],
our new algorithms are guaranteed to monotonically reduce the objective
function at every step. We empirically and theoretically show that the
per-iteration cost of our algorithms is much less than that of [30], and our algorithms
can be used to efficiently minimize a difference between submodular functions
under various combinatorial constraints, a problem not previously addressed. We
provide computational bounds and a hardness result on the multiplicative
inapproximability of minimizing the difference between submodular functions. We
show, however, that it is possible to give worst-case additive bounds by
providing a polynomial time computable lower-bound on the minima. Finally we
show how a number of machine learning problems can be modeled as minimizing the
difference between submodular functions. We experimentally show the validity of
our algorithms by testing them on the problem of feature selection with
submodular cost features.
Comment: 17 pages, 8 figures. A shorter version of this appeared in Proc. Uncertainty in Artificial Intelligence (UAI), Catalina Islands, 2012
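As an illustration of the monotone-descent idea in this abstract, here is a minimal Python sketch of a modular-modular style iteration: each step replaces f by a modular upper bound and g by a modular lower bound, both tight at the current set, and exactly minimizes the resulting modular surrogate, so the true objective f − g can never increase. The oracle interface, the particular Nemhauser-style upper bound, and all names are assumptions for illustration, not the paper's exact procedures (which also handle combinatorial constraints):

```python
def modular_upper_bound(f, X, V):
    # A standard modular upper bound of a submodular f, tight at X:
    # f(Y) <= const + sum of w[j] over j in Y, with equality at Y = X.
    wX = {j: f(X) - f(X - {j}) for j in X}
    const = f(X) - sum(wX.values())
    w = {j: wX[j] if j in X else f({j}) - f(set()) for j in V}
    return const, w

def modular_lower_bound(g, X, V):
    # Permutation-based modular lower bound of a submodular g, tight at X:
    # order the chain so that the elements of X come first.
    order = list(X) + [j for j in V if j not in X]
    w, S, prev = {}, set(), g(set())
    for j in order:
        S = S | {j}
        cur = g(S)
        w[j] = cur - prev
        prev = cur
    return g(set()), w

def minimize_difference(f, g, V, max_iters=100):
    # Repeatedly minimize a modular surrogate of f - g that is tight
    # at the current set X; the true objective decreases monotonically.
    X = set()
    best = f(X) - g(X)
    for _ in range(max_iters):
        _, wf = modular_upper_bound(f, X, V)
        _, wg = modular_lower_bound(g, X, V)
        # The surrogate is modular, so its exact minimizer simply keeps
        # every element whose net weight is negative.
        Y = {j for j in V if wf[j] - wg[j] < 0}
        val = f(Y) - g(Y)
        if val >= best:  # no strict improvement: stop at a local optimum
            break
        X, best = Y, val
    return X, best
```

The descent guarantee follows from the sandwich f(Y) − g(Y) <= surrogate(Y) <= surrogate(X) = f(X) − g(X), since Y exactly minimizes the surrogate and both bounds are tight at X.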