Lazier Than Lazy Greedy
Is it possible to maximize a monotone submodular function faster than the
widely used lazy greedy algorithm (also known as accelerated greedy), both in
theory and practice? In this paper, we develop the first linear-time algorithm
for maximizing a general monotone submodular function subject to a cardinality
constraint. We show that our randomized algorithm, STOCHASTIC-GREEDY, can
achieve a (1 - 1/e - ε) approximation guarantee, in expectation, to the
optimum solution in time linear in the size of the data and independent of the
cardinality constraint. We empirically demonstrate the effectiveness of our
algorithm on submodular functions arising in data summarization, including
training large-scale kernel methods, exemplar-based clustering, and sensor
placement. We observe that STOCHASTIC-GREEDY practically achieves the same
utility value as lazy greedy but runs much faster. More surprisingly, we
observe that in many practical scenarios STOCHASTIC-GREEDY does not even
evaluate all of the data points once and still achieves results
indistinguishable from those of lazy greedy.
Comment: In Proc. Conference on Artificial Intelligence (AAAI), 201
Dynamic Resource Allocation in Conservation Planning
Consider the problem of protecting endangered species by
selecting patches of land to be used for conservation purposes.
Typically, the availability of patches changes over time, and
recommendations must be made dynamically. This is a challenging
prototypical example of a sequential optimization
problem under uncertainty in computational sustainability. Existing
techniques do not scale to problems of realistic size. In
this paper, we develop an efficient algorithm for adaptively
making recommendations for dynamic conservation planning,
and prove that it obtains near-optimal performance. We further
evaluate our approach on a detailed reserve design case study
of conservation planning for three rare species in the Pacific
Northwest of the United States.
Constrained Submodular Maximization: Beyond 1/e
In this work, we present a new algorithm for maximizing a non-monotone
submodular function subject to a general constraint. Our algorithm finds an
approximate fractional solution for maximizing the multilinear extension of the
function over a down-closed polytope. The approximation guarantee is 0.372 and
it is the first improvement over the 1/e approximation achieved by the unified
Continuous Greedy algorithm [Feldman et al., FOCS 2011].
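The multilinear extension that the abstract above maximizes is F(x) = E[f(R(x))], where R(x) contains each element i independently with probability x_i. As a small illustration (not the paper's algorithm), F can be estimated by Monte Carlo sampling:

```python
import random

def multilinear_extension(f, x, num_samples=2000, rng=None):
    """Monte Carlo estimate of the multilinear extension
    F(x) = E[f(R(x))], where the random set R(x) includes element i
    independently with probability x[i]. Illustrative sketch only."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(num_samples):
        R = [i for i, p in enumerate(x) if rng.random() < p]
        total += f(R)
    return total / num_samples
```

Continuous algorithms such as Continuous Greedy move x inside the down-closed polytope in the direction that increases such an estimate, then round the fractional solution back to a set.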
Random Feature-based Online Multi-kernel Learning in Environments with Unknown Dynamics
Kernel-based methods exhibit well-documented performance in various nonlinear
learning tasks. Most of them rely on a preselected kernel, whose prudent choice
presumes task-specific prior information. Especially when the latter is not
available, multi-kernel learning has gained popularity thanks to its
flexibility in choosing kernels from a prescribed kernel dictionary. Leveraging
the random feature approximation and its recent orthogonality-promoting
variant, the present contribution develops a scalable multi-kernel learning
scheme (termed Raker) to obtain the sought nonlinear learning function "on the
fly," first for static environments. To further boost performance in dynamic
environments, an adaptive multi-kernel learning scheme (termed AdaRaker) is
developed. AdaRaker accounts not only for data-driven learning of kernel
combination, but also for the unknown dynamics. Performance is analyzed in
terms of both static and dynamic regrets. AdaRaker is uniquely capable of
tracking nonlinear learning functions in environments with unknown dynamics,
and with analytic performance guarantees. Tests with synthetic and real
datasets are carried out to showcase the effectiveness of the novel algorithms.
Comment: 36 pages
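The random feature approximation underlying the abstract above replaces an expensive kernel expansion with an explicit finite-dimensional map, so the function can be updated online with plain gradient steps. The following is a minimal single-kernel sketch of that idea under assumed names and parameters; Raker additionally learns a weighted combination over a dictionary of such kernels.

```python
import numpy as np

def rff_map(X, W, b):
    """Random Fourier feature map approximating a Gaussian kernel:
    z(x) = sqrt(2/D) * cos(W x + b)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

def online_rf_regression(X, y, sigma=1.0, D=100, lr=0.1, seed=0):
    """Illustrative single-kernel 'on the fly' learner with random
    features: one online gradient-descent pass over the data on squared
    loss. A sketch, not the paper's Raker/AdaRaker algorithms."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(D, d))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=D)           # random phases
    theta = np.zeros(D)
    for x_t, y_t in zip(X, y):
        z_t = rff_map(x_t[None, :], W, b)[0]
        err = z_t @ theta - y_t
        theta -= lr * err * z_t  # SGD step on squared loss
    return W, b, theta
```

Each update costs O(D) regardless of how many samples have been seen, which is what makes the scheme scalable in streaming settings.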
Submodular Optimization with Contention Resolution Extensions
This paper considers optimizing a submodular function subject to a set of downward closed constraints. Previous literature on this problem has often constructed solutions by (1) discovering a fractional solution to the multi-linear extension and (2) rounding this solution to an integral solution via a contention resolution scheme. This line of research has improved results by either optimizing (1) or (2).
Diverging from previous work, this paper introduces a principled method called contention resolution extensions of submodular functions. A contention resolution extension combines the contention resolution scheme into a continuous extension of a discrete submodular function. The contention resolution extension can be defined from effectively any contention resolution scheme. In the case where there is a loss in both (1) and (2), optimizing them together lets the losses be combined, resulting in an overall improvement. This paper showcases the concept by demonstrating that for the problem of maximizing a non-monotone submodular function subject to the elements forming an independent set in an interval graph, the algorithm gives a 0.188-approximation. This improves upon the best known 1/(2e) ≈ 0.1839 approximation.
Adversarially Robust Submodular Maximization under Knapsack Constraints
We propose the first adversarially robust algorithm for monotone submodular
maximization under single and multiple knapsack constraints with scalable
implementations in distributed and streaming settings. For a single knapsack
constraint, our algorithm outputs a robust summary of almost optimal (up to
polylogarithmic factors) size, from which a constant-factor approximation to
the optimal solution can be constructed. For multiple knapsack constraints, our
approximation is within a constant-factor of the best known non-robust
solution.
We evaluate the performance of our algorithms by comparison to natural
robustifications of existing non-robust algorithms under two objectives: 1)
dominating set for large social network graphs from Facebook and Twitter
collected by the Stanford Network Analysis Project (SNAP), 2) movie
recommendations on a dataset from MovieLens. Experimental results show that our
algorithms give the best objective for a majority of the inputs and show strong
performance even compared to offline algorithms that are given the set of
removals in advance.
Comment: To appear in KDD 201
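A standard non-robust baseline for the single-knapsack setting above, of the kind the paper's robust algorithm is compared against, is cost-benefit greedy: repeatedly add the affordable element with the best marginal-gain-per-cost ratio. A minimal sketch (not the paper's algorithm):

```python
def knapsack_greedy(f, costs, budget):
    """Cost-benefit greedy for monotone submodular maximization under a
    single knapsack constraint: while the budget allows, add the element
    maximizing (f(S+e) - f(S)) / cost(e) among affordable elements.
    A classic non-robust baseline, shown here for illustration."""
    S = []
    spent = 0.0
    remaining = set(costs)
    while True:
        best, best_ratio = None, 0.0
        for e in sorted(remaining):
            if spent + costs[e] > budget:
                continue  # element no longer affordable
            gain = f(S + [e]) - f(S)
            if gain / costs[e] > best_ratio:
                best, best_ratio = e, gain / costs[e]
        if best is None:
            return S
        S.append(best)
        spent += costs[best]
        remaining.remove(best)
```

Robust variants must instead return a summary whose value degrades gracefully when an adversary deletes elements after the summary is built, which this baseline does not guarantee.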