Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
We investigate two new optimization problems -- minimizing a submodular
function subject to a submodular lower bound constraint (submodular cover) and
maximizing a submodular function subject to a submodular upper bound constraint
(submodular knapsack). We are motivated by a number of real-world applications
in machine learning including sensor placement and data subset selection, which
require maximizing a certain submodular function (like coverage or diversity)
while simultaneously minimizing another (like cooperative cost). These problems
are often posed as minimizing the difference between submodular functions [14,
35], which is inapproximable in the worst case. We show, however, that by
phrasing these problems as constrained optimization, which is more natural for
many applications, we achieve a number of bounded approximation guarantees. We
also show that both these problems are closely related and an approximation
algorithm solving one can be used to obtain an approximation guarantee for the
other. We provide hardness results for both problems thus showing that our
approximation factors are tight up to log-factors. Finally, we empirically
demonstrate the performance and good scalability properties of our algorithms.

Comment: 23 pages. A short version of this appeared in Advances of NIPS-201
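To make the cover constraint concrete, here is a minimal sketch of the classical cost-benefit greedy heuristic for submodular cover: repeatedly pick the element with the best marginal coverage gain per unit of marginal cost. The names f, g, ground_set, and coverage_target are illustrative assumptions, and this generic baseline is not the specific algorithms analyzed in the paper.

```python
# Hypothetical sketch: cost-benefit greedy for the submodular cover problem
#   minimize f(X)  subject to  g(X) >= coverage_target,
# assuming f and g are monotone submodular, given as black-box set functions.
# This is the classical Wolsey-style heuristic, not the paper's exact method.

def greedy_submodular_cover(f, g, ground_set, coverage_target):
    """Greedily add the element with the best coverage gain per unit cost."""
    selected = set()
    while g(selected) < coverage_target:
        best, best_ratio = None, 0.0
        for v in ground_set - selected:
            gain = g(selected | {v}) - g(selected)   # marginal coverage
            cost = f(selected | {v}) - f(selected)   # marginal cost
            if gain <= 0:
                continue
            ratio = gain / max(cost, 1e-12)          # cost-benefit ratio
            if best is None or ratio > best_ratio:
                best, best_ratio = v, ratio
        if best is None:                             # no element helps: stop
            break
        selected.add(best)
    return selected


# Toy usage: f = cardinality cost, g = coverage of a small universe.
if __name__ == "__main__":
    sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d", "e"}}
    f = lambda X: float(len(X))
    g = lambda X: float(len(set().union(*(sets[i] for i in X)) if X else set()))
    print(greedy_submodular_cover(f, g, set(sets), coverage_target=5))
```

The ratio rule is what makes the heuristic sensitive to both objectives at once; with a modular cost f it reduces to the familiar greedy for weighted set cover.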
The Power of Randomization: Distributed Submodular Maximization on Massive Datasets
A wide variety of problems in machine learning, including exemplar
clustering, document summarization, and sensor placement, can be cast as
constrained submodular maximization problems. Unfortunately, the resulting
submodular optimization problems are often too large to be solved on a single
machine. We develop a simple distributed algorithm that is embarrassingly
parallel and achieves provable, constant-factor, worst-case approximation
guarantees. In our experiments, we demonstrate its efficiency in large problems
with different kinds of constraints, with objective values consistently close to
what is achievable in the centralized setting.
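As a rough illustration of the random-partition idea behind such distributed algorithms, the following is a sketch of a two-round scheme for cardinality-constrained monotone submodular maximization: partition the data uniformly at random, run greedy on each part, then run greedy once more over the union of the partial solutions. All names and structure here are assumptions for exposition, not the paper's implementation.

```python
# Illustrative two-round distributed greedy for cardinality-constrained
# submodular maximization. Each machine's run is independent, so round 1 is
# embarrassingly parallel; here the "machines" are simulated sequentially.

import random

def greedy(f, candidates, k):
    """Standard greedy: pick up to k elements with the largest marginal gain."""
    S, pool = [], set(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda v: f(S + [v]) - f(S))
        S.append(best)
        pool.remove(best)
    return S

def distributed_greedy(f, ground_set, k, num_machines, seed=0):
    rng = random.Random(seed)
    parts = [[] for _ in range(num_machines)]
    for v in ground_set:                          # round 1: random partition
        parts[rng.randrange(num_machines)].append(v)
    partials = [greedy(f, p, k) for p in parts]   # parallel in practice
    merged = [v for p in partials for v in p]
    final = greedy(f, merged, k)                  # round 2: greedy on the union
    best_partial = max(partials, key=f)
    return max(final, best_partial, key=f)        # keep the better solution

# Toy usage: maximize coverage of random neighborhoods.
if __name__ == "__main__":
    rng = random.Random(1)
    nbrs = {v: frozenset(rng.sample(range(50), 5)) for v in range(200)}
    f = lambda S: len(set().union(*(nbrs[v] for v in S))) if S else 0
    print(distributed_greedy(f, list(nbrs), k=10, num_machines=4))
```

The random partition is the key design choice: it guarantees (in expectation) that no single machine sees an adversarially bad slice of the data, which is what enables constant-factor guarantees for schemes of this flavor.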
On sparse representations of linear operators and the approximation of matrix products
Thus far, sparse representations have been exploited largely in the context
of robustly estimating functions in a noisy environment from a few
measurements. In this context, the existence of a basis in which the signal
class under consideration is sparse is used to decrease the number of necessary
measurements while controlling the approximation error. In this paper, we
instead focus on applications in numerical analysis, by way of sparse
representations of linear operators, with the objective of minimizing the number
of operations needed to perform basic computations (here, multiplication) on
these operators. We represent a linear operator by a sum of rank-one operators,
and show how a sparse representation that guarantees a low approximation error
for the product can be obtained from analyzing an induced quadratic form. This
construction in turn yields new algorithms for computing approximate matrix
products.

Comment: 6 pages, 3 figures; presented at the 42nd Annual Conference on
Information Sciences and Systems (CISS 2008).
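The sum-of-rank-one viewpoint can be illustrated with the classical norm-based sampling estimator for a matrix product, sketched below. Note that the paper's actual selection rule comes from analyzing an induced quadratic form, which this toy example does not reproduce.

```python
# A minimal sketch of approximate matrix multiplication via the sum-of-rank-one
# viewpoint: A @ B = sum_k outer(A[:, k], B[k, :]). Keeping only a few sampled
# rank-one terms, reweighted to stay unbiased, gives a sparse approximation.
# This is the classical norm-based sampling estimator, shown only for intuition.

import numpy as np

def sampled_matrix_product(A, B, num_terms, seed=None):
    """Unbiased estimate of A @ B from num_terms sampled rank-one terms."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Sample index k with probability proportional to ||A[:, k]|| * ||B[k, :]||.
    weights = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    probs = weights / weights.sum()
    idx = rng.choice(n, size=num_terms, p=probs)
    est = np.zeros((A.shape[0], B.shape[1]))
    for k in idx:
        est += np.outer(A[:, k], B[k, :]) / (num_terms * probs[k])
    return est

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((60, 300))
    B = rng.standard_normal((300, 40))
    approx = sampled_matrix_product(A, B, num_terms=100, seed=0)
    exact = A @ B
    rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(f"relative Frobenius error: {rel_err:.3f}")
```

Each sampled term is a rank-one operator, so the estimate is itself a sparse (low-term) representation of the product; more sophisticated selection rules, such as the quadratic-form analysis described in the abstract, aim to pick the terms that matter most for the error.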