Adversarially Robust Submodular Maximization under Knapsack Constraints
We propose the first adversarially robust algorithm for monotone submodular
maximization under single and multiple knapsack constraints with scalable
implementations in distributed and streaming settings. For a single knapsack
constraint, our algorithm outputs a robust summary of almost optimal (up to
polylogarithmic factors) size, from which a constant-factor approximation to
the optimal solution can be constructed. For multiple knapsack constraints, our
approximation is within a constant-factor of the best known non-robust
solution.
We evaluate the performance of our algorithms by comparison to natural
robustifications of existing non-robust algorithms under two objectives: 1)
dominating set for large social network graphs from Facebook and Twitter
collected by the Stanford Network Analysis Project (SNAP), 2) movie
recommendations on a dataset from MovieLens. Experimental results show that our
algorithms give the best objective for a majority of the inputs and show strong
performance even compared to offline algorithms that are given the set of
removals in advance.
Comment: To appear in KDD 201
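The non-robust baseline this paper hardens is the classic cost-ratio greedy for monotone submodular maximization under a single knapsack. A minimal sketch, using set coverage as the submodular objective (the instance data and function names are illustrative, not the paper's algorithm):

```python
# Illustrative baseline (not the paper's robust algorithm): cost-ratio
# greedy for monotone submodular maximization under a single knapsack,
# with set coverage as the submodular objective.

def coverage(sets, solution):
    """Submodular objective: number of ground-set elements covered."""
    covered = set()
    for i in solution:
        covered |= sets[i]
    return len(covered)

def greedy_knapsack(sets, costs, budget):
    """Repeatedly add the affordable item with the best marginal
    gain-to-cost ratio until no affordable item improves the solution."""
    solution, spent = [], 0.0
    candidates = set(sets)
    while candidates:
        base = coverage(sets, solution)
        best, best_ratio = None, 0.0
        for i in sorted(candidates):  # sorted only for determinism
            if spent + costs[i] > budget:
                continue
            gain = coverage(sets, solution + [i]) - base
            if gain / costs[i] > best_ratio:
                best, best_ratio = i, gain / costs[i]
        if best is None:
            break
        solution.append(best)
        spent += costs[best]
        candidates.remove(best)
    return solution

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {5}}
costs = {"a": 2.0, "b": 1.0, "c": 1.0}
print(greedy_knapsack(sets, costs, budget=3.0))  # -> ['b', 'a']
```

On its own, the ratio greedy can be arbitrarily bad; in the literature it is combined with the best single feasible item to obtain a constant factor. The robust algorithms of the paper additionally build a summary that survives adversarial removals.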
Online Contention Resolution Schemes
We introduce a new rounding technique designed for online optimization
problems, which is related to contention resolution schemes, a technique
initially introduced in the context of submodular function maximization. Our
rounding technique, which we call online contention resolution schemes (OCRSs),
is applicable to many online selection problems, including Bayesian online
selection, oblivious posted pricing mechanisms, and stochastic probing models.
It allows for handling a wide set of constraints, and shares many strong
properties of offline contention resolution schemes. In particular, OCRSs for
different constraint families can be combined to obtain an OCRS for their
intersection. Moreover, we can approximately maximize submodular functions in
the online settings we consider.
We thus get a broadly applicable framework for several online selection
problems, which improves on previous approaches in terms of the types of
constraints that can be handled, the objective functions that can be dealt
with, and the assumptions on the strength of the adversary. Furthermore, we
resolve two open problems from the literature; namely, we present the first
constant-factor constrained oblivious posted price mechanism for matroid
constraints, and the first constant-factor algorithm for weighted stochastic
probing with deadlines.
Comment: 33 pages. To appear in SODA 201
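The simplest constraint family gives a feel for what an OCRS guarantees. For a single-item (rank-1) constraint, an adaptive scheme that accepts an arriving active element $i$ with probability $1/(2 - \sum_{j<i} x_j)$ selects every element with probability exactly $x_i/2$, i.e., it is $1/2$-selectable. The simulation below is purely illustrative (independent activations assumed; this is not a construction from the paper):

```python
import random

def single_item_ocrs(x, active, rng):
    """Adaptive OCRS for the rank-1 (single item) constraint: while the
    slot is free, accept an active element i with probability
    1 / (2 - sum of x_j over earlier j).  Each element is then selected
    with probability exactly x_i / 2."""
    prefix = 0.0
    for i, xi in enumerate(x):
        if active[i] and rng.random() < 1.0 / (2.0 - prefix):
            return i  # slot filled, stop
        prefix += xi
    return None

rng = random.Random(0)
x = [0.3, 0.5, 0.2]  # fractional point with sum <= 1
trials = 200_000
hits = [0, 0, 0]
for _ in range(trials):
    active = [rng.random() < xi for xi in x]  # i active w.p. x_i
    sel = single_item_ocrs(x, active, rng)
    if sel is not None:
        hits[sel] += 1
freq = [h / trials for h in hits]
print(freq)  # each entry close to x_i / 2, i.e. [0.15, 0.25, 0.10]
```

A short sanity check: element 0 is accepted with probability $x_0 \cdot 1/2 = 0.15$; element 1 needs the slot free (probability $1 - x_0/2 = 0.85$) and acceptance $0.5 \cdot 1/(2-0.3) \approx 0.294$, giving $0.25 = x_1/2$, and similarly for element 2.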
Interactive Submodular Set Cover
We introduce a natural generalization of submodular set cover and exact
active learning with a finite hypothesis class (query learning). We call this
new problem interactive submodular set cover. Applications include advertising
in social networks with hidden information. We give an approximation guarantee
for a novel greedy algorithm and give a hardness of approximation result which
matches up to constant factors. We also discuss negative results for simpler
approaches and present encouraging early experimental results.
Comment: 15 pages, 1 figure
Balancing Utility and Fairness in Submodular Maximization (Technical Report)
Submodular function maximization is central in numerous data science
applications, including data summarization, influence maximization, and
recommendation. In many of these problems, our goal is to find a solution that
maximizes the \emph{average} of the utilities for all users, each measured by a
monotone submodular function. When the population of users is composed of
several demographic groups, another critical problem is whether the utility is
fairly distributed across groups. In the context of submodular optimization, we
seek to improve the welfare of the \emph{least well-off} group, i.e., to
maximize the minimum utility for any group, to ensure fairness. Although the
\emph{utility} and \emph{fairness} objectives are both desirable, they might
contradict each other, and, to our knowledge, little attention has been paid to
optimizing them jointly. In this paper, we propose a novel problem called
\emph{Bicriteria Submodular Maximization} (BSM) to strike a balance between
utility and fairness. Specifically, it requires finding a fixed-size solution
to maximize the utility function, subject to the value of the fairness function
not being below a threshold. Since BSM is inapproximable within any constant
factor in general, we propose efficient data-dependent approximation algorithms
for BSM by converting it into other submodular optimization problems and
utilizing existing algorithms for the converted problems to obtain solutions to
BSM. Using real-world and synthetic datasets, we showcase applications of our
framework in three submodular maximization problems, namely maximum coverage,
influence maximization, and facility location.
Comment: 13 pages, 7 figures, under review
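The BSM formulation in the abstract, maximizing average utility over a fixed-size solution subject to the least well-off group's utility clearing a threshold, can be made concrete on a toy maximum-coverage instance. The brute-force search below is purely illustrative (the paper's contribution is precisely avoiding such exhaustive search with data-dependent approximations); all instance data is made up:

```python
from itertools import combinations

# Toy BSM instance on maximum coverage with two demographic groups.
# Brute force over all size-k solutions is fine only at this scale.

def coverage(sets, sol, universe):
    """Utility of `sol` for a group caring about `universe`."""
    covered = set()
    for i in sol:
        covered |= sets[i]
    return len(covered & universe)

sets = {"a": {1, 2, 3}, "b": {4, 5}, "c": {1, 4}, "d": {6}}
group1, group2 = {1, 2, 3}, {4, 5, 6}  # items each group cares about
k, tau = 2, 2                          # solution size, fairness threshold

best, best_util = None, -1
for sol in combinations(sets, k):
    u1 = coverage(sets, sol, group1)
    u2 = coverage(sets, sol, group2)
    utility = (u1 + u2) / 2   # average utility across groups
    fairness = min(u1, u2)    # utility of the least well-off group
    if fairness >= tau and utility > best_util:
        best, best_util = sol, utility
print(best, best_util)  # -> ('a', 'b') 2.5
```

Raising `tau` shrinks the feasible set and can force the average utility down, which is exactly the utility/fairness tension the paper studies.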
Tight Bounds for Adversarially Robust Streams and Sliding Windows via Difference Estimators
In the adversarially robust streaming model, a stream of elements is
presented to an algorithm and is allowed to depend on the output of the
algorithm at earlier times during the stream. In the classic insertion-only
model of data streams, Ben-Eliezer et al. (PODS 2020, best paper award) show
how to convert a non-robust algorithm into a robust one with roughly a
$\lambda$ factor overhead, where $\lambda$ is the flip number of the stream.
This was subsequently improved to roughly a $\sqrt{\lambda}$ factor overhead
by Hassidim et al. (NeurIPS 2020, oral presentation), suppressing logarithmic
factors. For general functions the latter is known to be best possible, by a
result of Kaplan et al. (CRYPTO
2021). We show how to bypass this impossibility result by developing data
stream algorithms for a large class of streaming problems, with no overhead in
the approximation factor. Our class of streaming problems includes the most
well-studied problems such as the $L_p$-heavy hitters problem, $F_p$-moment
estimation, as well as empirical entropy estimation. We substantially improve
upon all prior work on these problems, giving the first optimal dependence on
the approximation factor.
As in previous work, we obtain a general transformation that applies to any
non-robust streaming algorithm and depends on the so-called flip number.
However, the key technical innovation is that we apply the transformation to
what we call a difference estimator for the streaming problem, rather than an
estimator for the streaming problem itself. We then develop the first
difference estimators for a wide range of problems. Our difference estimator
methodology is not only applicable to the adversarially robust model, but to
other streaming models where temporal properties of the data play a central
role. (Abstract shortened to meet arXiv limit.)
Comment: FOCS 202
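The $\lambda$-factor conversion the abstract attributes to Ben-Eliezer et al. is the sketch-switching idea: run independent copies of a non-robust estimator, freeze the published answer, and move on only when the current copy's estimate drifts by a $(1+\epsilon)$ factor; the number of copies needed is governed by the flip number. A toy rendition, with a deterministically biased counter standing in for a real sketch (all names and details here are illustrative, not the paper's construction):

```python
import math

class NoisyCounter:
    """Stand-in for a non-robust streaming sketch: an exact counter with
    a fixed multiplicative perturbation of up to +-5% (illustrative)."""
    def __init__(self, seed):
        self.n = 0
        self.bias = 1.0 + ((seed * 2654435761 % 100) - 50) / 1000.0
    def update(self):
        self.n += 1
    def estimate(self):
        return self.n * self.bias

def robustify(stream_len, eps):
    """Sketch switching for an insertion-only counter: publish a frozen
    estimate, and switch to a fresh copy whenever the current copy's
    estimate exceeds the published one by a (1 + eps) factor."""
    # Flip number for a monotone count: how often the answer can grow
    # by a (1 + eps) factor over the stream.
    flip_number = math.ceil(math.log(stream_len, 1 + eps)) + 1
    copies = [NoisyCounter(seed) for seed in range(flip_number)]
    cur, published, outputs = 0, copies[0].estimate(), []
    for _ in range(stream_len):
        for c in copies:  # every copy processes every stream element
            c.update()
        est = copies[cur].estimate()
        if est > (1 + eps) * max(published, 1e-9):
            published = est                     # answer flipped: refresh
            cur = min(cur + 1, flip_number - 1)  # move to a fresh copy
        outputs.append(published)
    return outputs

outs = robustify(1000, eps=0.1)
print(outs[-1])  # within a (1 + eps) factor of 1000, up to the +-5% bias
```

Each copy's internal randomness (here, its bias) is revealed only after the algorithm abandons it, which is what blocks an adaptive adversary from learning and exploiting the current sketch; the paper's difference estimators replace the full independent copies to remove the overhead entirely.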