Set Covering with Our Eyes Wide Shut
In the stochastic set cover problem (Grandoni et al., FOCS '08), we are given
a collection of m sets over a universe U of size N, and a distribution D over
elements of U. The algorithm draws n elements one-by-one from D and must buy a
set to cover each element on arrival; the goal is to minimize the total cost of
sets bought during this process. A universal algorithm a priori maps each
element u to a set S(u) containing u, such that if X is formed by drawing n
times from distribution D, then the algorithm commits to outputting S(X).
Grandoni et al. gave an O(log mn)-competitive universal algorithm for this
stochastic set cover problem.
We improve unilaterally upon this result by giving a simple, polynomial-time
O(log mn)-competitive universal algorithm for the more general prophet version,
in which X is formed by drawing from n different distributions
D_1, ..., D_n. Furthermore, we show that we do not need full foreknowledge
of the distributions: in fact, a single sample from each distribution suffices.
We show similar results for the 2-stage prophet setting and for the
online-with-a-sample setting.
We obtain our results via a generic reduction from the single-sample prophet
setting to the random-order setting; this reduction holds for a broad class of
minimization problems that includes all covering problems. We take advantage of
this framework by giving random-order algorithms for non-metric facility
location and set multicover; using our framework, these automatically translate
to universal prophet algorithms.
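To make the universal-algorithm interface from the abstract above concrete, here is a minimal sketch: a mapping from each element to one set that covers it is fixed offline, and the online phase simply buys the pre-committed set on each arrival. The element-to-set rule used here (cheapest covering set) is a deliberately naive placeholder for illustration, not the competitive mapping from the paper, and all names and data are made up.

```python
def universal_mapping(sets, costs):
    """Fix, a priori, one covering set per element.
    sets: set name -> frozenset of elements; costs: set name -> cost.
    Naive rule for illustration: map each element to its cheapest cover."""
    mapping = {}
    universe = set().union(*sets.values())
    for e in universe:
        covering = [s for s, members in sets.items() if e in members]
        mapping[e] = min(covering, key=lambda s: costs[s])
    return mapping

def run_online(arrivals, mapping, costs):
    """Buy the pre-committed set for each arriving element; pay each set once."""
    bought, total = set(), 0
    for e in arrivals:
        s = mapping[e]
        if s not in bought:
            bought.add(s)
            total += costs[s]
    return bought, total

sets = {"A": frozenset({1, 2}), "B": frozenset({2, 3}), "C": frozenset({3})}
costs = {"A": 3, "B": 2, "C": 1}
m = universal_mapping(sets, costs)
print(run_online([2, 3, 2], m, costs))  # buys B (cost 2) and C (cost 1): total 3
```

Note that the commitment is the whole point: the sets bought depend only on which elements arrive, not on the order or the algorithm's online state, which is what lets such mappings be analyzed against adversarial or stochastic arrival processes.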
Can Buyers Reveal for a Better Deal?
We study small-scale market interactions in which buyers are allowed to
credibly reveal partial information about their types to the seller. Recent
work has studied the special case where there is one buyer and one good,
showing that such communication can simultaneously improve social welfare and
ex ante buyer utility. With multiple buyers, we find that the buyer-optimal
signalling schemes from the one-buyer case are actually harmful to buyer
welfare. Moreover, we prove several impossibility results showing that, with
either multiple i.i.d. buyers or multiple i.i.d. goods, maximizing buyer
utility can be at odds with social efficiency, which is a surprising contrast
to the one-buyer, one-good case. Finally, we investigate the computational
tractability of implementing desirable equilibrium outcomes. We find that, even
with one buyer and one good, optimizing buyer utility is generally NP-hard, but
tractable in a practical restricted setting.
The Distortion of Binomial Voting Defies Expectation
In computational social choice, the distortion of a voting rule quantifies
the degree to which the rule overcomes limited preference information to select
a socially desirable outcome. This concept has been investigated extensively,
but only through a worst-case lens. Instead, we study the expected distortion
of voting rules with respect to an underlying distribution over voter
utilities. Our main contribution is the design and analysis of a novel and
intuitive rule, binomial voting, which provides strong expected distortion
guarantees for all distributions.
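As a concrete illustration of the distortion notion (computed here on a single, made-up utility profile rather than in expectation, and with plurality standing in for a generic ranking-based rule): a rule that sees only rankings can select a candidate whose social welfare falls short of the optimum, and the ratio between the two welfares is the distortion incurred.

```python
from collections import Counter

def plurality(rankings):
    """Winner = candidate ranked first most often (ties broken lexicographically)."""
    counts = Counter(r[0] for r in rankings)
    return max(sorted(counts), key=lambda c: counts[c])

def welfare(utilities, c):
    return sum(u[c] for u in utilities)

# Three voters, two candidates; the utilities are hypothetical.
utilities = [{"a": 0.51, "b": 0.49},
             {"a": 0.51, "b": 0.49},
             {"a": 0.0, "b": 1.0}]
rankings = [sorted(u, key=u.get, reverse=True) for u in utilities]

winner = plurality(rankings)                            # "a" (two first-place votes)
best = max("ab", key=lambda c: welfare(utilities, c))   # "b" (welfare 1.98 vs 1.02)
ratio = welfare(utilities, best) / welfare(utilities, winner)
print(winner, best, round(ratio, 2))  # a b 1.94
```

The rankings alone cannot distinguish a voter who barely prefers a candidate from one who strongly prefers them, which is exactly the information gap that distortion quantifies; the expected-distortion perspective in the abstract averages this gap over a distribution of such profiles.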
Representation with Incomplete Votes
Platforms for online civic participation rely heavily on methods for
condensing thousands of comments into a relevant handful, based on whether
participants agree or disagree with them. These methods should guarantee fair
representation of the participants, as their outcomes may affect the health of
the conversation and inform impactful downstream decisions. To that end, we
draw on the literature on approval-based committee elections. Our setting is
novel in that the approval votes are incomplete since participants will
typically not vote on all comments. We prove that this complication renders
non-adaptive algorithms impractical in terms of the amount of information they
must gather. Therefore, we develop an adaptive algorithm that uses information
more efficiently by presenting incoming participants with statements that
appear promising based on votes by previous participants. We prove that this
method satisfies commonly used notions of fair representation, even when
participants only vote on a small fraction of comments. Finally, an empirical
evaluation using real data shows that the proposed algorithm provides
representative outcomes in practice.
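For intuition about the committee-selection setting with incomplete ballots, here is a minimal greedy baseline: repeatedly add the comment that covers the most still-unrepresented participants. This is a generic cover-style heuristic for illustration only (names and data are made up), not the adaptive algorithm developed in the paper, which chooses what to show each incoming participant.

```python
def greedy_committee(approvals, comments, k):
    """approvals: participant -> set of approved comment ids; a participant
    who never saw a comment simply cannot approve it (incomplete votes)."""
    slate, uncovered = [], set(approvals)
    for _ in range(k):
        candidates = [c for c in comments if c not in slate]
        # Pick the comment approved by the most not-yet-represented participants.
        best = max(candidates,
                   key=lambda c: sum(1 for p in uncovered if c in approvals[p]))
        slate.append(best)
        uncovered -= {p for p in uncovered if best in approvals[p]}
    return slate

approvals = {"p1": {1, 2}, "p2": {2}, "p3": {3}, "p4": set()}
print(greedy_committee(approvals, [1, 2, 3], k=2))  # [2, 3]
```

The sparsity visible even in this toy example (participant p4 approved nothing they saw) is the core difficulty the abstract identifies: with non-adaptive querying, covering everyone fairly requires gathering far more votes than an adaptive scheme needs.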