11 research outputs found

    Set Covering with Our Eyes Wide Shut

    In the stochastic set cover problem (Grandoni et al., FOCS '08), we are given a collection 𝒮 of m sets over a universe 𝒰 of size N, and a distribution D over elements of 𝒰. The algorithm draws n elements one-by-one from D and must buy a set to cover each element on arrival; the goal is to minimize the total cost of the sets bought during this process. A universal algorithm a priori maps each element u ∈ 𝒰 to a set S(u), so that if U ⊆ 𝒰 is formed by drawing n times from distribution D, then the algorithm commits to outputting S(U). Grandoni et al. gave an O(log mN)-competitive universal algorithm for this stochastic set cover problem. We improve unilaterally upon this result by giving a simple, polynomial-time O(log mn)-competitive universal algorithm for the more general prophet version, in which U is formed by drawing from n different distributions D₁, …, Dₙ. Furthermore, we show that we do not need full foreknowledge of the distributions: in fact, a single sample from each distribution suffices. We show similar results for the 2-stage prophet setting and for the online-with-a-sample setting. We obtain our results via a generic reduction from the single-sample prophet setting to the random-order setting; this reduction holds for a broad class of minimization problems that includes all covering problems. We take advantage of this framework by giving random-order algorithms for non-metric facility location and set multicover; using our framework, these automatically translate to universal prophet algorithms.
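The universal-algorithm template described in this abstract, stripped of the probabilistic analysis, can be sketched in two phases: an offline pass that fixes a map from each element to a set, and an online pass that simply buys the mapped set on each arrival. The greedy set-cover rule used to build the map below is an illustrative assumption, not the paper's construction.

```python
def greedy_universal_mapping(sets, costs):
    """Offline phase: classic greedy weighted set cover over the whole
    universe; the set that first covers each element induces the universal
    map u -> S(u). Illustrative sketch only, not the paper's algorithm.
    sets: dict set_id -> frozenset of elements; costs: dict set_id -> float."""
    uncovered = set().union(*sets.values())
    assignment = {}
    while uncovered:
        # pick the set with the best cost per newly covered element
        best = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: costs[s] / len(sets[s] & uncovered))
        for u in sets[best] & uncovered:
            assignment[u] = best
        uncovered -= sets[best]
    return assignment

def run_universal(assignment, costs, arrivals):
    """Online phase: on each arriving element u, buy assignment[u] if not
    already bought; return the total cost paid."""
    bought, total = set(), 0.0
    for u in arrivals:
        s = assignment[u]
        if s not in bought:
            bought.add(s)
            total += costs[s]
    return total
```

The point of universality is visible in `run_universal`: the cost paid depends only on which elements arrive, not on their order or multiplicity.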

    Can Buyers Reveal for a Better Deal?

    We study small-scale market interactions in which buyers are allowed to credibly reveal partial information about their types to the seller. Recent work has studied the special case where there is one buyer and one good, showing that such communication can simultaneously improve social welfare and ex ante buyer utility. With multiple buyers, we find that the buyer-optimal signalling schemes from the one-buyer case are actually harmful to buyer welfare. Moreover, we prove several impossibility results showing that, with either multiple i.i.d. buyers or multiple i.i.d. goods, maximizing buyer utility can be at odds with social efficiency, which is a surprising contrast to the one-buyer, one-good case. Finally, we investigate the computational tractability of implementing desirable equilibrium outcomes. We find that, even with one buyer and one good, optimizing buyer utility is generally NP-hard, but tractable in a practical restricted setting.

    The phantom steering effect in Q&A websites


    The Distortion of Binomial Voting Defies Expectation

    In computational social choice, the distortion of a voting rule quantifies the degree to which the rule overcomes limited preference information to select a socially desirable outcome. This concept has been investigated extensively, but only through a worst-case lens. Instead, we study the expected distortion of voting rules with respect to an underlying distribution over voter utilities. Our main contribution is the design and analysis of a novel and intuitive rule, binomial voting, which provides strong expected distortion guarantees for all distributions.
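The expected-distortion notion can be made concrete with a small Monte Carlo estimate: sample utility profiles from a distribution, give a voting rule only the induced rankings, and average the ratio of the optimal social welfare to the welfare of the rule's winner. The uniform utility distribution and the plurality rule below are illustrative assumptions; binomial voting itself is not reproduced here, and the paper's guarantees are analytic rather than sampled.

```python
import random

def expected_distortion(rule, n_voters, n_cands, trials=200, seed=0):
    """Monte Carlo estimate of a rule's expected distortion under i.i.d.
    uniform [0, 1) utilities (an assumption for illustration)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        utils = [[rng.random() for _ in range(n_cands)] for _ in range(n_voters)]
        # the rule sees only ordinal rankings, never the utilities themselves
        rankings = [sorted(range(n_cands), key=lambda c: -u[c]) for u in utils]
        winner = rule(rankings, n_cands)
        sw = [sum(u[c] for u in utils) for c in range(n_cands)]
        total += max(sw) / sw[winner]  # >= 1 by definition of distortion
    return total / trials

def plurality(rankings, n_cands):
    """Baseline rule: elect the candidate with the most first-place votes."""
    counts = [0] * n_cands
    for r in rankings:
        counts[r[0]] += 1
    return max(range(n_cands), key=lambda c: counts[c])
```

By construction every sampled ratio is at least 1, so any estimate below roughly 1 signals a bug; rules with better expected distortion drive the average closer to 1.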

    Representation with Incomplete Votes

    Platforms for online civic participation rely heavily on methods for condensing thousands of comments into a relevant handful, based on whether participants agree or disagree with them. These methods should guarantee fair representation of the participants, as their outcomes may affect the health of the conversation and inform impactful downstream decisions. To that end, we draw on the literature on approval-based committee elections. Our setting is novel in that the approval votes are incomplete since participants will typically not vote on all comments. We prove that this complication renders non-adaptive algorithms impractical in terms of the amount of information they must gather. Therefore, we develop an adaptive algorithm that uses information more efficiently by presenting incoming participants with statements that appear promising based on votes by previous participants. We prove that this method satisfies commonly used notions of fair representation, even when participants only vote on a small fraction of comments. Finally, an empirical evaluation using real data shows that the proposed algorithm provides representative outcomes in practice.
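The adaptive idea in this abstract, showing incoming participants the statements that look promising given earlier votes, can be sketched with a small scoring helper. The function name, the approval-ratio score, and the tie-breaking toward less-exposed comments are all hypothetical choices for illustration, not the paper's algorithm.

```python
def pick_promising(vote_counts, k):
    """Rank comments by approval ratio among participants who have seen them,
    showing never-seen comments first and breaking ties toward less-exposed
    ones. vote_counts: dict comment_id -> (approvals, views).
    Hypothetical sketch of the adaptive-presentation idea."""
    def score(cid):
        approvals, views = vote_counts[cid]
        if views == 0:
            return (float("inf"), 0)  # unseen comments get priority
        return (approvals / views, -views)
    return sorted(vote_counts, key=score, reverse=True)[:k]
```

Because votes are incomplete, each comment is scored only over the participants who actually saw it, which is the key difference from a standard approval-based committee election.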