
    Information Gathering with Peers: Submodular Optimization with Peer-Prediction Constraints

    We study a problem of optimal information gathering from multiple data providers that need to be incentivized to provide accurate information. This problem arises in many real-world applications that rely on crowdsourced data sets, but where the process of obtaining data is costly. A notable example of such a scenario is crowd sensing. To this end, we formulate the problem of optimal information gathering as maximization of a submodular function under a budget constraint, where the budget represents the total expected payment to data providers. Contrary to existing approaches, we base our payments on incentives for accuracy and truthfulness, in particular, peer-prediction methods that score each of the selected data providers against its best peer, while ensuring that the minimum expected payment is above a given threshold. We first show that the problem at hand is hard to approximate within a constant factor that does not depend on the properties of the payment function. However, for given topological and analytical properties of the instance, we construct two greedy algorithms, respectively called PPCGreedy and PPCGreedyIter, and establish theoretical bounds on their performance w.r.t. the optimal solution. Finally, we evaluate our methods using a realistic crowd sensing testbed.
    Comment: Longer version of AAAI'18 paper.
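
    A minimal sketch of cost-sensitive greedy selection under a budget constraint, the general pattern behind greedy algorithms such as PPCGreedy; the callables `utility` and `expected_payment` are placeholders for the paper's submodular objective and peer-prediction payment rule, not its actual notation.

```python
def greedy_budgeted_selection(providers, utility, expected_payment, budget):
    """Greedily add providers by marginal utility per unit of expected payment.

    `utility(S)` is assumed to be a monotone submodular set function and
    `expected_payment(p, S)` the expected (peer-prediction based) payment to
    provider `p` if added to the current selection `S`; both are stand-ins.
    """
    selected, spent = [], 0.0
    remaining = set(providers)
    while remaining:
        best, best_ratio, best_cost = None, 0.0, 0.0
        for p in remaining:
            gain = utility(selected + [p]) - utility(selected)
            cost = expected_payment(p, selected)
            if cost <= 0 or spent + cost > budget:
                continue  # unaffordable under the remaining budget
            ratio = gain / cost
            if ratio > best_ratio:
                best, best_ratio, best_cost = p, ratio, cost
        if best is None:
            break  # nothing affordable improves the objective
        selected.append(best)
        spent += best_cost
        remaining.remove(best)
    return selected
```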

    Partial Truthfulness in Minimal Peer Prediction Mechanisms with Limited Knowledge

    We study minimal single-task peer prediction mechanisms that have limited knowledge about agents' beliefs. Without knowing what agents' beliefs are or eliciting additional information, it is not possible to design a truthful mechanism in a Bayesian-Nash sense. We go beyond truthfulness and explore equilibrium strategy profiles that are only partially truthful. Using results from the multi-armed bandit literature, we give a characterization of how inefficient these equilibria are compared to truthful reporting. We measure the inefficiency of such strategies by counting the number of dishonest reports that any minimal knowledge-bounded mechanism must have. We show that the order of this number is Θ(log n), where n is the number of agents, and we provide a peer prediction mechanism that achieves this bound in expectation.
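
    For context, a minimal single-task peer prediction mechanism uses only the submitted reports themselves. The sketch below shows the textbook output-agreement payment rule as an illustration of that setting; it is not the knowledge-bounded mechanism constructed in the paper.

```python
import random

def output_agreement_payments(reports, reward=1.0):
    """Pay each agent `reward` if their report matches a uniformly random peer.

    `reports` maps agent id -> report. This is the classic minimal mechanism:
    it needs no knowledge of agents' beliefs beyond the reports themselves.
    """
    agents = list(reports)
    assert len(agents) >= 2, "need at least two agents to form peer pairs"
    payments = {}
    for a in agents:
        peer = random.choice([b for b in agents if b != a])
        payments[a] = reward if reports[a] == reports[peer] else 0.0
    return payments
```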

    Optimum Statistical Estimation with Strategic Data Sources

    We propose an optimum mechanism for providing monetary incentives to the data sources of a statistical estimator such as linear regression, so that high-quality data is provided at low cost, in the sense that the sum of payments and estimation error is minimized. The mechanism applies to a broad range of estimators, including linear and polynomial regression, kernel regression, and, under some additional assumptions, ridge regression. It also generalizes to several objectives, including minimizing estimation error subject to budget constraints. Besides our concrete results for regression problems, we contribute a mechanism design framework through which to design and analyze statistical estimators whose examples are supplied by workers who incur a cost for labeling those examples.
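
    A toy illustration of the "payments plus estimation error" objective, simplified to estimating a single mean with an inverse-variance-weighted average; the effort grid, cost functions, and variance functions are assumptions for the sketch, and the strategic/incentive layer of the actual mechanism is omitted.

```python
import itertools

def plan_efforts(cost_fns, var_fns, effort_grid):
    """Brute-force the effort profile minimizing payments + estimation error.

    Worker i exerting effort e incurs cost cost_fns[i](e), which the mechanism
    must at least reimburse, and reports with noise variance var_fns[i](e).
    The estimator is the inverse-variance-weighted average, whose variance is
    1 / sum_i (1 / var_i); we minimize total payments plus that variance.
    """
    n = len(cost_fns)
    best_profile, best_value = None, float("inf")
    for profile in itertools.product(effort_grid, repeat=n):
        payments = sum(cost_fns[i](e) for i, e in enumerate(profile))
        precision = sum(1.0 / var_fns[i](e) for i, e in enumerate(profile))
        value = payments + 1.0 / precision
        if value < best_value:
            best_profile, best_value = profile, value
    return best_profile, best_value
```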

    Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks

    In the setting where participants are asked multiple similar, possibly subjective, multi-choice questions (e.g., Do you like Panda Express? Y/N; Do you like Chick-fil-A? Y/N), a series of peer prediction mechanisms have been designed to incentivize honest reports, and some of them achieve dominant truthfulness: truth-telling is a dominant strategy and strictly dominates every other "non-permutation strategy" under some mild conditions. However, a major issue hinders the practical usage of those mechanisms: they require the participants to perform an infinite number of tasks. When the participants perform a finite number of tasks, these mechanisms only achieve approximate dominant truthfulness. Whether a dominantly truthful multi-task peer prediction mechanism that requires only a finite number of tasks exists has remained an open question, possibly with a negative answer, even with full prior knowledge. This paper answers this open question by proposing a new mechanism, the Determinant-based Mutual Information Mechanism (DMI-Mechanism), that is dominantly truthful when the number of tasks is at least 2C and the number of participants is at least 2, where C is the number of choices for each question (C=2 for binary-choice questions). In addition to incentivizing honest reports, DMI-Mechanism can also be turned into an information evaluation rule that identifies high-quality information without verification when there are at least 3 participants. To the best of our knowledge, DMI-Mechanism is the first dominantly truthful mechanism that works for a finite number of tasks, and indeed for a small constant number of tasks.
    Comment: To appear in SODA 2020.
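
    A rough sketch of the determinant-based pairing idea described in the abstract: a pair of agents' shared tasks are split into two halves, a C x C joint answer-count matrix is built on each half, and the payment is the product of the two determinants. Normalization constants and the exact pairing scheme follow the paper and may differ from this sketch.

```python
import numpy as np

def dmi_payment(reports_i, reports_j, num_choices):
    """Determinant-based mutual information payment for one agent pair.

    `reports_i` and `reports_j` are equal-length lists of answers in
    {0, ..., num_choices - 1} on the tasks both agents completed; at least
    2 * num_choices shared tasks are required, matching the abstract's bound.
    """
    assert len(reports_i) == len(reports_j) >= 2 * num_choices
    half = len(reports_i) // 2

    def count_matrix(lo, hi):
        m = np.zeros((num_choices, num_choices))
        for a, b in zip(reports_i[lo:hi], reports_j[lo:hi]):
            m[a, b] += 1  # joint answer counts on this block of tasks
        return m

    m1 = count_matrix(0, half)
    m2 = count_matrix(half, len(reports_i))
    return np.linalg.det(m1) * np.linalg.det(m2)
```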

    Civic Crowdfunding for Agents with Negative Valuations and Agents with Asymmetric Beliefs

    In the last decade, civic crowdfunding has proved to be effective in generating funds for the provision of public projects. However, the existing literature deals only with citizens with positive valuations and symmetric beliefs towards the project's provision. In this work, we present novel mechanisms that break these two barriers, i.e., mechanisms that incorporate negative valuations and asymmetric beliefs, independently. For negative valuations, we present a methodology for converting existing mechanisms into mechanisms that incorporate agents with negative valuations. In particular, we adapt the existing PPR and PPS mechanisms to obtain the novel PPRN and PPSN mechanisms, which incentivize strategic agents to contribute to the project based on their true preferences. With respect to asymmetric beliefs, we propose a reward scheme, Belief Based Reward (BBR), based on the Robust Bayesian Truth Serum mechanism. With BBR, we propose a general mechanism for civic crowdfunding which incorporates asymmetric agents. We leverage PPR and PPS to present PPRx and PPSx. We prove that in PPRx and PPSx, agents with greater belief towards the project's provision contribute more than agents with lesser belief. Further, we show that the contributions are such that the project is provisioned at equilibrium.
    Comment: Accepted as a full paper at IJCAI 2019.
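
    An illustrative sketch of the baseline provision-point-with-refund-bonus idea (the PPR family referenced in the abstract): if the target is not reached, contributors are refunded and share a bonus budget in proportion to their contributions. This is background for the paper's setting, not the new PPRN/PPRx mechanisms, and the function and argument names are placeholders.

```python
def ppr_settle(contributions, target, bonus_budget):
    """Settle a provision-point mechanism with refund bonuses.

    `contributions` maps agent -> amount pledged. If the target is met, the
    project is provisioned and no refunds are paid; otherwise every contributor
    is refunded and additionally receives a share of `bonus_budget`
    proportional to their contribution.
    """
    total = sum(contributions.values())
    if total >= target:
        return {"provisioned": True, "refunds": {a: 0.0 for a in contributions}}
    refunds = {
        a: x + bonus_budget * (x / total if total > 0 else 0.0)
        for a, x in contributions.items()
    }
    return {"provisioned": False, "refunds": refunds}
```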

    Deep Bayesian Trust: A Dominant and Fair Incentive Mechanism for Crowd

    An important class of game-theoretic incentive mechanisms for eliciting effort from a crowd is that of peer-based mechanisms, in which workers are paid by matching their answers with one another. The other classic mechanism is to have the workers solve some gold standard tasks and pay them according to their accuracy on the gold tasks. This mechanism ensures stronger incentive compatibility than the peer-based mechanisms, but assigning gold tasks to all workers becomes inefficient at large scale. We propose a novel mechanism that assigns gold tasks to only a few workers and exploits transitivity to derive the accuracy of the remaining workers from their peers' accuracy. We show that the resulting mechanism ensures a dominant notion of incentive compatibility and fairness.
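
    A simplified binary-task illustration of the transitivity idea in the abstract, assuming independent errors: if one worker's accuracy is already known (e.g., from gold tasks), a peer's accuracy can be solved for from their agreement rate. This is a sketch of the underlying algebra, not the paper's exact mechanism.

```python
def propagate_accuracy(known_accuracy, agreement_rate):
    """Infer a peer's accuracy from agreement with an already-evaluated worker.

    For binary tasks with independent errors, if worker A is correct with
    probability a and worker B agrees with A on a fraction q of shared tasks,
    then q = a*b + (1 - a)*(1 - b), which solves to
    b = (q - 1 + a) / (2*a - 1). Requires a != 0.5.
    """
    a = known_accuracy
    if abs(2 * a - 1) < 1e-9:
        raise ValueError("cannot propagate through a worker at chance accuracy")
    b = (agreement_rate - 1 + a) / (2 * a - 1)
    return min(max(b, 0.0), 1.0)  # clip to a valid probability
```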