18 research outputs found

    Partial Truthfulness in Minimal Peer Prediction Mechanisms with Limited Knowledge

    We study minimal single-task peer prediction mechanisms that have limited knowledge about agents' beliefs. Without knowing what agents' beliefs are or eliciting additional information, it is not possible to design a truthful mechanism in a Bayesian-Nash sense. We go beyond truthfulness and explore equilibrium strategy profiles that are only partially truthful. Using results from the multi-armed bandit literature, we characterize how inefficient these equilibria are compared to truthful reporting. We measure the inefficiency of such strategies by counting the number of dishonest reports that any minimal knowledge-bounded mechanism must have. We show that this number is Θ(log n), where n is the number of agents, and we provide a peer prediction mechanism that achieves this bound in expectation.
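    As an illustration of the peer prediction setting (a minimal sketch, not the knowledge-bounded mechanism from the paper), an output-agreement rule pays an agent a fixed reward whenever its report matches that of a randomly chosen peer; the reward constant and the binary-report assumption below are purely hypothetical.

        import random

        def output_agreement_payments(reports, reward=1.0, seed=0):
            # Pay each agent `reward` if its report matches a uniformly random
            # peer's report. `reports` holds binary observations (0/1);
            # `reward` is an illustrative constant, not a calibrated incentive.
            rng = random.Random(seed)
            n = len(reports)
            payments = []
            for i in range(n):
                peer = rng.choice([j for j in range(n) if j != i])
                payments.append(reward if reports[i] == reports[peer] else 0.0)
            return payments

        # Example: five agents, one of whom deviates from the common signal.
        print(output_agreement_payments([1, 1, 0, 1, 1]))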

    Buying Private Data without Verification

    We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy, i.e., they not only account for the payments they expect to receive from the mechanism, but also for their privacy costs from any information revealed about them by the mechanism's outcome (the computed statistic as well as the payments) when determining their utilities. How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information? We design a differentially private peer-prediction mechanism that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on bits b_i, but does not need full knowledge of the marginal distribution on the costs c_i, instead requiring only an approximate upper bound. Our mechanism guarantees ε-differential privacy to each agent i against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the n−1 other agents j ≠ i. Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent, the cost of running the survey goes to 0 as the number of agents diverges. Comment: Appears in EC 201
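    For intuition only, the sketch below shows one textbook way to trade accuracy for ε-differential privacy when surveying private bits: each participant randomizes their bit locally (randomized response) and the analyst de-biases the aggregate. This is a generic construction with hypothetical parameter names, not the paper's peer-prediction mechanism or its payment scheme.

        import math
        import random

        def randomized_response(bit, epsilon, rng):
            # Report the true bit with probability e^eps / (e^eps + 1), else flip it.
            p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
            return bit if rng.random() < p_truth else 1 - bit

        def private_mean_estimate(bits, epsilon, seed=0):
            # Unbiased estimate of the population mean from locally randomized bits.
            rng = random.Random(seed)
            noisy = [randomized_response(b, epsilon, rng) for b in bits]
            p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
            raw = sum(noisy) / len(noisy)
            # Invert the randomization: E[raw] = p_truth * m + (1 - p_truth) * (1 - m).
            return (raw - (1.0 - p_truth)) / (2.0 * p_truth - 1.0)

        # Example: 1000 simulated bits with true mean 0.3, estimated at eps = 1.
        bits = [1 if random.random() < 0.3 else 0 for _ in range(1000)]
        print(private_mean_estimate(bits, epsilon=1.0))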

    Eliciting Truthful Measurements from a Community of Sensors

    As the Internet of Things grows to large scale, its components will increasingly be controlled by self-interested agents. For example, sensor networks will evolve to community sensing, where a community of agents combine their data into a single coherent structure. As there is no central quality control, agents need to be incentivized to provide accurate measurements. We propose game-theoretic mechanisms that provide such incentives and show their application on the example of community sensing for monitoring air pollution. These mechanisms can be applied to most sensing scenarios and allow the Internet of Things to grow to a much larger scale than currently exists.
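    As a hedged sketch of the peer-consistency idea behind such incentives (not the specific mechanisms proposed in the paper), a sensor can be paid more the closer its reported measurement is to that of a nearby reference peer, with a quadratic penalty for disagreement; the constants below are illustrative.

        def peer_consistency_payment(report, peer_report, base=1.0, scale=0.1):
            # Pay a base amount minus a quadratic penalty for disagreeing with a
            # peer sensor. `base` and `scale` are illustrative constants; a real
            # mechanism would calibrate them so that truthful measurement
            # maximizes expected payment.
            return base - scale * (report - peer_report) ** 2

        # Example: two nearby sensors reporting PM2.5 concentrations (ug/m^3).
        print(peer_consistency_payment(42.0, 40.5))   # close agreement, near-full payment
        print(peer_consistency_payment(42.0, 70.0))   # large disagreement, heavy penalty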

    Reputation in multi agent systems and the incentives to provide feedback

    The emergence of the Internet has led to a vast increase in the number of interactions between parties that are complete strangers to each other. In general, such transactions are likely to be subject to fraud and cheating. If such systems use computerized rational agents to negotiate and execute transactions, mechanisms that lead to favorable outcomes for all parties instead of giving rise to defective behavior are necessary to make the system work: trust and reputation mechanisms. This paper examines different incentive mechanisms that help trust and reputation mechanisms elicit honest reporting of users' own experiences. Keywords: trust, reputation

    Using Incentives to Obtain Truthful Information

    There are many scenarios where we would like agents to report their observations or expertise in a truthful way. Game-theoretic principles can be used to provide incentives to do so. I survey several approaches to eliciting truthful information, in particular scoring rules, peer prediction methods, and opinion polls, and discuss possible applications.
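    A proper scoring rule is the simplest of these tools; the quadratic (Brier-style) rule below is a standard example, included here as a generic illustration rather than anything specific to this survey.

        def brier_score(forecast_prob, outcome):
            # Quadratic scoring rule for a binary event, higher is better.
            # `forecast_prob` is the reported probability that the event occurs
            # and `outcome` is 1 if it occurred, 0 otherwise. Reporting one's
            # true belief maximizes the expected score, which is what makes the
            # rule proper.
            return 1.0 - (forecast_prob - outcome) ** 2

        # A forecaster who believes the probability is 0.8 maximizes the expected
        # score 0.8 * brier_score(p, 1) + 0.2 * brier_score(p, 0) by reporting p = 0.8.
        print(brier_score(0.8, 1), brier_score(0.8, 0))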

    InterPoll: Crowd-Sourced Internet Polls

    Crowd-sourcing is increasingly being used to provide answers to online polls and surveys. However, existing systems, while taking care of the mechanics of attracting crowd workers, poll building, and payment, provide little to help the survey-maker or pollster obtain statistically significant results free of even the obvious selection biases. This paper proposes InterPoll, a platform for programming crowd-sourced polls. Pollsters express polls as embedded LINQ queries, and the runtime correctly reasons about uncertainty in those polls, polling only as many people as required to meet statistical guarantees. To optimize the cost of polls, InterPoll performs query optimization, as well as bias correction and power analysis. The goal of InterPoll is to provide a system that can be reliably used for research into marketing, social, and political science questions. This paper highlights some of the existing challenges and how InterPoll is designed to address most of them. We summarize some of the work we have already done and give an outline of future work.
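    To make "polling only as many people as required" concrete, the sketch below shows a standard sample-size calculation for estimating a proportion within a given margin of error and confidence level. It is textbook precision analysis, not InterPoll's actual optimizer, and the numbers are hypothetical.

        import math
        from statistics import NormalDist

        def required_sample_size(margin_of_error, confidence=0.95, proportion=0.5):
            # Sample size needed to estimate a proportion to within
            # +/- margin_of_error, using the normal approximation
            # n = z^2 * p * (1 - p) / e^2 with the worst-case proportion 0.5.
            z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
            n = z * z * proportion * (1.0 - proportion) / margin_of_error ** 2
            return math.ceil(n)

        # Example: +/-3% at 95% confidence needs about 1068 respondents.
        print(required_sample_size(0.03))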
