18 research outputs found
Partial Truthfulness in Minimal Peer Prediction Mechanisms with Limited Knowledge
We study minimal single-task peer prediction mechanisms that have limited
knowledge about agents' beliefs. Without knowing what agents' beliefs are or
eliciting additional information, it is not possible to design a truthful
mechanism in a Bayesian-Nash sense. We go beyond truthfulness and explore
equilibrium strategy profiles that are only partially truthful. Using the
results from the multi-armed bandit literature, we give a characterization of
how inefficient these equilibria are compared to truthful reporting. We
measure the inefficiency of such strategies by counting the number of dishonest
reports that any minimal knowledge-bounded mechanism must have. We show that
the order of this number is $\Theta(\log n)$, where $n$ is the number of
agents, and we provide a peer prediction mechanism that achieves this bound in
expectation.
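To unpack the bound (the symbol $D(n)$ below is my own shorthand, not the paper's notation): writing $D(n)$ for the number of dishonest reports that any minimal knowledge-bounded mechanism must admit with $n$ agents, the claim is

```latex
% D(n) is introduced here only for illustration.
\[
  D(n) = \Theta(\log n)
  \;\Longleftrightarrow\;
  \exists\, c_1, c_2 > 0,\ n_0 \ \text{s.t.}\ \forall n \ge n_0:\quad
  c_1 \log n \;\le\; D(n) \;\le\; c_2 \log n ,
\]
```

i.e., an amount of dishonesty growing logarithmically in the population is unavoidable, and the proposed mechanism matches that rate in expectation.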
Buying Private Data without Verification
We consider the problem of designing a survey to aggregate non-verifiable
information from a privacy-sensitive population: an analyst wants to compute
some aggregate statistic from the private bits held by each member of a
population, but cannot verify the correctness of the bits reported by
participants in his survey. Individuals in the population are strategic agents
with a cost for privacy, i.e., they account not only for the payments they
expect to receive from the mechanism, but also for their privacy costs from any
information revealed about them by the mechanism's outcome (the computed
statistic as well as the payments) when determining their utilities. How can the
analyst design payments to obtain an accurate estimate of the population
statistic when individuals strategically decide both whether to participate and
whether to truthfully report their sensitive information?
We design a differentially private peer-prediction mechanism that supports
accurate estimation of the population statistic as a Bayes-Nash equilibrium in
settings where agents have explicit preferences for privacy. The mechanism
requires knowledge of the marginal prior distribution on bits $b_i$, but does
not need full knowledge of the marginal distribution on the costs $c_i$,
instead requiring only an approximate upper bound. Our mechanism guarantees
$\epsilon$-differential privacy to each agent $i$ against any adversary who can
observe the statistical estimate output by the mechanism, as well as the
payments made to the other agents $j \neq i$. Finally, we show that with
slightly more structured assumptions on the privacy cost functions of each
agent, the cost of running the survey goes to $0$ as the number of agents
diverges.
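The paper's actual mechanism couples peer-prediction payments with differential privacy; as a minimal, self-contained illustration of just the privacy half (estimating a population statistic from private bits under epsilon-differential privacy), here is a classic randomized-response sketch. The function names and parameters are mine, not the paper's:

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    This classic local scheme is eps-differentially private per agent."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def estimate_mean(reports: list[int], epsilon: float) -> float:
    """Unbiased estimate of the mean of the true bits, debiasing the noise:
    E[report] = p*mu + (1-p)*(1-mu), so mu = (obs - (1-p)) / (2p - 1)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Example: 10,000 agents, 30% of whom hold bit 1, with epsilon = 1.0.
random.seed(0)
true_bits = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
reports = [randomized_response(b, 1.0) for b in true_bits]
print(round(estimate_mean(reports, 1.0), 3))  # close to 0.3
```

Note how accuracy improves with the number of agents, echoing the large-population regime in which the abstract's survey cost vanishes.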
Eliciting Truthful Measurements from a Community of Sensors
As the Internet of Things grows to large scale, its components will increasingly be controlled by self-interested agents. For example, sensor networks will evolve to community sensing, where a community of agents combine their data into a single coherent structure. As there is no central quality control, agents need to be incentivized to provide accurate measurements. We propose game-theoretic mechanisms that provide such incentives and show their application on the example of community sensing for monitoring air pollution. These mechanisms can be applied to most sensing scenarios and allow the Internet of Things to grow to a much larger scale than currently exists.
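The abstract does not spell out the payment rule, but a minimal sketch of one standard peer-consistency incentive from this literature is output agreement: pay a sensor when its discretized reading matches that of a randomly chosen nearby peer, so that when peers measure accurately, reporting the true measurement maximizes expected payment. The bin width, reward, and function names below are my own illustrative choices:

```python
import random

def discretize(reading: float, bin_width: float = 5.0) -> int:
    """Map a raw pollution reading (e.g., ug/m^3) onto coarse bins,
    so 'agreement' between noisy sensors is well defined."""
    return int(reading // bin_width)

def output_agreement_payment(report: float, peer_report: float,
                             reward: float = 1.0) -> float:
    """Output agreement: a sensor earns `reward` iff its binned report
    matches a randomly chosen peer's binned report."""
    return reward if discretize(report) == discretize(peer_report) else 0.0

# Example: two honest sensors observing the same air-pollution level
# with small independent noise usually land in the same bin.
random.seed(1)
true_level = 42.0
a = true_level + random.gauss(0, 1)
b = true_level + random.gauss(0, 1)
print(output_agreement_payment(a, b))  # 1.0 in most draws
```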
Reputation in multi-agent systems and the incentives to provide feedback
The emergence of the Internet has led to a vast increase in the number of interactions between parties that are complete strangers to each other. In general, such transactions are likely to be subject to fraud and cheating. If such systems use computerized rational agents to negotiate and execute transactions, mechanisms that lead to favorable outcomes for all parties, instead of giving rise to defective behavior, are necessary to make the system work: trust and reputation mechanisms. This paper examines different incentive mechanisms that help such trust and reputation mechanisms elicit honest reports of users' own experiences.
Using Incentives to Obtain Truthful Information
There are many scenarios where we would like agents to report their observations or expertise truthfully. Game-theoretic principles can be used to provide incentives to do so. I survey several approaches to eliciting truthful information, in particular scoring rules, peer prediction methods, and opinion polls, and discuss possible applications.
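As a concrete instance of the scoring-rule approach surveyed here, the quadratic (Brier) rule is strictly proper: an expert maximizes expected score by reporting exactly the probabilities it believes. A minimal sketch (the function names are mine):

```python
def quadratic_score(forecast: list[float], outcome: int) -> float:
    """Quadratic (Brier) proper scoring rule:
    S(p, i) = 2 * p_i - sum_j p_j**2."""
    return 2.0 * forecast[outcome] - sum(p * p for p in forecast)

def expected_score(report: list[float], belief: list[float]) -> float:
    """Expected score over outcomes drawn from the expert's true belief."""
    return sum(b * quadratic_score(report, i) for i, b in enumerate(belief))

# An expert who believes rain has probability 0.7:
belief = [0.7, 0.3]          # [rain, no rain]
honest = belief
hedged = [0.5, 0.5]
print(round(expected_score(honest, belief), 3))  # 0.58
print(round(expected_score(hedged, belief), 3))  # 0.5 (strictly worse)
```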
Peer Prediction without a Common Prior
Reputation mechanisms at online opinion forums, such as Amazon Reviews, elicit ratings from users about their experience with different products. Crowdsourcing applications, such as image tagging on Amazon Mechanical Turk, elicit votes from users as to whether or not a job was duly completed. An important property in both settings is that the feedback received from users (agents) is truthful. The peer prediction method introduced by Miller et al. [2005] is a prominent theoretical mechanism for the truthful elicitation of reports. However, a significant obstacle to its application is that it critically depends on the assumption of a common prior amongst both the agents and the mechanism. In this paper, we develop a peer prediction mechanism for settings where the agents hold subjective and private beliefs about the state of the world and the likelihood of a positive signal given a particular state. Our shadow peer prediction mechanism exploits temporal structure in order to elicit two reports, a belief report and then a signal report, and it provides strict incentives for truthful reporting as long as the effect an agent's signal has on her posterior belief is bounded away from zero. Alternatively, this technical requirement on beliefs can be dispensed with by a modification in which the second report is a belief report rather than a signal report.
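For context on the common-prior baseline this paper relaxes, here is a minimal sketch of the original Miller et al. [2005] payment in a binary model of my own choosing: the mechanism uses the (assumed known) common prior to turn agent i's reported signal into a posterior over a peer's signal, then scores that posterior against the peer's actual report with a strictly proper scoring rule:

```python
import math

# A common-prior binary model, assumed known to the mechanism
# (exactly the assumption the shadow mechanism above removes).
P_STATE = [0.5, 0.5]   # prior over two states of the world
P_POS = [0.2, 0.8]     # P(signal = 1 | state)

def posterior_peer_positive(my_signal: int) -> float:
    """P(peer's signal = 1 | my signal), computed from the common prior."""
    likelihood = [(P_POS[s] if my_signal == 1 else 1.0 - P_POS[s]) * P_STATE[s]
                  for s in range(2)]
    z = sum(likelihood)
    return sum((likelihood[s] / z) * P_POS[s] for s in range(2))

def peer_prediction_payment(my_report: int, peer_signal: int) -> float:
    """Score the posterior induced by my report against the peer's realized
    signal with the (strictly proper) log scoring rule; truthful reporting
    is then a Bayes-Nash equilibrium."""
    q = posterior_peer_positive(my_report)
    return math.log(q) if peer_signal == 1 else math.log(1.0 - q)

# An agent who saw signal 1 expects a higher payment from reporting 1,
# since the log rule is maximized in expectation at the true posterior.
```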
InterPoll: Crowd-Sourced Internet Polls
Crowd-sourcing is increasingly being used to provide answers to online polls and surveys. However, existing systems, while taking care of the mechanics of attracting crowd workers, poll building, and payment, provide little to help the survey-maker or pollster in obtaining statistically significant results devoid of even the obvious selection biases.
This paper proposes InterPoll, a platform for programming crowd-sourced polls. Pollsters express polls as embedded LINQ queries, and the runtime correctly reasons about uncertainty in those polls, polling only as many people as required to meet statistical guarantees. To optimize the cost of polls, InterPoll performs query optimization, as well as bias correction and power analysis. The goal of InterPoll is to provide a system that can be reliably used for research into marketing, social, and political science questions.
This paper highlights some of the existing challenges and describes how InterPoll is designed to address most of them. We summarize the work we have already done and give an outline of future work.
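To illustrate the kind of statistical guarantee the runtime enforces when it polls only as many people as required, here is a minimal sketch, not InterPoll's actual algorithm, of the standard normal-approximation sample-size calculation for estimating a proportion to within a margin of error:

```python
import math

def erfinv(x: float) -> float:
    """Inverse error function (Winitzki's approximation; small error,
    adequate for sample-size planning)."""
    a = 0.147
    ln1mx2 = math.log(1.0 - x * x)
    t = 2.0 / (math.pi * a) + ln1mx2 / 2.0
    return math.copysign(math.sqrt(math.sqrt(t * t - ln1mx2 / a) - t), x)

def required_sample_size(margin: float, confidence: float = 0.95,
                         p: float = 0.5) -> int:
    """Respondents needed to estimate a proportion p to within +/- margin
    at the given confidence level; p = 0.5 is the worst (highest-variance)
    case."""
    z = math.sqrt(2.0) * erfinv(confidence)   # two-sided critical value
    return math.ceil(z * z * p * (1.0 - p) / (margin * margin))

# Example: a +/- 3% margin at 95% confidence needs about 1,067 respondents
# (this approximation prints 1066; the exact z = 1.960 gives 1068).
print(required_sample_size(0.03))
```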