Peer Prediction without a Common Prior
Reputation mechanisms at online opinion forums, such as Amazon Reviews, elicit ratings from users about their experience with different products. Crowdsourcing applications, such as image tagging on Amazon Mechanical Turk, elicit votes from users as to whether or not a job was duly completed. An important property in both settings is that the feedback received from users (agents) is truthful. The peer prediction method introduced by Miller et al. [2005] is a prominent theoretical mechanism for the truthful elicitation of reports. However, a significant obstacle to its application is that it critically depends on the assumption of a common prior amongst both the agents and the mechanism. In this paper, we develop a peer prediction mechanism for settings where the agents hold subjective and private beliefs about the state of the world and the likelihood of a positive signal given a particular state. Our shadow peer prediction mechanism exploits temporal structure in order to elicit two reports, a belief report and then a signal report, and it provides strict incentives for truthful reporting as long as the effect an agent's signal has on her posterior belief is bounded away from zero. Alternatively, this technical requirement on beliefs can be dispensed with by a modification in which the second report is a belief report rather than a signal report.
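The two-report structure described above can be sketched in a few lines. This is a minimal illustration only: it assumes a binary signal and uses a quadratic (Brier) scoring rule, and `delta` is an illustrative shadowing step, not the paper's exact construction.

```python
def quadratic_score(report, outcome):
    # Strictly proper quadratic (Brier) scoring rule for a binary event:
    # reporting one's true probability maximises the expected score.
    return 1.0 - (outcome - report) ** 2

def shadow_payment(belief_report, signal_report, peer_signal, delta=0.1):
    # "Shadow" the reported prior belief in the direction of the reported
    # signal, then score the shadowed posterior against a peer's signal report.
    if signal_report == 1:
        shadowed = min(belief_report + delta, 1.0)
    else:
        shadowed = max(belief_report - delta, 0.0)
    return quadratic_score(shadowed, peer_signal)
```

Because the scoring rule is strictly proper, moving the shadowed report toward the true posterior raises the expected payment, which is what makes misreporting the signal costly under the stated condition that the signal's effect on the posterior is bounded away from zero.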
Partial Truthfulness in Minimal Peer Prediction Mechanisms with Limited Knowledge
We study minimal single-task peer prediction mechanisms that have limited knowledge about agents' beliefs. Without knowing what agents' beliefs are or eliciting additional information, it is not possible to design a truthful mechanism in a Bayesian-Nash sense. We therefore go beyond truthfulness and explore equilibrium strategy profiles that are only partially truthful. Using results from the multi-armed bandit literature, we characterize how inefficient these equilibria are compared to truthful reporting. We measure the inefficiency of such strategies by counting the number of dishonest reports that any minimal knowledge-bounded mechanism must have. We show that the order of this number is , where is the number of agents, and we provide a peer prediction mechanism that achieves this bound in expectation.
Tuning the Diversity of Open-Ended Responses from the Crowd
Crowdsourcing can solve problems that current fully automated systems cannot. Its effectiveness depends on the reliability, accuracy, and speed of the crowd workers that drive it. These objectives are frequently at odds with one another. For instance, how much time should workers be given to discover and propose new solutions versus deliberate over those currently proposed? How do we determine if discovering a new answer is appropriate at all? And how do we manage workers who lack the expertise or attention needed to provide useful input to a given task? We present a mechanism that uses distinct payoffs for three possible worker actions---propose, vote, or abstain---to provide workers with the necessary incentives to guarantee an effective (or even optimal) balance between searching for new answers, assessing those currently available, and, when they have insufficient expertise or insight for the task at hand, abstaining. We provide a novel game-theoretic analysis for this mechanism, test it experimentally on an image-labeling problem, and show that it allows a system to reliably control the balance between discovering new answers and converging to existing ones.
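One way to read the incentive structure above is as an action-contingent payment rule. The payment amounts below are illustrative placeholders, not the values derived in the paper's game-theoretic analysis:

```python
def worker_payoff(action, validated, pay_propose=5.0, pay_vote=1.0, pay_abstain=0.25):
    # Distinct payoffs for the three worker actions: proposing a new answer
    # pays the most (but only if the answer is later validated), voting pays
    # less, and abstaining yields a small guaranteed amount. The gaps between
    # these payments are what tune the search/convergence balance.
    if action == "abstain":
        return pay_abstain
    if action == "propose":
        return pay_propose if validated else 0.0
    if action == "vote":
        return pay_vote if validated else 0.0
    raise ValueError(f"unknown action: {action}")
```

Under this sketch, a worker who doubts their own expertise prefers the small sure payment for abstaining over a risky proposal or vote, which is the screening effect the abstract describes.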
Crowd-sourcing with uncertain quality - an auction approach
This article addresses two important issues in crowd-sourcing: ex ante uncertainty about the quality and cost of different workers, and strategic behaviour. We present a novel multi-dimensional auction that incentivises the workers to make a partial enquiry into the task and to honestly report quality-cost estimates, based on which the crowd-sourcer can choose the worker that offers the best value for money. The mechanism extends second-score auction design to settings where quality is uncertain, and it provides incentives both to collect information and to deliver the desired qualities.
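The second-score selection rule that the abstract builds on can be sketched as follows. Here `value` and the payment rule are simplifying assumptions: the paper's mechanism additionally handles uncertain quality and the incentive to acquire information, which this sketch omits.

```python
def second_score_auction(bids, value):
    # bids: list of (reported_quality, reported_cost) pairs, one per worker.
    # value(q): the crowd-sourcer's monetary value for quality q.
    scores = [value(q) - c for q, c in bids]  # "value for money" per bid
    ranked = sorted(range(len(bids)), key=scores.__getitem__, reverse=True)
    winner, second = ranked[0], ranked[1]
    q_win, c_win = bids[winner]
    # The winner is paid so that their surplus (payment minus cost) equals
    # the gap between the top two scores, analogous to a second-price rule.
    payment = value(q_win) - scores[second]
    return winner, payment
```

For example, with bids `[(10, 3), (8, 2), (5, 4)]` and `value = lambda q: q`, the scores are 7, 6, and 1; worker 0 wins and is paid 4, leaving a surplus of 1, the gap between the top two scores.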
Buying Private Data without Verification
We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy, i.e., they not only account for the payments they expect to receive from the mechanism, but also their privacy costs from any information revealed about them by the mechanism's outcome---the computed statistic as well as the payments---to determine their utilities. How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information?
We design a differentially private peer-prediction mechanism that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on bits , but does not need full knowledge of the marginal distribution on the costs , instead requiring only an approximate upper bound. Our mechanism guarantees -differential privacy to each agent against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the other agents . Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent, the cost of running the survey goes to as the number of agents diverges. Comment: Appears in EC 201
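For the privacy guarantee alone (not the payment scheme), a statistic over bits can be released with the standard Laplace mechanism. This is a generic sketch that assumes truthfully reported bits; it is not the paper's peer-prediction construction, which is designed precisely to make that assumption an equilibrium.

```python
import random

def dp_fraction_estimate(bits, epsilon=1.0, rng=random):
    # Laplace mechanism: the fraction of 1-bits has sensitivity 1/n (one
    # person changing their bit moves it by at most 1/n), so adding
    # Laplace(1/(n * epsilon)) noise yields epsilon-differential privacy.
    n = len(bits)
    true_fraction = sum(bits) / n
    scale = 1.0 / (n * epsilon)
    # A Laplace(0, 1) draw is the difference of two Exponential(1) draws.
    noise = rng.expovariate(1.0) - rng.expovariate(1.0)
    return true_fraction + scale * noise
```

Note the accuracy/privacy trade-off visible in `scale`: a smaller `epsilon` (stronger privacy) or a smaller population `n` means noisier estimates.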
Machine and social intelligent peer-assessment systems for assessing large student populations in massive open online education
The motivation of the European Etoile project is to create high quality free open education in complex systems science, including quality assured certification. Universities and colleges around the world are increasingly using online platforms to offer courses open to the public. Massive Open Online Courses or MOOCs give millions of people access to lectures delivered by prestigious universities. However, although some of these courses provide certification of attendance and completion, most do not provide any academic or professional recognition since this would imply a rigorous and complete evaluation of the student's achievements. Since the number of students enrolled may exceed tens of thousands, it is impractical for a lecturer (or group of lecturers) to evaluate all students using conventional hand marking. Thus in order to be scalable, assessment must be automated. The state-of-the-art in automated assessment includes various methods and computerised tools including multiple choice questions, and intelligent marking techniques (involving complex semantic analysis). However, none of these completely cover the requirements needed for the implementation of an assessment system able to cope with very large populations of students and also able to guarantee the quality of evaluation required for higher education. The goal of this research is to propose, implement and evaluate a computer mediated social interaction system which can be applied to massive online learning communities. This must be a scalable system able to assess fairly and accurately student coursework and examinations. We call this approach "machine and socially intelligent peer assessment". This paper describes our system and illustrates its application.
Our approach combines the concepts of peer assessment and reputation systems to provide an independent computerised system which determines the degree and type of interaction between student peers based on a reputation score which emerges from the marking behaviour of each student and the interaction with other individuals of the community. A simulation experiment will be reported showing how reputation-based social structure can evolve in our peer marking system. A pilot experiment using a population of ninety 16-year-old high school students in Colombia measured the marking accuracy of our system by comparing the statistical differences between the scores resulting from teacher marking (the "gold standard"), peer assessment using average scores, and our intelligent reputation-based peer assessment. This addresses the research question: to what extent does the proposed approach improve peer marking in terms of marking accuracy and fairness? We report the first results of this experiment, summarise the lessons learned, and describe further work.
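The reputation-weighted aggregation idea can be sketched as follows. Both the weighting and the update rule here are illustrative assumptions for exposition, not the Etoile system's actual algorithm:

```python
def aggregate_mark(peer_marks, reputations):
    # Reputation-weighted average of the peer marks given to one piece of
    # work; marks are assumed normalised to [0, 1].
    total_weight = sum(reputations[s] for s in peer_marks)
    return sum(peer_marks[s] * reputations[s] for s in peer_marks) / total_weight

def update_reputations(reputations, peer_marks, consensus, rate=0.1):
    # Nudge each marker's reputation toward their agreement with the
    # consensus mark, so that accurate markers gain influence over time.
    for s, mark in peer_marks.items():
        agreement = 1.0 - abs(mark - consensus)
        reputations[s] = (1.0 - rate) * reputations[s] + rate * agreement
    return reputations
```

Iterating these two steps gives the emergent, marking-behaviour-driven reputation score the abstract describes: each round's consensus feeds the reputation update, and the updated reputations reweight the next round's consensus.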