
    Eliciting and Aggregating Information: An Information Theoretic Approach

    Crowdsourcing---outsourcing tasks to a crowd of workers (e.g. Amazon Mechanical Turk, peer grading for massive open online courses (MOOCs), scholarly peer review, and Yahoo Answers)---is a fast, cheap, and effective method for performing simple tasks even at large scales. Two central problems in this area are: information elicitation: how to design reward systems that incentivize high-quality feedback from agents; and information aggregation: how to aggregate the collected feedback into a high-quality forecast. This thesis shows that the combination of game theory, information theory, and learning theory brings a unified framework to both central problems in crowdsourcing. It builds a natural connection between information elicitation and information aggregation, distills the essence of eliciting and aggregating information to the design of proper information measurements, and applies these measurements to both problems. In the setting where information cannot be verified, this thesis proposes a simple yet powerful information-theoretic framework, the Mutual Information Paradigm (MIP), for information elicitation mechanisms. The framework pays every agent a measure of mutual information between her signal and a peer's signal. The mutual information measurement is required to have the key property that any "data processing" on the two random variables decreases the mutual information between them. We identify such information measures that generalize Shannon mutual information. MIP overcomes the two main challenges in information elicitation without verification: (1) how to incentivize effort and prevent agents from colluding to report random or identical responses; (2) how to motivate agents who believe they are in the minority to report truthfully.
    To elicit expertise without verification, this thesis also defines a natural model for this setting, based on the assumption that more sophisticated agents know the beliefs of less sophisticated agents, and extends MIP to a mechanism design framework for this setting, the Hierarchical Mutual Information Paradigm (HMIP). Aided by the information measures and the frameworks, this thesis (1) designs several novel information elicitation mechanisms (e.g. the disagreement mechanism, the f-mutual information mechanism, the multi-hierarchical mutual information mechanism, the common ground mechanism) in various settings such that honesty and effort are incentivized and expertise is identified; (2) addresses an important unsupervised learning problem, co-training, by reducing it to an information elicitation problem: forecast elicitation without verification.
    PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/145809/1/yuqkong_1.pd
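    To make MIP's payment rule concrete, the sketch below estimates Shannon mutual information between two agents' paired reports from their empirical joint distribution. This is only an illustration of the kind of measure MIP pays out; the function name and the plug-in estimator are assumptions for exposition, not the thesis's exact construction (which also covers generalizations of Shannon mutual information).

```python
import math
from collections import Counter

def empirical_mutual_information(reports_a, reports_b):
    """Plug-in estimate of Shannon mutual information I(A;B), in bits,
    from paired reports of two agents on the same items."""
    n = len(reports_a)
    joint = Counter(zip(reports_a, reports_b))  # joint report counts
    pa = Counter(reports_a)                      # marginal counts, agent A
    pb = Counter(reports_b)                      # marginal counts, agent B
    mi = 0.0
    for (a, b), c in joint.items():
        p_ab = c / n
        # p_ab * log2( p_ab / (p_a * p_b) ), with counts rewritten inline
        mi += p_ab * math.log2(p_ab * n * n / (pa[a] * pb[b]))
    return mi

# Perfectly correlated reports carry maximal information for binary signals:
empirical_mutual_information([0, 1, 0, 1], [0, 1, 0, 1])  # 1.0 bit
# Independent-looking reports carry none:
empirical_mutual_information([0, 0, 1, 1], [0, 1, 0, 1])  # 0.0 bits
```

    Under such a payment, "data processing" an honest report (e.g. replacing it with a noisy or constant function of it) can only lower the measured mutual information, which is what rules out uninformative collusion.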

    Equilibrium Selection in Information Elicitation without Verification via Information Monotonicity

    In this paper, we propose a new mechanism, the Disagreement Mechanism, which elicits privately held, non-verifiable information from self-interested agents in the single-question (peer-prediction) setting. To the best of our knowledge, the Disagreement Mechanism is the first strictly truthful mechanism in the single-question setting that is simultaneously: detail-free: it does not need to know the common prior; focal: truth-telling pays strictly more than any other symmetric equilibrium, excluding some unnatural permutation equilibria; small group: the properties of the mechanism hold even for a small number of agents, even in the binary-signal setting. Our mechanism only asks each agent for her signal as well as a forecast of the other agents' signals. Additionally, we show that the focal result is both tight and robust, and we extend it to the case of asymmetric equilibria when the number of agents is sufficiently large.

    Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks

    In the setting where participants are asked multiple similar, possibly subjective, multiple-choice questions (e.g. Do you like Panda Express? Y/N; do you like Chick-fil-A? Y/N), a series of peer prediction mechanisms have been designed to incentivize honest reports, and some of them achieve dominant truthfulness: truth-telling is a dominant strategy and, under some mild conditions, strictly dominates every other "non-permutation" strategy. However, a major issue hinders the practical use of these mechanisms: they require participants to perform an infinite number of tasks. When participants perform only a finite number of tasks, these mechanisms achieve only approximate dominant truthfulness. Whether there exists a dominantly truthful multi-task peer prediction mechanism that requires only a finite number of tasks has remained an open question, one that might have a negative answer even with full prior knowledge. This paper answers the open question by proposing a new mechanism, the Determinant-based Mutual Information Mechanism (DMI-Mechanism), which is dominantly truthful when the number of tasks is at least 2C and the number of participants is at least 2, where C is the number of choices for each question (C=2 for binary-choice questions). In addition to incentivizing honest reports, DMI-Mechanism can also be turned into an information evaluation rule that identifies high-quality information without verification when there are at least 3 participants. To the best of our knowledge, DMI-Mechanism is the first dominantly truthful mechanism that works for a finite number of tasks, let alone a small constant number of tasks.
    Comment: To appear in SODA2
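    The determinant-based payment can be sketched as follows, under assumptions: split the (at least 2C) shared tasks of a pair of participants into two disjoint halves, form the C x C joint-report count matrix on each half, and pay the product of the two determinants. The function name and the unnormalized counts are illustrative choices, not the paper's exact presentation.

```python
import numpy as np

def dmi_payment(reports_i, reports_j, num_choices):
    """Illustrative determinant-based payment for a pair of participants
    who answered the same tasks (len(reports_i) >= 2 * num_choices)."""
    half = len(reports_i) // 2

    def count_matrix(a, b):
        # C x C matrix of joint report counts on one batch of tasks
        m = np.zeros((num_choices, num_choices))
        for x, y in zip(a, b):
            m[x, y] += 1
        return m

    m1 = count_matrix(reports_i[:half], reports_j[:half])
    m2 = count_matrix(reports_i[half:], reports_j[half:])
    # Using two disjoint halves makes the two determinants independent
    # estimates, which is what yields an unbiased product in expectation.
    return float(np.linalg.det(m1) * np.linalg.det(m2))

# Two agents reporting identical binary signals (C=2) on 4 tasks:
dmi_payment([0, 1, 0, 1], [0, 1, 0, 1], 2)  # 1.0
```

    Intuitively, any garbling of honest reports multiplies each count matrix by a stochastic "strategy" matrix whose determinant has magnitude at most 1, so non-truthful (non-permutation) strategies can only shrink the expected payment.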