
    Dominantly Truthful Multi-task Peer Prediction with a Constant Number of Tasks

    In the setting where participants are asked multiple similar, possibly subjective, multiple-choice questions (e.g., Do you like Panda Express? Y/N; Do you like Chick-fil-A? Y/N), a series of peer prediction mechanisms have been designed to incentivize honest reports, and some of them achieve dominant truthfulness: truth-telling is a dominant strategy and, under some mild conditions, strictly dominates any other "non-permutation" strategy. However, a major issue hinders the practical use of these mechanisms: they require participants to perform an infinite number of tasks. When participants perform only a finite number of tasks, these mechanisms achieve only approximate dominant truthfulness. Whether there exists a dominantly truthful multi-task peer prediction mechanism that requires only a finite number of tasks has remained an open question, one that might well have a negative answer, even with full prior knowledge. This paper answers the open question by proposing a new mechanism, the Determinant-based Mutual Information Mechanism (DMI-Mechanism), which is dominantly truthful when the number of tasks is at least 2C and the number of participants is at least 2, where C is the number of choices for each question (C=2 for binary-choice questions). In addition to incentivizing honest reports, the DMI-Mechanism can also be turned into an information evaluation rule that identifies high-quality information without verification when there are at least 3 participants. To the best of our knowledge, the DMI-Mechanism is the first dominantly truthful mechanism that works for a finite number of tasks, let alone a small constant number of tasks.
    Comment: To appear in SODA2
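    To make the determinant-based idea concrete, below is a minimal Python sketch of a pairwise payment in the spirit of the abstract, assuming the tasks are split into two disjoint batches; the function name, the even split, and the omitted normalization are illustrative choices, not the paper's exact presentation.

```python
import numpy as np

def dmi_payment(reports_a, reports_b, num_choices):
    """Sketch of a DMI-style pairwise payment (illustrative, not the paper's exact rule).

    reports_a, reports_b: two agents' answers (values in 0..num_choices-1)
    on the same tasks; the abstract's guarantee needs at least 2*num_choices tasks.
    """
    reports_a, reports_b = np.asarray(reports_a), np.asarray(reports_b)
    num_tasks = len(reports_a)
    assert num_tasks >= 2 * num_choices, "needs at least 2C tasks"

    def joint_counts(lo, hi):
        # C x C matrix counting how often agent A answered a while agent B answered b
        # on the task batch [lo, hi).
        counts = np.zeros((num_choices, num_choices))
        for a, b in zip(reports_a[lo:hi], reports_b[lo:hi]):
            counts[a, b] += 1
        return counts

    # Determinants computed on two disjoint task batches; their product is
    # meant to estimate (up to normalization) a determinant-based mutual
    # information between the agents' reports, which honest reporting
    # should maximize in expectation.
    half = num_tasks // 2
    return np.linalg.det(joint_counts(0, half)) * np.linalg.det(joint_counts(half, num_tasks))
```

    A full mechanism would aggregate such payments across pairs of participants and normalize the determinants; those details, and the proof that truth-telling is dominant, are in the paper itself.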

    Partial Truthfulness in Minimal Peer Prediction Mechanisms with Limited Knowledge

    We study minimal single-task peer prediction mechanisms that have limited knowledge about agents' beliefs. Without knowing what agents' beliefs are or eliciting additional information, it is not possible to design a mechanism that is truthful in the Bayesian-Nash sense. We therefore go beyond truthfulness and explore equilibrium strategy profiles that are only partially truthful. Using results from the multi-armed bandit literature, we characterize how inefficient these equilibria are compared to truthful reporting, measuring the inefficiency of such strategies by the number of dishonest reports that any minimal knowledge-bounded mechanism must admit. We show that this number is Θ(log n), where n is the number of agents, and we provide a peer prediction mechanism that achieves this bound in expectation.

    Equilibrium Selection in Information Elicitation without Verification via Information Monotonicity

    In this paper, we propose a new mechanism, the Disagreement Mechanism, which elicits privately-held, non-verifiable information from self-interested agents in the single-question (peer-prediction) setting. To the best of our knowledge, the Disagreement Mechanism is the first strictly truthful mechanism in the single-question setting that is simultaneously:
    - Detail-free: it does not need to know the common prior;
    - Focal: truth-telling pays strictly more than any other symmetric equilibrium, excluding some unnatural permutation equilibria;
    - Small group: the properties of the mechanism hold even for a small number of agents, even in the binary-signal setting.
    Our mechanism only asks each agent her signal as well as a forecast of the other agents' signals. Additionally, we show that the focal result is both tight and robust, and we extend it to the case of asymmetric equilibria when the number of agents is sufficiently large.
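    Since the abstract specifies exactly what each agent submits (her own signal plus a forecast of the other agents' signals), the following minimal Python sketch shows that report interface; the class, field, and helper names are illustrative assumptions, and the mechanism's actual payment rule is defined in the paper and not reproduced here.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class Report:
    """One agent's report in a minimal single-question mechanism:
    her own signal plus a forecast of the other agents' signals.
    Names here are illustrative, not taken from the paper."""
    signal: int                 # e.g. 0 or 1 in the binary-signal setting
    forecast: Dict[int, float]  # predicted probability of each signal value
                                # among the other agents; should sum to 1

def validate(report: Report, num_signals: int) -> bool:
    """Basic sanity checks on a report (an assumed helper, not from the paper)."""
    in_range = 0 <= report.signal < num_signals
    proper_forecast = (
        set(report.forecast) == set(range(num_signals))
        and all(p >= 0.0 for p in report.forecast.values())
        and abs(sum(report.forecast.values()) - 1.0) < 1e-9
    )
    return in_range and proper_forecast
```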