3 research outputs found

    Incentives for Answering Hypothetical Questions

    Prediction markets and other reward mechanisms based on proper scoring rules can elicit accurate predictions about future events with public outcomes. However, many questions of public interest do not always have a clear answer. For example, facts such as the effects of raising or lowering interest rates can never be publicly verified, since only one option will be implemented. In this paper we address reporting incentives for opinion polls and questionnaires about hypothetical questions, where the honesty of one answer can only be assessed in the context of the other answers elicited through the poll. We extend our previous work on this problem with four main results. First, we prove that no reward mechanism can be strictly incentive compatible when the mechanism designer does not know the prior information of the participants. Second, we formalize the notion of helpful reporting, which prescribes that rational agents move the public result of the poll towards what they believe to be the true distribution (even when that involves reporting an answer that is not the agent's first preference). Third, we show that under helpful reporting the final result of the poll converges to the true distribution of opinions. Finally, we present a reward scheme that makes helpful reporting an equilibrium for polls with an arbitrary number of answers. Our mechanism is simple and does not require information about the prior beliefs of the agents.
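    The abstract's opening sentence relies on proper scoring rules, which reward a forecast of a publicly verifiable outcome so that reporting one's true belief maximizes expected payment. The sketch below is a minimal, hypothetical illustration of that property; the logarithmic rule, the numbers, and the function names are assumptions for the illustration, not material taken from the paper.

        import numpy as np

        def log_score(report, outcome):
            # Logarithmic proper scoring rule: pay the log-probability the report
            # assigned to the outcome that was actually (publicly) observed.
            return np.log(report[outcome])

        def expected_score(report, belief):
            # Expected payment of a reported distribution under the agent's true belief.
            return sum(belief[o] * log_score(report, o) for o in range(len(belief)))

        # Hypothetical true belief over a binary, publicly verifiable event.
        true_belief = np.array([0.7, 0.3])

        # Search a grid of possible reports; the maximizer matches the true belief
        # (up to grid resolution), which is the incentive property that breaks down
        # when the outcome is never publicly verified.
        candidates = [np.array([p, 1.0 - p]) for p in np.linspace(0.01, 0.99, 99)]
        best = max(candidates, key=lambda r: expected_score(r, true_belief))
        print("best report:", best, "true belief:", true_belief)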

    Robust mechanisms for information elicitation

    We study information elicitation mechanisms in which a principal agent attempts to elicit the private information of other agents using a carefully selected payment scheme based on proper scoring rules. Scoring rules, like many other mechanisms set in a probabilistic environment, assume that all participating agents share some common belief about the underlying probability of events. In real-life situations, however, the underlying distributions are not known precisely, and small differences in the agents' beliefs about these distributions may alter their behavior under the prescribed mechanism. We propose designing elicitation mechanisms in a manner that is robust to small changes in belief. We show how to algorithmically design such mechanisms in polynomial time using tools of stochastic programming and convex programming, and discuss implementation issues for multiagent scenarios.
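    The sensitivity described here can be checked numerically for a concrete mechanism. The sketch below is a rough, hypothetical example: the prior, the conditional model, the peer-prediction-style payment with a logarithmic scoring rule, and all function names are illustrative assumptions, not the paper's construction (which relies on stochastic and convex programming). It computes an agent's best response both when it shares the mechanism's assumed model and when its conditional beliefs are slightly perturbed.

        import numpy as np

        # Mechanism's assumed common-prior model: two world states, two signals,
        # agents' signals conditionally independent given the state.
        prior = np.array([0.5, 0.5])              # P(state)
        cond = np.array([[0.8, 0.2],              # P(signal | state = 0)
                         [0.3, 0.7]])             # P(signal | state = 1)

        def peer_posterior(report, model):
            # Mechanism's posterior over the peer's signal, given the reported signal.
            post_state = model[:, report] * prior
            post_state /= post_state.sum()
            return post_state @ model

        def expected_payment(report, own_signal, own_model):
            # Expected log-score payment for an agent whose own conditional beliefs
            # (own_model) may differ from the model assumed by the mechanism (cond).
            post_state = own_model[:, own_signal] * prior
            post_state /= post_state.sum()
            believed_peer = post_state @ own_model         # agent's belief about the peer
            scored_against = peer_posterior(report, cond)  # mechanism scores vs. its own posterior
            return believed_peer @ np.log(scored_against)

        perturbed = cond + np.array([[0.05, -0.05], [-0.05, 0.05]])
        for label, own_model in [("common prior", cond), ("perturbed belief", perturbed)]:
            best = max([0, 1], key=lambda r: expected_payment(r, own_signal=0, own_model=own_model))
            print(f"{label}: best report for signal 0 ->", best)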