65 research outputs found

    Eliciting Truthful Measurements from a Community of Sensors

    As the Internet of Things grows to a large scale, its components will increasingly be controlled by self-interested agents. For example, sensor networks will evolve into community sensing, where a community of agents combines their data into a single coherent structure. As there is no central quality control, agents need to be incentivized to provide accurate measurements. We propose game-theoretic mechanisms that provide such incentives and show their application on the example of community sensing for monitoring air pollution. These mechanisms can be applied to most sensing scenarios and allow the Internet of Things to grow to a much larger scale than it has today.

    Mechanisms for making crowds truthful

    We consider schemes for obtaining truthful reports on a common but hidden signal from large groups of rational, self-interested agents. One example is online feedback mechanisms, where users provide observations about the quality of a product or service so that other users can have an accurate idea of what quality they can expect. However, (i) providing such feedback is costly, and (ii) there are many motivations for providing incorrect feedback. Both problems can be addressed by reward schemes which (i) cover the cost of obtaining and reporting feedback, and (ii) maximize the expected reward of a rational agent who reports truthfully. We address the design of such incentive-compatible rewards for feedback generated in environments with pure adverse selection. Here, the correlation between the true knowledge of an agent and her beliefs regarding the likelihoods of reports of other agents can be exploited to make honest reporting a Nash equilibrium. In this paper we extend existing methods for designing incentive-compatible rewards by also considering collusion. We analyze different scenarios where, for example, some or all of the agents collude. For each scenario we investigate whether a collusion-resistant, incentive-compatible reward scheme exists, and use automated mechanism design to specify an algorithm for deriving an efficient reward mechanism.

    Incentives for Answering Hypothetical Questions

    Prediction markets and other reward mechanisms based on proper scoring rules can elicit accurate predictions about future events with public outcomes. However, many questions of public interest do not always have a clear answer. For example, facts such as the effects of raising or lowering interest rates can never be publicly verified, since only one option will be implemented. In this paper we address reporting incentives for opinion polls and questionnaires about hypothetical questions, where the honesty of one answer can only be assessed in the context of the other answers elicited through the poll. We extend our previous work on this problem by four main results. First, we prove that no reward mechanism can be strictly incentive compatible when the mechanism designer does not know the prior information of the participants. Second, we formalize the notion of helpful reporting, which prescribes that rational agents move the public result of the poll towards what they believe to be the true distribution (even when that involves reporting an answer that is not the agent's first preference). Third, we show that helpful reporting makes the final result of the poll converge to the true distribution of opinions. Finally, we present a reward scheme that makes helpful reporting an equilibrium for polls with an arbitrary number of answers. Our mechanism is simple, and does not require information about the prior beliefs of the agents.
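    The proper scoring rules mentioned above work only because a truthful probability report maximizes the agent's expected reward when the outcome is publicly verifiable. A minimal sketch of one such rule (the quadratic/Brier score for a binary event; the function names are illustrative, not from the paper):

```python
def quadratic_score(p, outcome):
    """Quadratic (Brier) proper scoring rule for a binary event:
    reward 1 - (outcome - p)^2 for a reported probability p in [0, 1]."""
    return 1 - (outcome - p) ** 2

def expected_score(p, q):
    """Expected reward for reporting p when the agent's true belief is q."""
    return q * quadratic_score(p, 1) + (1 - q) * quadratic_score(p, 0)

# Truth-telling dominates: reporting the true belief q = 0.7 beats
# any distorted report, e.g. p = 0.5.
assert expected_score(0.7, 0.7) > expected_score(0.5, 0.7)
```

    For hypothetical questions no `outcome` ever becomes available, which is exactly why the paper must replace verification with comparisons against other poll answers.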

    Eliciting Truthful Information with the Peer Truth Serum

    We consider settings where a collective intelligence is formed by aggregating information contributed from many independent agents, such as product reviews, community sensing or opinion polls. To encourage participation and avoid selection bias, agents should be rewarded for the information they provide. It is important that the rewards provide incentives for relevant and truthful information and discourage random or malicious reports. Incentive schemes can be based on the fact that an agent's private information influences its beliefs about what other agents will report, and compute rewards by comparing an agent's report with that of peer agents. Existing schemes require not only that all agents have the same prior belief, but also that they update these beliefs in an identical way. This assumption is unrealistic, as agents may have very different perceptions of the accuracy of their own information. We have investigated a novel method, which we call the Peer Truth Serum (PTS), that works even when agents update their beliefs differently. It requires that the belief update from prior to posterior satisfies a self-predicting condition. It rewards agents with a payment of c/R(s) if their report s matches that of a randomly chosen reference agent, and nothing otherwise. Here R is the current distribution of reports, maintained and published by the center collecting the information. We can show that as long as R is within a certain bound of the agents' priors Pr, the reward scheme is truthful. Furthermore, as long as Pr is more informed than R, i.e. closer to the true distribution of private information, PTS incentivizes helpful reporting that still guarantees that R converges to the true distribution.
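    The PTS payment rule c/R(s) described above can be sketched in a few lines (a minimal illustration; the example distribution R and the scaling constant c are placeholders, not values from the paper):

```python
def pts_reward(report, reference_report, R, c=1.0):
    """Peer Truth Serum payment: c / R(s) if the report s matches a
    randomly chosen reference agent's report, and nothing otherwise.
    R is the published distribution of reports maintained by the center."""
    if report == reference_report:
        return c / R[report]
    return 0.0

# Hypothetical example with two possible observations, "high" and "low".
R = {"high": 0.8, "low": 0.2}

# Matching on the rarer report "low" pays 1/0.2 = 5.0, while matching
# on the common report "high" pays only 1/0.8 = 1.25: agreement on a
# surprising observation is rewarded more.
assert pts_reward("low", "low", R) > pts_reward("high", "high", R)
```

    The inverse weighting by R(s) is what neutralizes the temptation to simply report whatever answer is most common.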

    Aggregating Reputation Feedback

    A fundamental task in reputation systems is to aggregate multiple feedback ratings into a single value that can be used to compare the reputation of different entities. Feedback is most commonly aggregated using the arithmetic mean. However, the mean is quite susceptible to outliers and biases, and thus may not be the most informative aggregate of the reports. We consider three criteria to assess the quality of an aggregator: informativeness, robustness and strategyproofness, and analyze how different aggregators, in particular the mean, median and mode, perform with respect to these criteria. The results show that the arithmetic mean may not always be the best choice.
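    The outlier sensitivity contrasted above is easy to see on a toy example (the ratings are invented for illustration):

```python
from statistics import mean, median, mode

# Hypothetical ratings for one entity: five genuine 4-5 star reviews
# and a single 1-star outlier (e.g. a ballot-stuffing attempt).
ratings = [5, 5, 4, 5, 5, 1]

print(mean(ratings))    # dragged down by the outlier: ~4.17
print(median(ratings))  # robust to the single outlier: 5
print(mode(ratings))    # most frequent rating, also robust: 5
```

    A single adversarial report moves the mean by almost a full star while leaving the median and mode untouched, which is the robustness trade-off the paper analyzes.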

    Incentives for Effort in Crowdsourcing using the Peer Truth Serum

    Crowdsourcing is widely proposed as a method to solve a large variety of judgement tasks, such as classifying website content, peer grading in online courses, or collecting real-world data. As the data reported by workers cannot be verified, there is a tendency to report random data without actually solving the task. This can be countered by making the reward for an answer depend on its consistency with answers given by other workers, an approach called peer consistency. However, it is obvious that the best strategy in such schemes is for all workers to report the same answer without solving the task. Dasgupta and Ghosh (WWW 2013) show that in some cases exerting high effort can be encouraged in the highest-paying equilibrium. In this paper we present a general mechanism that implements this idea and is applicable to most crowdsourcing settings. Furthermore, we experimentally test the novel mechanism and validate its theoretical properties.
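    A sketch of a Dasgupta-Ghosh-style peer-consistency payment helps show why constant reporting stops paying (a simplified binary-task illustration under my own assumptions, not the exact mechanism from either paper):

```python
def dg_reward(worker_answer, peer_answer, worker_other, peer_other):
    """Simplified peer-consistency payment for binary tasks: reward
    agreement with a peer on a shared task, minus the agreement expected
    by chance, estimated from each worker's answers on disjoint other
    tasks. worker_other / peer_other are lists of 0/1 answers."""
    agree = 1.0 if worker_answer == peer_answer else 0.0
    f_w = sum(worker_other) / len(worker_other)  # worker's frequency of "1"
    f_p = sum(peer_other) / len(peer_other)      # peer's frequency of "1"
    chance = f_w * f_p + (1 - f_w) * (1 - f_p)   # agreement by blind guessing
    return agree - chance

# If everyone always reports "1" without solving tasks, the chance
# correction exactly cancels the agreement bonus:
assert dg_reward(1, 1, [1, 1, 1, 1], [1, 1, 1, 1]) == 0.0

# Workers whose answers vary with the tasks keep a positive margin
# when they genuinely agree:
assert dg_reward(1, 1, [1, 0, 1, 0], [1, 0, 0, 1]) == 0.5
```

    Subtracting the chance-agreement term is what removes the uninformative all-same-answer equilibrium that plain peer consistency suffers from.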

    RESERVATION MODEL FOR REAL-TIME BIDDING BASED ADVERTISING SYSTEM

    An advertisement management system can enable an advertiser to switch between real-time bidding and reservation delivery modes. A request is received to transition a campaign from a real-time bidding mode to a reservation mode. A reservation engine determines proposed terms for a reservation contract. An acceptance of the proposed terms is received from the advertiser. The campaign is transitioned from the real-time bidding mode to the reservation mode and a reservation contract is established between the advertisement management system and the advertiser, with campaign parameters made immutable for the duration of the contract.
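    The request-propose-accept-lock flow described above is essentially a small state machine; a hypothetical sketch (all class and field names are my own, and the "reservation engine" is stubbed to echo back its inputs):

```python
from enum import Enum

class Mode(Enum):
    REAL_TIME_BIDDING = "rtb"
    RESERVATION = "reservation"

class Campaign:
    """Hypothetical model of the transition flow: request ->
    proposed terms -> advertiser acceptance -> reservation mode."""

    def __init__(self):
        self.mode = Mode.REAL_TIME_BIDDING
        self.locked = False  # parameters become immutable under contract
        self.contract = None

    def propose_reservation(self, impressions, price):
        # A real reservation engine would compute these terms from
        # campaign history; here they are simply echoed back.
        return {"impressions": impressions, "price": price}

    def accept(self, terms):
        # On acceptance, switch delivery modes and freeze campaign
        # parameters for the duration of the reservation contract.
        self.mode = Mode.RESERVATION
        self.locked = True
        self.contract = terms

campaign = Campaign()
terms = campaign.propose_reservation(impressions=1_000_000, price=5000)
campaign.accept(terms)
assert campaign.mode is Mode.RESERVATION and campaign.locked
```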