493 research outputs found

    Efficient and adaptive incentive selection for crowdsourcing contests

    The success of crowdsourcing projects relies critically on motivating a crowd to contribute. One particularly effective method for incentivising participants to perform tasks is to run contests where participants compete against each other for rewards. However, there are numerous ways to implement such contests in a specific project, which vary in how performance is evaluated, how participants are rewarded, and the sizes of the prizes. Moreover, determining the best way to implement contests in a particular project remains an open challenge, as the effectiveness of each contest implementation (henceforth, incentive) is unknown in advance. Hence, in a crowdsourcing project, a practical approach to maximising the overall utility of the requester (measured, for example, by the total number of completed tasks or the quality of the task submissions) is to choose a set of incentives suggested by previous studies in the literature or by the requester’s experience. An effective mechanism can then be applied to automatically select appropriate incentives from this set over different time intervals so as to maximise the cumulative utility within a given financial budget and time limit. To this end, we present a novel approach to this incentive selection problem. Specifically, we formalise it as an online decision-making problem, where each action corresponds to offering a specific incentive. We then detail and evaluate a novel algorithm, HAIS, to solve the incentive selection problem efficiently and adaptively. In theory, in the case that all the estimates in HAIS (except the estimates of the effectiveness of each incentive) are correct, we show that the algorithm achieves the regret bound o
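
    The abstract frames incentive selection as an online decision-making problem under a budget and a time limit. As a rough illustration of that framing (not of HAIS itself, whose estimates and update rules are not described here), the following Python sketch runs a generic UCB-style bandit over a fixed set of incentives; the per-incentive costs, the utility-per-cost selection rule and the simulated feedback are assumptions made purely for the example.

        # Illustrative sketch only: a generic UCB-style selector over a fixed set
        # of incentives, NOT the HAIS algorithm. Costs, the utility-per-cost
        # criterion and the simulated feedback are hypothetical assumptions.
        import math
        import random

        class IncentiveSelector:
            def __init__(self, costs, budget, horizon):
                self.costs = costs                  # cost of offering each incentive once
                self.budget = budget                # remaining financial budget
                self.horizon = horizon              # number of time intervals
                self.counts = [0] * len(costs)
                self.mean_utility = [0.0] * len(costs)

            def select(self, t):
                # Offer every affordable incentive once, then pick by an upper
                # confidence bound on utility per unit cost (a common heuristic).
                for i, n in enumerate(self.counts):
                    if n == 0 and self.costs[i] <= self.budget:
                        return i
                best, best_score = None, float("-inf")
                for i, n in enumerate(self.counts):
                    if n == 0 or self.costs[i] > self.budget:
                        continue
                    bonus = math.sqrt(2 * math.log(t + 1) / n)
                    score = (self.mean_utility[i] + bonus) / self.costs[i]
                    if score > best_score:
                        best, best_score = i, score
                return best

            def update(self, i, utility):
                self.budget -= self.costs[i]
                self.counts[i] += 1
                self.mean_utility[i] += (utility - self.mean_utility[i]) / self.counts[i]

        # Usage: three incentives with different per-interval costs.
        selector = IncentiveSelector(costs=[10, 20, 35], budget=300, horizon=20)
        for t in range(selector.horizon):
            arm = selector.select(t)
            if arm is None:
                break                               # budget exhausted
            utility = random.gauss([5, 12, 15][arm], 2.0)  # simulated crowd response
            selector.update(arm, utility)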

    What prize is right? How to learn the optimal structure for crowdsourcing contests

    In crowdsourcing, one effective method for encouraging participants to perform tasks is to run contests where participants compete against each other for rewards. However, there are numerous ways to implement such contests in specific projects. They could vary in their structure (e.g., performance evaluation and the number of prizes) and parameters (e.g., the maximum number of participants and the amount of prize money). Additionally, with a given budget and a time limit, choosing incentives (i.e., contest structures with specific parameter values) that maximise the overall utility is not trivial, as their respective effectiveness in a specific project is usually unknown a priori. Thus, in this paper, we propose a novel algorithm, BOIS (Bayesian-optimisation-based incentive selection), to learn the optimal structure and tune its parameters effectively. In detail, the learning and tuning problems are solved simultaneously by using online learning in combination with Bayesian optimisation. The results of our extensive simulations show that the performance of our algorithm is up to 85% of the optimal and up to 63% better than state-of-the-art benchmarks.
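
    BOIS combines online learning with Bayesian optimisation to tune contest parameters. The Python sketch below shows what a generic Bayesian-optimisation loop over a single parameter (the prize amount) might look like, using a Gaussian-process surrogate and expected improvement; the response function, the kernel choice and the tuning budget are invented for illustration and are not taken from the paper.

        # Illustrative sketch only: generic Bayesian optimisation of one contest
        # parameter (the prize amount), not the BOIS algorithm. The utility
        # response, the kernel and the budget of 15 evaluations are assumptions.
        import numpy as np
        from scipy.stats import norm
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def observed_utility(prize):
            # Hypothetical noisy response of crowd utility to the prize amount.
            return -0.002 * (prize - 60.0) ** 2 + 10.0 + np.random.normal(0, 0.3)

        def expected_improvement(candidates, gp, best_y):
            mu, sigma = gp.predict(candidates, return_std=True)
            sigma = np.maximum(sigma, 1e-9)
            z = (mu - best_y) / sigma
            return (mu - best_y) * norm.cdf(z) + sigma * norm.pdf(z)

        candidates = np.linspace(10, 100, 91).reshape(-1, 1)   # prize grid
        X = [[20.0], [80.0]]                                   # initial probes
        y = [observed_utility(p[0]) for p in X]
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

        for _ in range(15):                                    # tuning budget
            gp.fit(np.array(X), np.array(y))
            ei = expected_improvement(candidates, gp, max(y))
            next_prize = float(candidates[np.argmax(ei)][0])
            X.append([next_prize])
            y.append(observed_utility(next_prize))

        print("best prize found:", X[int(np.argmax(y))][0])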

    An Incentive Compatible Multi-Armed-Bandit Crowdsourcing Mechanism with Quality Assurance

    Consider a requester who wishes to crowdsource a series of identical binary labeling tasks to a pool of workers so as to achieve an assured accuracy for each task in a cost-optimal way. The workers are heterogeneous with unknown but fixed qualities, and their costs are private. The problem is to select, for each task, an optimal subset of workers so that the outcome obtained from the selected workers guarantees a target accuracy level. The problem is challenging even in a non-strategic setting, since the accuracy of the aggregated label depends on the unknown qualities. We develop a novel multi-armed bandit (MAB) mechanism for solving this problem. First, we propose a framework, Assured Accuracy Bandit (AAB), which leads to an MAB algorithm, Constrained Confidence Bound for a Non-Strategic setting (CCB-NS). We derive an upper bound, which depends on the target accuracy level and the true qualities, on the number of time steps for which the algorithm chooses a sub-optimal set. A more challenging situation arises when the requester not only has to learn the qualities of the workers but also elicit their true costs. We modify the CCB-NS algorithm to obtain an adaptive, exploration-separated algorithm which we call Constrained Confidence Bound for a Strategic setting (CCB-S). The CCB-S algorithm produces an ex-post monotone allocation rule and can thus be transformed into an ex-post incentive compatible and ex-post individually rational mechanism that learns the qualities of the workers and guarantees a given target accuracy level in a cost-optimal way. We also provide a lower bound on the number of times any algorithm must select a sub-optimal set and show that this lower bound matches our upper bound up to a constant factor. Finally, we provide insights on the practical implementation of this framework through an illustrative example and show the efficacy of our algorithms through simulations.
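
    The core selection step described here is to find, for each task, a least-cost subset of workers whose aggregated answer meets a target accuracy, using confidence bounds on the unknown qualities. The brute-force Python sketch below illustrates that step for majority voting over independent workers; it is not CCB-NS, and the costs, the lower-confidence-bound values and the odd-subset restriction are assumptions made for the example.

        # Illustrative sketch only: cheapest worker subset whose majority vote
        # meets a target accuracy, given lower confidence bounds (LCBs) on worker
        # qualities. This is not CCB-NS; costs and LCB values are assumptions.
        from itertools import combinations, product

        def majority_accuracy(qualities):
            """Probability that a strict majority of independent workers is correct."""
            n, acc = len(qualities), 0.0
            for outcome in product([0, 1], repeat=n):      # 1 = worker correct
                if sum(outcome) * 2 > n:
                    p = 1.0
                    for q, o in zip(qualities, outcome):
                        p *= q if o else (1 - q)
                    acc += p
            return acc

        def cheapest_qualified_subset(lcb_qualities, costs, target):
            """Lowest-cost odd-sized subset whose LCB accuracy meets the target."""
            workers = range(len(costs))
            best, best_cost = None, float("inf")
            for k in range(1, len(costs) + 1, 2):          # odd sizes avoid ties
                for subset in combinations(workers, k):
                    cost = sum(costs[i] for i in subset)
                    if cost >= best_cost:
                        continue
                    if majority_accuracy([lcb_qualities[i] for i in subset]) >= target:
                        best, best_cost = subset, cost
            return best, best_cost

        # Usage: LCBs would come from observed agreement with aggregated labels.
        lcbs = [0.62, 0.71, 0.80, 0.66, 0.75]
        costs = [1.0, 1.5, 3.0, 1.2, 2.0]
        print(cheapest_qualified_subset(lcbs, costs, target=0.82))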

    Optimum Statistical Estimation with Strategic Data Sources

    We propose an optimum mechanism for providing monetary incentives to the data sources of a statistical estimator such as linear regression, so that high-quality data is provided at low cost, in the sense that the sum of payments and estimation error is minimized. The mechanism applies to a broad range of estimators, including linear and polynomial regression, kernel regression, and, under some additional assumptions, ridge regression. It also generalizes to several objectives, including minimizing estimation error subject to budget constraints. Besides our concrete results for regression problems, we contribute a mechanism design framework for designing and analyzing statistical estimators whose training examples are supplied by workers who incur a cost for labeling those examples.
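
    The objective described here, minimizing the sum of payments and estimation error, can be pictured with a toy version of the trade-off. In the Python sketch below the estimator is a simple inverse-variance-weighted mean, and each worker's label noise is assumed to shrink with payment as variance_i = c_i / t_i; both the effort-response model and the numbers are invented, and this is not the paper's mechanism.

        # Illustrative sketch only: a toy "payments + estimation error" trade-off,
        # NOT the paper's mechanism. The effort model variance_i = c_i / t_i and
        # the cost factors below are hypothetical assumptions.
        import numpy as np
        from scipy.optimize import minimize

        costs = np.array([1.0, 2.0, 4.0])       # hypothetical per-worker cost factors

        def objective(payments):
            payments = np.maximum(payments, 1e-9)
            variances = costs / payments          # assumed effort-response model
            est_variance = 1.0 / np.sum(1.0 / variances)   # inverse-variance weighting
            return est_variance + payments.sum()  # estimation error plus payments

        result = minimize(objective, x0=np.ones_like(costs),
                          bounds=[(0.0, None)] * len(costs))
        print("payments:", np.round(result.x, 3), "objective:", round(result.fun, 3))

    In this stylised model the optimum concentrates payment on the lowest-cost worker; the regression settings treated in the paper, where data points differ in informativeness, are considerably richer.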

    Incentive Mechanism Design for Crowdsourcing: An All-pay Auction Approach

    Crowdsourcing can be modeled as a principal-agent problem in which the principal (crowdsourcer) desires to solicit a maximal contribution from a group of agents (participants), while agents are only motivated to act according to their own respective advantages. To reconcile this tension, we propose an all-pay auction approach to incentivize agents to act in the principal's interest, i.e., maximizing profit, while allowing agents to reap strictly positive utility. Our rationale for advocating all-pay auctions is based on two merits that we identify: all-pay auctions (i) compress the common, two-stage bid-contribute crowdsourcing process into a single bid-cum-contribute stage, and (ii) eliminate the risk of task non-fulfillment. In our proposed approach, we enhance all-pay auctions with two additional features: an adaptive prize and a general crowdsourcing environment. The prize, or reward, adapts itself as a function of the unknown winning agent's contribution, and the environment, or setting, generally accommodates incomplete and asymmetric information, risk-averse (and risk-neutral) agents, and a stochastic (and deterministic) population. We analytically derive this all-pay auction-based mechanism and extensively evaluate it in comparison to classic and optimized mechanisms. The results demonstrate that our proposed approach remarkably outperforms its counterparts in terms of the principal's profit, the agents' utility, and social welfare.
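
    For readers unfamiliar with the mechanism family, the Python sketch below simulates a textbook all-pay contest with risk-neutral agents and i.i.d. uniform private valuations, where every agent sinks their contribution and only the highest contributor wins the prize. It illustrates the single bid-cum-contribute stage but none of the paper's extensions (adaptive prize, risk aversion, stochastic population), and the equilibrium bid formula applies only to this standard setting.

        # Illustrative sketch only: a standard all-pay contest simulation, not the
        # paper's adaptive-prize mechanism. Agents are risk-neutral with values
        # drawn i.i.d. from Uniform[0, 1]; all contributions are sunk.
        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, n_rounds = 5, 100_000

        def equilibrium_bid(v, n):
            # Symmetric Bayes-Nash equilibrium of the standard all-pay auction
            # with uniform values: beta(v) = (n - 1) / n * v ** n.
            return (n - 1) / n * v ** n

        values = rng.uniform(size=(n_rounds, n_agents))
        bids = equilibrium_bid(values, n_agents)

        principal_revenue = bids.sum(axis=1).mean()            # every bid is paid
        winner = bids.argmax(axis=1)
        winner_value = values[np.arange(n_rounds), winner]
        agent_surplus = (winner_value - bids.sum(axis=1)).mean()

        print(f"mean revenue per round: {principal_revenue:.3f}")
        print(f"mean surplus left to agents: {agent_surplus:.3f}")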

    The Four Pillars of Crowdsourcing: A Reference Model

    Crowdsourcing is an emerging business model in which tasks are accomplished by the general public, i.e., the crowd. Crowdsourcing has been used in a variety of disciplines, including information systems development, marketing and operationalization. It has been shown to be a successful model in recommendation systems, multimedia design and evaluation, database design, and search engine evaluation. Despite the increasing academic and industrial interest in crowdsourcing, there is still a high degree of diversity in the interpretation and the application of the concept. This paper analyses the literature and deduces a taxonomy of crowdsourcing. The taxonomy is meant to represent the different configurations of crowdsourcing along its four main pillars: the crowdsourcer, the crowd, the crowdsourced task and the crowdsourcing platform. Our outcome will help researchers and developers as a reference model to concretely and precisely state their particular interpretation and configuration of crowdsourcing.