
    An Incentive Compatible Multi-Armed-Bandit Crowdsourcing Mechanism with Quality Assurance

    Consider a requester who wishes to crowdsource a series of identical binary labeling tasks to a pool of workers so as to achieve an assured accuracy for each task in a cost-optimal way. The workers are heterogeneous, with unknown but fixed qualities, and their costs are private. The problem is to select for each task an optimal subset of workers so that the outcome obtained from the selected workers guarantees a target accuracy level. The problem is challenging even in a non-strategic setting, since the accuracy of the aggregated label depends on the unknown qualities. We develop a novel multi-armed bandit (MAB) mechanism for solving this problem. First, we propose a framework, Assured Accuracy Bandit (AAB), which leads to an MAB algorithm, Constrained Confidence Bound for a Non-Strategic setting (CCB-NS). We derive an upper bound on the number of time steps in which the algorithm chooses a sub-optimal set; the bound depends on the target accuracy level and the true qualities. A more challenging situation arises when the requester not only has to learn the qualities of the workers but also to elicit their true costs. We modify the CCB-NS algorithm to obtain an adaptive, exploration-separated algorithm which we call Constrained Confidence Bound for a Strategic setting (CCB-S). The CCB-S algorithm produces an ex-post monotone allocation rule and thus can be transformed into an ex-post incentive compatible and ex-post individually rational mechanism that learns the qualities of the workers and guarantees a given target accuracy level in a cost-optimal way. We provide a lower bound on the number of times any algorithm must select a sub-optimal set, and this lower bound matches our upper bound up to a constant factor. We provide insights into the practical implementation of this framework through an illustrative example, and we show the efficacy of our algorithms through simulations.
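    The selection step the abstract describes — pick a cheap subset of workers whose aggregated (majority-vote) label meets a target accuracy, using optimistic quality estimates — can be sketched as follows. This is an illustrative sketch only, not the authors' CCB-NS algorithm: the UCB-style bound, the greedy-by-cost selection, and all function names here are assumptions.

    ```python
    import math

    def majority_accuracy(qualities):
        """Probability that a majority vote of independent workers with the
        given correctness probabilities yields the correct binary label
        (ties split evenly). Computed by dynamic programming over the
        number of correct workers."""
        probs = [1.0]  # probs[k] = P(exactly k workers correct so far)
        for q in qualities:
            new = [0.0] * (len(probs) + 1)
            for k, p in enumerate(probs):
                new[k] += p * (1 - q)      # this worker wrong
                new[k + 1] += p * q        # this worker correct
            probs = new
        n = len(qualities)
        acc = sum(p for k, p in enumerate(probs) if k > n / 2)
        acc += 0.5 * sum(p for k, p in enumerate(probs) if k == n / 2)
        return acc

    def ucb_select(successes, attempts, costs, t, target):
        """Greedily add the cheapest workers until the optimistic
        (upper-confidence-bound) estimate of majority-vote accuracy
        reaches the target. The bound's exact form is a generic
        UCB1-style assumption."""
        ucb = {i: min(1.0, successes[i] / attempts[i]
                      + math.sqrt(2 * math.log(t) / attempts[i]))
               for i in successes}
        chosen = []
        for i in sorted(ucb, key=lambda i: costs[i]):
            chosen.append(i)
            if majority_accuracy([ucb[j] for j in chosen]) >= target:
                break
        return chosen
    ```

    For example, three independent workers of true quality 0.8 give a majority vote accurate with probability 3(0.8^2)(0.2) + 0.8^3 = 0.896, which is why small sets of moderately reliable workers can meet a high accuracy target.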

    The Four Pillars of Crowdsourcing: A Reference Model

    Crowdsourcing is an emerging business model in which tasks are accomplished by the general public: the crowd. Crowdsourcing has been used in a variety of disciplines, including information systems development, marketing, and operationalization. It has been shown to be a successful model in recommendation systems, multimedia design and evaluation, database design, and search engine evaluation. Despite the increasing academic and industrial interest in crowdsourcing, there is still a high degree of diversity in the interpretation and application of the concept. This paper analyses the literature and deduces a taxonomy of crowdsourcing. The taxonomy represents the different configurations of crowdsourcing along its four main pillars: the crowdsourcer, the crowd, the crowdsourced task, and the crowdsourcing platform. Our outcome serves researchers and developers as a reference model for concretely and precisely stating their particular interpretation and configuration of crowdsourcing.

    Winner-Take-All Crowdsourcing Contests with Stochastic Production

    We study winner-take-all contests for crowdsourcing procurement in a model of costly effort and stochastic production. The principal announces a prize value P, agents simultaneously select a level of costly effort to exert toward production, yielding stochastic quality results, and the agent who produces the highest-quality good is paid P by the principal. We derive conditions on the probabilistic mapping from effort to quality under which this contest paradigm yields efficient equilibrium outcomes, and we demonstrate that these conditions are satisfied in a range of canonical settings.
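    The contest mechanics described above can be simulated in a few lines. This is a hedged sketch under assumed functional forms — quality as effort plus Gaussian noise and a linear effort cost — which the abstract does not specify; the function and parameter names are hypothetical.

    ```python
    import random

    def run_contest(efforts, prize, cost_per_effort=1.0, noise=1.0, rng=None):
        """Simulate one winner-take-all contest round.
        Assumed production model: quality_i = effort_i + N(0, noise).
        The highest-quality agent receives the prize; every agent pays
        a linear cost for the effort exerted."""
        rng = rng or random.Random(0)
        qualities = [e + rng.gauss(0, noise) for e in efforts]
        winner = max(range(len(efforts)), key=lambda i: qualities[i])
        payoffs = [-cost_per_effort * e for e in efforts]
        payoffs[winner] += prize
        return winner, payoffs
    ```

    With `noise=0.0` the mapping from effort to quality is deterministic and the highest-effort agent always wins; raising `noise` lets low-effort agents win occasionally, which is the stochastic-production feature the paper's equilibrium conditions must accommodate.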