
    Bandit-Based Task Assignment for Heterogeneous Crowdsourcing

    We consider a task assignment problem in crowdsourcing, aimed at collecting as many reliable labels as possible within a limited budget. A challenge in this scenario is how to cope with the diversity of tasks and the task-dependent reliability of workers; e.g., a worker may be good at recognizing the names of sports teams but unfamiliar with cosmetics brands. We refer to this practical setting as heterogeneous crowdsourcing. In this paper, we propose a contextual bandit formulation for task assignment in heterogeneous crowdsourcing, which is able to deal with the exploration-exploitation trade-off in worker selection. We also theoretically investigate the regret bounds for the proposed method and demonstrate its practical usefulness experimentally.
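    The abstract does not specify the exact bandit algorithm, but a minimal sketch of contextual worker selection might use a LinUCB-style rule: each worker gets a ridge-regression model of expected label quality given task features, and the worker with the highest upper confidence bound is chosen. The class name, reward definition (1 if the worker's label agreed with the aggregated label), and context encoding below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical sketch: LinUCB-style worker selection for heterogeneous
# crowdsourcing. The context vector encodes task features (e.g., domain);
# the reward is assumed to be 1 if the worker's label was judged correct.

class LinUCBWorkerSelector:
    def __init__(self, n_workers, dim, alpha=1.0):
        self.alpha = alpha
        # One ridge-regression model per worker: A = X^T X + I, b = X^T r.
        self.A = [np.eye(dim) for _ in range(n_workers)]
        self.b = [np.zeros(dim) for _ in range(n_workers)]

    def select(self, context):
        """Pick the worker with the highest upper confidence bound."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b  # estimated task-dependent reliability weights
            ucb = theta @ context + self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(ucb)
        return int(np.argmax(scores))

    def update(self, worker, context, reward):
        """Update the chosen worker's model with the observed reward."""
        self.A[worker] += np.outer(context, context)
        self.b[worker] += reward * context
```

    The exploration bonus (the square-root term, scaled by alpha) is what trades off trying uncertain workers against exploiting workers already known to be reliable for similar task contexts.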

    Crowdsourced PAC Learning under Classification Noise

    In this paper, we analyze PAC learnability from labels produced by crowdsourcing. In our setting, unlabeled examples are drawn from a distribution, and labels are crowdsourced from workers who operate under classification noise, each with their own noise parameter. We develop an end-to-end crowdsourced PAC learning algorithm that takes unlabeled data points as input and outputs a trained classifier. Our three-step algorithm incorporates majority voting, pure-exploration bandits, and noisy-PAC learning. We prove several guarantees on the number of tasks labeled by workers for PAC learning in this setting and show that our algorithm improves upon the baseline by reducing the total number of tasks given to workers. We demonstrate the robustness of our algorithm by exploring its application to additional realistic crowdsourcing settings.
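    Of the three steps named in the abstract, majority voting is the simplest to illustrate: each example is labeled by several noisy workers and the most common label wins. The sketch below shows only that aggregation step, under the assumption of per-example label lists; the pure-exploration bandit and noisy-PAC stages are omitted.

```python
from collections import Counter

# Hypothetical sketch of the majority-voting aggregation step: each
# unlabeled example is labeled by several workers, and the most common
# label is kept. Worker noise rates are not modeled here.

def majority_vote(labels_per_example):
    """labels_per_example: one list of worker labels per example."""
    return [Counter(labels).most_common(1)[0][0] for labels in labels_per_example]

# Example: three workers label two binary tasks.
print(majority_vote([[1, 1, 0], [0, 0, 1]]))  # -> [1, 0]
```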

    Achieving Success in Community Crowdsourcing: Lessons from the Field

    Community crowdsourcing is a relatively new phenomenon in which local institutions, such as cities and neighborhoods, invite citizens to engage in public discussion and solve problems that directly affect them. While community crowdsourcing has been around for over a decade, relatively little is known about what drives the success of these initiatives. In this exploratory study, we analyze field data from over 1,000 community crowdsourcing projects hosted on a professional community crowdsourcing platform. Our exploration reveals interesting insights into the characteristics of community crowdsourcing projects that are associated with higher levels of user engagement. These insights allow us to speculate on guidelines for organizing and executing community crowdsourcing initiatives.

    Crowdsourcing complex workflows under budget constraints

    We consider the problem of task allocation in crowdsourcing systems with multiple complex workflows, each of which consists of a set of interdependent micro-tasks. We propose Budgeteer, an algorithm that solves this problem under a budget constraint. In particular, our algorithm first calculates an efficient way to allocate the budget to each workflow. It then determines the number of interdependent micro-tasks and the price to pay for each task within each workflow, given the corresponding budget constraints. We empirically evaluate it on a well-known crowdsourcing-based text correction workflow using Amazon Mechanical Turk, and show that Budgeteer can achieve similar levels of accuracy to current benchmarks while being on average 45% cheaper.
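    The abstract describes a two-stage structure (split the budget across workflows, then set task counts and prices within each), but not the allocation rule itself. The sketch below is a deliberately simple stand-in: a proportional split and a flat per-task price, both labeled assumptions rather than Budgeteer's actual mechanism.

```python
# Hypothetical sketch of the two-stage structure described in the abstract:
# (1) split the total budget across workflows, (2) within each workflow,
# decide how many micro-tasks to post and at what price. The proportional
# split and flat per-task price are illustrative assumptions, not
# Budgeteer's actual allocation rule.

def allocate(total_budget, workflow_weights, price_per_task=0.05):
    weight_sum = sum(workflow_weights)
    plans = []
    for w in workflow_weights:
        budget = total_budget * w / weight_sum   # stage 1: per-workflow budget
        n_tasks = int(budget // price_per_task)  # stage 2: tasks at this price
        plans.append({"budget": budget, "tasks": n_tasks, "price": price_per_task})
    return plans

# Example: $10 split across three workflows weighted 2:1:1.
print(allocate(10.0, [2, 1, 1]))
```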