    Factors Influencing the Participation of Crowdsourcing Solvers: Benefit or Cost

    Crowdsourcing has become a new channel for companies and organizations to collect the wisdom of crowds and reach business objectives. How to effectively motivate user participation and improve the quality of solutions has become an important issue in crowdsourcing research. While the influence of benefit factors on user participation has been widely tested, the understanding of cost factors in the extant literature is still insufficient. Based on social exchange theory, this paper proposes a research model to explain the impacts of benefit and cost factors on solver participation behavior, as well as the moderating role of task complexity, in crowdsourcing. The model will be tested using data from an online translation crowdsourcing task in which solvers were invited to complete the translation and fill out a questionnaire. The paper explores how the factors that affect solvers' participation intention differ from those that affect the quality of solutions. In addition, the role of task complexity can be examined by designing translation tasks of varying complexity and randomly assigning solvers to them.
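    The moderation analysis this study plans is, in statistical terms, a regression with interaction terms. The sketch below is a minimal illustration of that idea, assuming simulated data, hypothetical variable names (perceived_benefit, perceived_cost, task_complexity, participation), and plain OLS; the abstract does not specify the paper's actual constructs or estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # simulated respondents, one row per solver

df = pd.DataFrame({
    "perceived_benefit": rng.normal(0.0, 1.0, n),
    "perceived_cost": rng.normal(0.0, 1.0, n),
    "task_complexity": rng.integers(0, 2, n),  # 0 = simple task, 1 = complex task
})
# Simulated outcome: benefits raise participation, costs lower it, and
# complexity strengthens the negative effect of cost (the moderation).
df["participation"] = (
    0.5 * df.perceived_benefit
    - 0.3 * df.perceived_cost
    - 0.4 * df.perceived_cost * df.task_complexity
    + rng.normal(0.0, 1.0, n)
)

# The interaction terms carry the moderating role of task complexity.
model = smf.ols(
    "participation ~ perceived_benefit * task_complexity"
    " + perceived_cost * task_complexity",
    data=df,
).fit()
print(model.summary())
```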

    Crowdsourcing complex workflows under budget constraints

    We consider the problem of task allocation in crowdsourcing systems with multiple complex workflows, each of which consists of a set of interdependent micro-tasks. We propose Budgeteer, an algorithm that solves this problem under a budget constraint. In particular, our algorithm first calculates an efficient way to allocate the budget across workflows. It then determines the number of interdependent micro-tasks and the price to pay for each task within each workflow, given the corresponding budget constraints. We empirically evaluate Budgeteer on a well-known crowdsourcing-based text correction workflow using Amazon Mechanical Turk and show that it achieves accuracy similar to current benchmarks while being on average 45% cheaper.
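    The abstract describes a two-level decision: split a global budget across workflows, then decide how much redundancy each workflow's micro-tasks can afford at a given price. The sketch below illustrates that structure under two stated assumptions, a concave (diminishing-returns) utility per workflow and a simple greedy allocation rule; neither is Budgeteer's actual algorithm, and all names and numbers are made up.

```python
import math

def allocate_budget(values, total_budget, step=1.0):
    """Greedily give `step` units of budget to the workflow whose
    weighted log-utility gains the most, until the budget runs out."""
    alloc = {w: 0.0 for w in values}
    spent = 0.0
    while spent + step <= total_budget:
        best = max(
            values,
            key=lambda w: values[w]
            * (math.log1p(alloc[w] + step) - math.log1p(alloc[w])),
        )
        alloc[best] += step
        spent += step
    return alloc

def redundancy(budget, n_micro_tasks, price_per_task):
    """How many answers per micro-task this workflow's budget can buy."""
    return int(budget // (n_micro_tasks * price_per_task))

# Two hypothetical workflows; "text_fix" is twice as valuable per unit budget.
budgets = allocate_budget({"text_fix": 2.0, "image_label": 1.0}, total_budget=100.0)
for name, b in budgets.items():
    print(name, b, "answers per micro-task:", redundancy(b, 20, 0.10))
```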

    A data-driven game theoretic strategy for developers in software crowdsourcing: a case study

    Crowdsourcing is cost-effective and saves time; it is a typical embodiment of collective wisdom and of collaborative development by community workers. However, this development paradigm has not been widely adopted in software engineering. One important reason is that requesters have limited knowledge of crowd workers' professional skills and qualities. Another is that crowd workers in a competition may not receive an appropriate reward, which hurts their motivation. To address this problem, this paper proposes a method for maximizing reward based on workers' crowdsourcing ability, so that workers can choose tasks matching their abilities and obtain appropriate bonuses. Our method has two steps. First, it evaluates each crowd worker's ability and, on that basis, analyzes the intensity of competition for tasks on Topcoder.com, an open crowdsourcing community platform. Second, following dynamic programming ideas, it builds game models under complete information for different cases and derives a reward-maximization strategy for workers by solving for a mixed-strategy Nash equilibrium. We carry out experiments on crowdsourcing data from Topcoder.com. The results show that workers' crowdsourcing ability is unevenly distributed and, to some extent, reflects the activity level of crowdsourcing tasks. Meanwhile, by following the reward-maximization strategy, a crowd worker can obtain the theoretically maximum reward.
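    To make the equilibrium step concrete, the sketch below solves a 2x2 entry game between two workers deciding whether to compete for the same task. The game, its payoff numbers (reward 10, entry cost 6), and the closed-form indifference solution are illustrative assumptions; the paper's models, built from Topcoder.com data, are richer.

```python
import numpy as np

def mixed_nash_2x2(A, B):
    """Interior mixed-strategy Nash equilibrium of a 2x2 bimatrix game.

    A[i, j] / B[i, j]: payoff to the row / column player when the row
    player picks action i and the column player picks action j.
    Returns (p, q): the probability each player puts on action 0.
    Each player randomizes so that the *opponent* is indifferent.
    """
    A, B = np.asarray(A, float), np.asarray(B, float)
    # p makes the column player indifferent between its two columns.
    p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
    # q makes the row player indifferent between its two rows.
    q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[1, 0] - A[0, 1] + A[1, 1])
    return p, q

# Action 0 = "enter the contest", action 1 = "stay out".
# Reward 10, entry cost 6: entering alone pays 10 - 6 = 4, entering
# together pays an expected 10/2 - 6 = -1, staying out pays 0.
A = [[-1, 4], [0, 0]]   # row worker's payoffs
B = [[-1, 0], [4, 0]]   # column worker's payoffs
print(mixed_nash_2x2(A, B))  # -> (0.8, 0.8): each enters with prob 0.8
```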

    T-Crowd: Effective Crowdsourcing for Tabular Data

    Crowdsourcing employs human workers to solve computer-hard problems such as data cleaning, entity resolution, and sentiment analysis. When crowdsourcing tabular data, e.g., the attribute values of an entity set, a worker's answers on the different attributes (e.g., the nationality and age of a celebrity) are often treated independently. This assumption is not always true and can lead to suboptimal crowdsourcing performance. In this paper, we present the T-Crowd system, which takes into consideration the intricate relationships among tasks in order to converge faster to their true values. In particular, T-Crowd integrates each worker's answers on different attributes to effectively learn his/her trustworthiness and the true data values. The attribute relationship information is also used to guide task allocation to workers. Finally, T-Crowd seamlessly supports categorical and continuous attributes, the two main datatypes found in typical databases. Our extensive experiments on real and synthetic datasets show that T-Crowd outperforms state-of-the-art methods in truth inference and in reducing the cost of crowdsourcing.
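    The core idea, pooling one worker's answers across attributes into a single trust score that then reweights every vote, can be sketched with generic iterative weighted voting. The toy data, the agreement-rate trust update, and the plain majority rule below are all simplifying assumptions; T-Crowd's actual model additionally exploits attribute correlations and handles continuous values.

```python
from collections import Counter, defaultdict

# answers[(entity, attribute)][worker] = the reported (categorical) value.
answers = {
    ("celebrity_1", "nationality"): {"w1": "US", "w2": "US", "w3": "UK"},
    ("celebrity_1", "age_bucket"):  {"w1": "40s", "w2": "40s", "w3": "40s"},
    ("celebrity_2", "nationality"): {"w1": "FR", "w2": "DE", "w3": "FR"},
}

trust = defaultdict(lambda: 1.0)  # start every worker at equal trust

for _ in range(10):  # alternate truth estimation and trust updates
    # 1. Infer each attribute value by trust-weighted voting.
    truths = {}
    for key, votes in answers.items():
        weights = Counter()
        for worker, value in votes.items():
            weights[value] += trust[worker]
        truths[key] = weights.most_common(1)[0][0]
    # 2. Re-estimate each worker's trust as their agreement rate with the
    #    current truths, pooled across *all* attributes they answered.
    hits, total = defaultdict(int), defaultdict(int)
    for key, votes in answers.items():
        for worker, value in votes.items():
            total[worker] += 1
            hits[worker] += value == truths[key]
    for worker in total:
        trust[worker] = hits[worker] / total[worker]

print(truths)       # inferred attribute values
print(dict(trust))  # per-worker trust, shared across attributes
```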