
    A Full Probabilistic Model for Yes/No Type Crowdsourcing in Multi-Class Classification

    Crowdsourcing has become widely used in supervised scenarios where training sets are scarce and difficult to obtain. Most crowdsourcing models in the literature assume labelers can provide answers to full questions. In classification contexts, full questions require a labeler to discern among all possible classes. Unfortunately, discernment is not always easy in realistic scenarios: labelers may not be experts in differentiating all classes. In this work, we provide a full probabilistic model for a shorter type of query that requires only "yes" or "no" responses. Our model estimates a joint posterior distribution over the matrices describing labelers' confusions and the posterior probability of the class of every object. We developed an approximate inference approach using Monte Carlo sampling and Black Box Variational Inference, for which we derive the necessary gradients. We built two realistic crowdsourcing scenarios to test our model: the first queries labelers about irregular astronomical time series; the second relies on the image classification of animals. We achieved results comparable with those of full-query crowdsourcing. Furthermore, we show that modeling labelers' failures plays an important role in estimating the true classes. Finally, we provide the community with two real datasets obtained from our crowdsourcing experiments. All our code is publicly available. Comment: SIAM International Conference on Data Mining (SDM19), 9 official pages, 5 supplementary pages.
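
    The abstract does not reproduce the paper's BBVI derivation, but the aggregation idea can be illustrated with a much simpler EM-style sketch: each labeler gets a sensitivity and a specificity, and yes/no answers to "does object i belong to class k?" queries are combined into per-object class posteriors. The data layout and all names below are assumptions for illustration, not the authors' code.

        # Minimal EM-style sketch (an assumption, not the paper's BBVI model):
        # labeler j has sensitivity s[j] = P(says "yes" | query class is true)
        # and specificity t[j] = P(says "no" | query class is not true).
        # Hypothetical data layout: answers[(i, j, k)] = 1 if labeler j answered
        # "yes" when asked whether object i belongs to class k, else 0.
        import numpy as np

        def em_yes_no(answers, n_objects, n_labelers, n_classes, iters=50):
            s = np.full(n_labelers, 0.8)   # initial sensitivity guess
            t = np.full(n_labelers, 0.8)   # initial specificity guess
            for _ in range(iters):
                # E-step: per-object class posterior under current s, t.
                logp = np.zeros((n_objects, n_classes))
                for (i, j, k), yes in answers.items():
                    for c in range(n_classes):
                        if c == k:   # candidate class matches the query
                            p = s[j] if yes else 1.0 - s[j]
                        else:        # candidate class does not match
                            p = 1.0 - t[j] if yes else t[j]
                        logp[i, c] += np.log(p)
                post = np.exp(logp - logp.max(axis=1, keepdims=True))
                post /= post.sum(axis=1, keepdims=True)
                # M-step: re-estimate each labeler (Beta(1,1) smoothing).
                num_s = np.ones(n_labelers); den_s = np.full(n_labelers, 2.0)
                num_t = np.ones(n_labelers); den_t = np.full(n_labelers, 2.0)
                for (i, j, k), yes in answers.items():
                    w = post[i, k]          # current P(true class of i is k)
                    num_s[j] += w * yes
                    den_s[j] += w
                    num_t[j] += (1 - w) * (1 - yes)
                    den_t[j] += 1 - w
                s, t = num_s / den_s, num_t / den_t
            return post, s, t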

    A data-driven game theoretic strategy for developers in software crowdsourcing: a case study

    Crowdsourcing has the advantages of being cost-effective and saving time; it is a typical embodiment of collective wisdom and collaborative development by community workers. However, this development paradigm of software crowdsourcing has not been widely adopted. One important reason is that requesters have limited knowledge of crowd workers’ professional skills and qualities. Another is that crowd workers in a competition may not receive an appropriate reward, which undermines their motivation. To address this problem, this paper proposes a method of maximizing reward based on workers’ crowdsourcing ability, so that workers can choose tasks suited to their own abilities and obtain appropriate bonuses. Our method includes two steps. First, it puts forward a method to evaluate a crowd worker’s ability and then analyzes the intensity of competition for tasks on Topcoder.com—an open community crowdsourcing platform—on the basis of that ability. Second, it follows dynamic-programming ideas and builds complete-information game models for different cases, offering a reward-maximization strategy for workers by solving for a mixed-strategy Nash equilibrium. This paper employs crowdsourcing data from Topcoder.com to carry out experiments. The experimental results show that the distribution of workers’ crowdsourcing ability is uneven and can, to some extent, indicate the activity level of crowdsourcing tasks. Meanwhile, following the reward-maximization strategy, a crowd worker can obtain the theoretically maximal reward.
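
    The abstract does not spell out the game's payoff structure, but the mixed-strategy Nash equilibrium step can be illustrated on a toy two-worker, two-task game solved by the standard indifference condition. The payoff numbers below are invented for illustration; Topcoder's actual bonus structure is not modeled.

        # Toy two-worker, two-task game solved by the indifference condition
        # (payoffs are invented; this is not the paper's Topcoder model).
        import numpy as np

        def mixed_nash_2x2(A, B):
            """A[i, j], B[i, j]: rewards of worker 1 / worker 2 when worker 1
            picks task i and worker 2 picks task j. Returns (p, q): each
            worker's equilibrium probability of picking task 0."""
            # Worker 1 mixes so that worker 2 is indifferent between tasks.
            p = (B[1, 1] - B[1, 0]) / (B[0, 0] - B[0, 1] - B[1, 0] + B[1, 1])
            # Worker 2 mixes so that worker 1 is indifferent between tasks.
            q = (A[1, 1] - A[0, 1]) / (A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1])
            return p, q

        # Hypothetical bonuses: task 0 pays 100, task 1 pays 60, and workers
        # who collide on a task split its bonus, which makes mixing optimal.
        A = np.array([[50., 100.], [60., 30.]])   # worker 1's expected reward
        B = np.array([[50., 60.], [100., 30.]])   # worker 2's expected reward
        print(mixed_nash_2x2(A, B))               # -> (0.875, 0.875)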

    Comparing Strategies for Winning Expert-rated and Crowd-rated Crowdsourcing Contests: First Findings

    Many studies have examined expert-rated crowdsourcing contests, but few have examined crowd-rated contests, in which winners are determined by the crowd’s votes. Because the rating mechanisms differ, the determinants of winning may differ between the two types of contests. Based on previous studies, we identify three types of winning determinants: expertise, submission timing, and social capital. Our initial investigation, based on 91 entries from two contests on Zooppa, supports that these variables play different roles in winning crowd-rated contests than in winning expert-rated contests. Specifically, past winning experience in crowd-rated contests predicts future success in crowd-rated contests, while past winning experience in expert-rated contests predicts future success in expert-rated contests. We discover a U-shaped relationship between submission time and winning in both types of contests. Social capital elevates the probability of winning a crowd-rated contest only if the social capital is sufficiently high.
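
    A U-shaped submission-time effect of the kind reported here is typically tested with a quadratic term in a logistic regression. The sketch below demonstrates that test on synthetic data; the variable names and data are illustrative assumptions, not the study's Zooppa sample.

        # Quadratic logistic regression on synthetic data: a positive
        # coefficient on the squared term indicates a U-shape.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        time_frac = rng.uniform(0, 1, n)   # submission time: 0 = open, 1 = deadline
        # Build in a U-shape: early and late entries win more than mid-contest ones.
        logit = 1.5 - 8 * time_frac + 8 * time_frac ** 2
        win = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([time_frac, time_frac ** 2]))
        fit = sm.Logit(win, X).fit(disp=0)
        print(fit.params)   # expect a negative linear and positive quadratic term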

    Collaboration among Crowdsourcees: Towards a Design Theory for Collaboration Process Design

    Crowdsourcing is used for collaborative problem solving in different domains, and the key to optimal solutions mostly lies in collaboration among the crowdsourcees. Current research in this field addresses the topic mainly with an explorative focus on a specific domain, such as idea contests. We gather and analyze contributions from the different domains on collaboration in crowdsourcing and present a framework for a general collaboration process model for crowdsourcing. To derive this framework, we conducted a literature review and set up a database that assigns the literature to the process steps we identified from interaction patterns in the literature. The framework considers phases before and after the collaboration among crowdsourcees and includes relevant activities that can influence the collaboration process. This paper contributes to a deeper understanding of the interaction among crowdsourcees and provides crowdsourcers with a grounding for the informed design of effective collaborative crowdsourcing processes.

    Understanding Crowdsourcing Contest Fitness Strategic Decision Factors and Performance: An Expectation-Confirmation Theory Perspective

    Contest-based intermediary crowdsourcing represents a powerful new business model for generating ideas or solutions by engaging the crowd through an online competition. Prior research has examined motivating factors such as increased monetary reward and demotivating factors such as project-requirement ambiguity. However, issues related to crowd contest fitness have received little attention, particularly with regard to the crowd’s strategic decision-making and the contest outcomes that are critical for the success of crowdsourcing platforms as well as for implementing crowdsourcing models in organizations. Using Expectation-Confirmation Theory (ECT), we take a different approach that focuses on contest-level outcomes by developing a model to explain contest duration and performance. We postulate that these contest outcomes are a function of managing crowdsourcing participants’ contest-fitness expectations and disconfirmation, particularly during the bidding process. Our empirical results show that contest-fitness expectations and disconfirmation have an overall positive effect on contest performance. This study contributes to theory by demonstrating the adaptability of the ECT literature to the online crowdsourcing domain at the level of the project contest. For practice, it offers important insights into strategic decision-making and into how crowd contest fitness can be managed to enhance outcomes related to platform viability and successful organizational implementation.
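
    As a rough illustration of the ECT-style specification the abstract implies — performance as a function of contest-fitness expectation and disconfirmation (observed fitness minus expectation) — here is a minimal regression sketch on synthetic data. Variable names and coefficients are assumptions, not the study's measures.

        # Illustrative OLS: performance ~ expectation + disconfirmation,
        # where disconfirmation = observed fitness - expected fitness.
        # All variables and coefficients below are synthetic assumptions.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 300
        expectation = rng.normal(0, 1, n)          # pre-bid contest-fitness expectation
        observed = expectation + rng.normal(0, 1, n)
        disconfirmation = observed - expectation   # positive = better than expected
        performance = 0.4 * expectation + 0.6 * disconfirmation + rng.normal(0, 1, n)

        X = sm.add_constant(np.column_stack([expectation, disconfirmation]))
        print(sm.OLS(performance, X).fit().params)   # recovers ~[0, 0.4, 0.6]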

    Divergent Innovation: Directing the Wisdom of Crowd to Tackle Societal Challenges

    Crowdsourcing is acknowledged as a promising avenue for addressing societal challenges by drawing on the wisdom of the crowd to offer diverse solutions to complex problems. Advancing a new conceptual framework of ‘divergent innovation’, which delineates between topic and quality divergence as focal metrics of performance when crowdsourcing solutions to societal challenges, this study investigates the impact of four ideation stimuli on divergent innovation: task description concreteness, resource richness, topic entropy, and judging criteria comprehensiveness. Empirical analysis based on data sourced from an online crowd-ideation platform reveals that task description concreteness negatively affects topic divergence but positively influences quality divergence, whereas resource richness positively affects topic divergence but negatively influences quality divergence. Additionally, the relationship between topic entropy and topic divergence is U-shaped, with no significant impact on quality divergence. These findings contribute to the extant literature on crowdsourcing and offer invaluable insights for practitioners.
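
    The abstract does not define how topic entropy is operationalized; a common reading is the Shannon entropy of a task's topic-proportion vector (e.g., from an LDA model of the task description), which the sketch below computes. This is an assumption, not the paper's published measure.

        # One plausible operationalization (an assumption): Shannon entropy
        # of a task's topic-proportion vector, e.g. from an LDA model of
        # the task description.
        import numpy as np

        def topic_entropy(theta, eps=1e-12):
            """Shannon entropy (nats) of a topic-proportion vector."""
            theta = np.asarray(theta, dtype=float)
            theta = theta / theta.sum()
            return float(-np.sum(theta * np.log(theta + eps)))

        print(topic_entropy([0.7, 0.2, 0.1]))           # focused brief: low entropy
        print(topic_entropy([0.25, 0.25, 0.25, 0.25]))  # diffuse brief: high entropy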