
    Gamification in crowdsourcing: A review

    This study investigates how different gamification implementations can increase crowdsourcees' motivation and participation in crowdsourcing (CS). To this end, we review the empirical literature on the use of gamification in crowdsourcing settings. Overall, the results of the review indicate that gamification has been an effective approach for increasing crowdsourcing participation. Comparing crowdcreating, crowdsolving, crowdprocessing, and crowdrating CS approaches, the results show differences in the use of gamification across CS types. Crowdsourcing initiatives that provide more monotonous tasks most commonly used mere points and other simpler gamification implementations, whereas CS initiatives that seek diverse and creative contributions have employed gamification in more manifold ways, drawing on a richer set of mechanics. These findings provide insights for designers of gamified systems and for further research on gamification and crowdsourcing.

    Tailoring a Points Scoring Mechanism for Crowd-Based Knowledge Pooling

    We address the design of point-scoring mechanisms in games for crowds, to promote users' motivation to contribute knowledge. We measure the effectiveness of the scoring mechanism on users' performance across three types of crowd: the general public, students within their field of study, and general students. The conditions were a reward-free game (control group) and two reward-based systems differing in the scoring algorithm applied (linear y = 3x vs. exponential y = 6e^x). Results support the importance of the mathematical function used for score assignment as a motivator for knowledge contribution, and indicate that the scoring mechanism design should be tailored to the type of crowd. These findings provide insights for designers of gamified systems on how to improve knowledge contributions in crowd-based systems.
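    The two reward conditions contrasted in the abstract can be sketched as simple scoring functions. This is an illustrative reconstruction only — the function names are hypothetical, and the exponential form y = 6e^x is assumed from the abstract's notation, with x taken as the number of accepted contributions.

    ```python
    import math

    def linear_score(x: int) -> float:
        """Linear condition: y = 3x, a fixed 3 points per contribution."""
        return 3.0 * x

    def exponential_score(x: int) -> float:
        """Exponential condition: y = 6e^x, rewards grow rapidly with volume."""
        return 6.0 * math.exp(x)

    # The exponential scheme quickly dwarfs the linear one, which is one
    # plausible reason the two conditions motivate contributors differently:
    for x in range(4):
        print(x, linear_score(x), round(exponential_score(x), 1))
    ```

    Even at small contribution counts the gap is large (at x = 3, the linear scheme awards 9 points versus roughly 120 for the exponential one), which is the kind of design difference the study's crowd-type comparison speaks to.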

    Analyzing crowd workers' learning behavior to obtain more reliable labels

    Crowdsourcing is a popular means to obtain high-quality labels for datasets at moderate costs. These crowdsourced datasets are then used for training supervised or semi-supervised predictors. This implies that the performance of the resulting predictors depends on the quality and reliability of the labels that crowd workers assigned – low reliability usually leads to poorly performing predictors. In practice, label reliability in crowdsourced datasets varies substantially depending on multiple factors, such as the difficulty of the labeling task at hand, the characteristics and motivation of the participating crowd workers, or the difficulty of the documents to be labeled. Different approaches exist to mitigate the effects of these factors, for example by identifying spammers based on their annotation times and removing their submitted labels. To complement existing approaches for improving label reliability in crowdsourcing, this thesis explores label reliability from two perspectives: first, how the label reliability of crowd workers develops over time during an actual labeling task, and second, how it is affected by the difficulty of the documents to be labeled. We find that the label reliability of crowd workers increases after they have labeled a certain number of documents. Motivated by our finding that label reliability is lower for more difficult documents, we propose a new crowdsourcing methodology to improve label reliability: given an unlabeled dataset to be crowdsourced, we first train a difficulty predictor on a small seed set; the predictor then estimates the difficulty of the remaining unlabeled documents. This procedure may be repeated until the performance of the difficulty predictor is sufficient. Ultimately, difficult documents are separated from the rest, so that only the latter are crowdsourced. Our experiments demonstrate the feasibility of this method.
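    The proposed methodology — train a difficulty predictor on a small labeled seed set, then crowdsource only the documents it predicts to be easy — can be sketched in outline. The predictor below (a single average-word-length feature with a midpoint threshold) is a toy stand-in for whatever model the thesis actually uses; all names and the feature choice are illustrative assumptions.

    ```python
    def train_difficulty_predictor(seed_docs, seed_is_difficult):
        """Fit a one-feature toy predictor: mean word length per document.
        The threshold is the midpoint between the easy and difficult class means."""
        def feature(doc):
            words = doc.split()
            return sum(len(w) for w in words) / max(len(words), 1)

        easy = [feature(d) for d, y in zip(seed_docs, seed_is_difficult) if not y]
        hard = [feature(d) for d, y in zip(seed_docs, seed_is_difficult) if y]
        threshold = (sum(easy) / len(easy) + sum(hard) / len(hard)) / 2
        return lambda doc: feature(doc) >= threshold  # True = predicted difficult

    def split_by_difficulty(seed_docs, seed_is_difficult, unlabeled_docs):
        """Partition unlabeled docs; only the easy partition would be crowdsourced."""
        is_difficult = train_difficulty_predictor(seed_docs, seed_is_difficult)
        easy = [d for d in unlabeled_docs if not is_difficult(d)]
        hard = [d for d in unlabeled_docs if is_difficult(d)]
        return easy, hard
    ```

    In the thesis's iterated variant, this split-and-retrain step would be repeated, growing the seed set, until the predictor's performance is deemed sufficient.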

    Improving User Involvement Through Live Collaborative Creation

    Creating an artifact - such as writing a book, developing software, or performing a piece of music - is often limited to those with domain-specific experience or training. As a consequence, effectively involving non-expert end users in such creative processes is challenging. This work explores how computational systems can facilitate collaboration, communication, and participation in the context of involving users in the process of creating artifacts while mitigating the challenges inherent to such processes. In particular, the interactive systems presented in this work support live collaborative creation, in which artifact users collaboratively participate in the artifact creation process with creators in real time. In the systems that I have created, I explored liveness, the extent to which the process of creating artifacts and the state of the artifacts are immediately and continuously perceptible, for applications such as programming, writing, music performance, and UI design. Liveness helps preserve natural expressivity, supports real-time communication, and facilitates participation in the creative process. Live collaboration is beneficial for users and creators alike: making the process of creation visible encourages users to engage in the process and better understand the final artifact. Additionally, creators can receive immediate feedback in a continuous, closed loop with users. Through these interactive systems, non-expert participants help create such artifacts as GUI prototypes, software, and musical performances. 
This dissertation explores three topics: (1) the challenges inherent to collaborative creation in live settings, and computational tools that address them; (2) methods for reducing the barriers of entry to live collaboration; and (3) approaches to preserving liveness in the creative process, affording creators more expressivity in making artifacts and affording users access to information traditionally only available in real-time processes. In this work, I show that enabling collaborative, expressive, and live interactions in computational systems allows the broader population to take part in various creative practices.
    PhD dissertation, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/145810/1/snaglee_1.pd