    Does Money Matter? Motivational Factors for Participation in Paid- and Non-Profit-Crowdsourcing Communities

    Crowdsourcing, the use of an undefined group of external people to complete tasks for a corporation, has gained significantly in importance in recent years. Yet little is known about the factors that motivate participants to join crowdsourcing communities. This paper compares the findings of Kaufmann et al. [1], who conducted a study on MechanicalTurk, a profit-oriented software development crowdsourcing platform, with the results of a questionnaire posed to the members of MobileWorks, a non-profit crowdsourcing platform. Findings show that many motivational factors apply consistently whether for-profit or for-fun. However, some factors differ significantly; in particular, extrinsic factors are of far greater importance in for-profit communities. A deeper analysis reveals that society may see a larger trend towards crowdsourcing as a means of employment, as more and more individuals regard it as serious work and a reliable source of income.

    Feedback and Performance in Crowd Work: A Real Effort Experiment

    Online labor markets are gaining momentum: requesters frequently post micro-tasks, and workers choose which tasks to complete for payment. In virtual, short-lived, and commonly one-shot labor relations, one challenge is to properly incentivize worker effort and quality of work. We present a real effort experiment on a crowd work platform studying the effect of feedback on worker performance. Rank-order tournaments may or may not disclose a worker's current competitive position. One might expect that feedback on the competitive position spurs competition and, in effect, effort and performance. On the contrary, we find evidence that in rank-order tournaments, performance feedback tends to have a negative impact on workers' performance. This effect is mediated by task completion. Furthermore, when playing against strong competitors, feedback makes workers more likely to quit the task altogether and thus show lower performance. When the competitors are weak, workers tend to complete the task but with reduced effort. Thus, providing performance feedback might not be advisable in crowd labor markets.
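    The mechanism studied here, a rank-order tournament in which workers may or may not see their current competitive position, can be sketched as a toy simulation. This is not the paper's experimental software; the worker attributes (ability, rival_strength) and the quit/effort rules below are illustrative assumptions that merely mirror the reported pattern (quitting against strong rivals, coasting against weak ones).

    import random

    def rank_order_payout(scores, prizes):
        """Pay workers by rank: the best score receives the largest prize."""
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        return {worker: prize for (worker, _), prize in zip(ranked, prizes)}

    def run_round(workers, feedback=False):
        """One task round; feedback toggles whether workers see their competitive position."""
        scores = {}
        for w in workers:
            if feedback and w["rival_strength"] > w["ability"] and random.random() < 0.3:
                scores[w["id"]] = 0.0  # worker quits the task against a strong competitor
                continue
            # Against weaker competitors, feedback lets workers complete the task with reduced effort.
            effort = w["ability"] * (0.8 if feedback and w["rival_strength"] < w["ability"] else 1.0)
            scores[w["id"]] = effort + random.gauss(0, 0.1)
        return scores

    workers = [{"id": i, "ability": random.uniform(0.5, 1.0), "rival_strength": random.uniform(0.5, 1.0)}
               for i in range(4)]
    print(rank_order_payout(run_round(workers, feedback=True), prizes=[1.00, 0.50, 0.25, 0.0]))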

    Understanding and improving subjective measures in human-computer interaction

    In Human-Computer Interaction (HCI), research has shifted from a focus on usability and performance towards the holistic notion of User Experience (UX). Research into UX places special emphasis on concepts from psychology, such as emotion, trust, and motivation. Under this paradigm, elaborate methods are needed to capture the richness and diversity of subjective experiences. Although psychology offers a long-standing tradition of developing self-reported scales, it is currently undergoing radical changes in research and reporting practice. Hence, UX research faces several challenges, such as the widespread use of ad-hoc questionnaires with unknown or unsatisfactory psychometric properties, and a lack of replication and transparency. This thesis therefore addresses several gaps in the research by developing and validating self-reported scales in the domains of user motivation (manuscript 1), perceived user interface language quality (manuscript 2), and user trust (manuscript 3). Furthermore, issues of online research and practical considerations for ensuring data quality are examined empirically (manuscript 4). Overall, this thesis provides well-documented templates for scale development and may help improve scientific rigor in HCI.
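    As a concrete illustration of one psychometric property typically checked when validating a self-reported scale, internal consistency is often summarized with Cronbach's alpha. The snippet below is a minimal sketch with invented Likert-type responses; it is not material from the thesis, which covers a much broader validation process.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal-consistency estimate for a (participants x items) response matrix."""
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        k = items.shape[1]
        return k / (k - 1) * (1 - item_variances.sum() / total_variance)

    # Hypothetical responses of 5 participants to a 4-item scale (1-5 Likert)
    responses = np.array([
        [4, 5, 4, 5],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 4],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")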

    TRACE: A Stigmergic Crowdsourcing Platform for Intelligence Analysis

    Crowdsourcing has become a frequently adopted approach to solving various tasks, from conducting surveys to designing products. In the field of reasoning support, however, crowdsourcing research and applications have not been pursued extensively. Reasoning support is essential in intelligence analysis to help analysts mitigate various cognitive biases, enhance deliberation, and improve report writing. In this paper, we propose a novel approach to designing a crowdsourcing platform that facilitates stigmergic coordination, awareness, and communication for intelligence analysis. We have partly materialized our proposal in the form of a crowdsourcing system that supports intelligence analysis: TRACE (Trackable Reasoning and Analysis for Collaboration and Evaluation). We introduce several stigmergic approaches integrated into TRACE and discuss potential experiments with these approaches. We also explain the design implications for further development of TRACE and similar crowdsourcing systems to support reasoning.
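    Stigmergic coordination means that contributors coordinate indirectly, by reading and leaving traces in a shared artifact rather than by messaging each other. The sketch below is a generic illustration of that idea, not TRACE's actual architecture; the SharedWorkspace class, claim labels, and analyst names are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class SharedWorkspace:
        """Environment that mediates coordination: workers only read and leave traces."""
        traces: dict = field(default_factory=dict)  # claim -> list of evidence notes

        def read(self, claim):
            return self.traces.get(claim, [])

        def leave_trace(self, claim, note):
            self.traces.setdefault(claim, []).append(note)

    def contribute(workspace, worker, claim, note):
        # A worker builds on what earlier workers left behind, so coordination
        # emerges from the shared artifact rather than from direct communication.
        prior = workspace.read(claim)
        print(f"{worker} sees {len(prior)} earlier note(s) on '{claim}'")
        workspace.leave_trace(claim, f"{worker}: {note}")

    ws = SharedWorkspace()
    contribute(ws, "analyst_1", "hypothesis A", "supported by source X")
    contribute(ws, "analyst_2", "hypothesis A", "contradicted by source Y")
    print(ws.read("hypothesis A"))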