
    Characterising Volunteers' Task Execution Patterns Across Projects on Multi-Project Citizen Science Platforms

    Citizen science projects engage people in activities that are part of a scientific research effort. On multi-project citizen science platforms, scientists can create projects consisting of tasks, and volunteers, in turn, participate by executing those tasks. Such platforms seek to connect volunteers with scientists' projects, adding value to both. However, little is known about volunteers' cross-project engagement patterns and the benefits of such patterns for scientists and volunteers. This work proposes a Goal, Question, and Metric (GQM) approach to analyse volunteers' cross-project task execution patterns and employs the Semiotic Inspection Method (SIM) to analyse the communicability of the platforms' cross-project features. In doing so, it investigates which platform features foster volunteers' cross-project engagement, to what extent multi-project platforms facilitate attracting volunteers to perform tasks in new projects, and to what extent multi-project participation increases engagement on the platforms. Results from analyses of real platforms show that volunteers tend to explore multiple projects but perform tasks regularly in just a few of them; a few projects attract most of the volunteers' attention; and volunteers recruited from other projects on the platform tend to become more engaged than those recruited outside the platform. System inspection shows that platforms still lack personalised and explainable recommendations of projects and tasks. The findings are translated into useful claims about how to design and manage multi-project platforms.

    Comment: XVIII Brazilian Symposium on Human Factors in Computing Systems (IHC'19), October 21-25, 2019, Vitória, ES, Brazil
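
    A GQM approach of the kind described above turns questions about cross-project behaviour into concrete metrics computed over task-execution logs. The sketch below is a minimal illustration of that idea, not the paper's actual metrics: it assumes a hypothetical log with volunteer_id, project_id, and task_count columns (all names and data are illustrative) and computes breadth of exploration, regular participation, and concentration of attention.

```python
# Hypothetical sketch of GQM-style cross-project engagement metrics,
# computed from an assumed task-execution log. All names are illustrative.
import pandas as pd

log = pd.DataFrame({
    "volunteer_id": ["v1", "v1", "v1", "v2", "v2", "v3"],
    "project_id":   ["p1", "p2", "p3", "p1", "p2", "p1"],
    "task_count":   [120,   3,    2,    40,   35,   7],
})

# Metric 1: breadth of exploration -- distinct projects each volunteer touched.
projects_explored = log.groupby("volunteer_id")["project_id"].nunique()

# Metric 2: depth of regular participation -- projects holding a substantial
# share of a volunteer's tasks (the 25% threshold is an arbitrary illustration).
share = log.assign(
    share=log["task_count"] / log.groupby("volunteer_id")["task_count"].transform("sum")
)
projects_regular = (
    share[share["share"] >= 0.25].groupby("volunteer_id")["project_id"].nunique()
)

# Metric 3: concentration of attention -- the top project's share of all
# executed tasks, reflecting "few projects attract much attention".
by_project = log.groupby("project_id")["task_count"].sum()
top_project_share = by_project.max() / by_project.sum()

print(projects_explored, projects_regular, top_project_share, sep="\n")
```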

    Understanding the Use of HIT Catchers and Crowd Knowledge Sharing: A Case Study of Crowdworkers on Amazon Mechanical Turk

    Crowdsourcing platforms have become a vital component of the modern digital economy, offering a wide range of HIT (Human Intelligence Task) opportunities to workers worldwide. Meanwhile, crowdworkers' use of scripting tools and their communication with each other are continuously shaping the crowdsourcing ecosystem. This thesis explores crowdworkers' use of HIT catchers and the sharing of skill-based knowledge that drives the popularity of such scripting tools. It reveals that the use of HIT catchers affects the completion speed and HIT-worker diversity of a whole HIT group while depriving other workers of job opportunities, potentially undermining the stability of the platform under the current reputation system, which relies on numbers of approvals and approval rates. A subsequent study explored how work strategies involving HIT catchers, including HIT acceptance, backlog, and completion, affect HIT availability, completion time, and result quality; it also found differences in work behaviour between workers who use HIT catchers and those who do not. Finally, the thesis investigates crowdworkers' skill-based knowledge sharing, which fuels the proliferation of scripting tools such as HIT catchers, with the aim of improving the fairness of work opportunities and mitigating the negative impact of tool use on HIT completion. Using PLS-SEM, we assess the factors influencing knowledge sharing in the domain of skills. The study reveals the significance of high performance expectation, low effort expectation, and enjoyment and satisfaction in motivating crowd skill-based knowledge sharing. Overall, this work provides an in-depth exploration of these two types of collective behaviour, highlighting the important role of tool use and knowledge sharing in shaping the crowdsourcing ecosystem.
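
    PLS-SEM estimates relationships between latent constructs that are each measured by multiple survey items. As a rough illustration of that study design only (not the thesis's actual model, data, or estimation method), the sketch below fabricates survey responses, averages each construct's items into a score, and fits standardised path coefficients with ordinary least squares; a real analysis would use a proper PLS-SEM tool and the measurement and structural model defined in the thesis.

```python
# Illustrative sketch: a crude approximation of PLS-SEM-style path analysis
# using averaged indicator items as construct scores and OLS for the paths.
# Constructs, item counts, and data are all hypothetical/synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of survey respondents

def items(latent, k=3):
    """Generate k Likert-style (1-7) indicator items around a latent score."""
    return np.clip(np.round(latent[:, None] + rng.normal(0, 1, (n, k))), 1, 7)

# Synthetic latent antecedents and the outcome "intention to share knowledge".
perf = rng.normal(4, 1, n)    # performance expectation
effort = rng.normal(4, 1, n)  # (low) effort expectation
joy = rng.normal(4, 1, n)     # enjoyment and satisfaction
share = 0.5 * perf + 0.2 * effort + 0.4 * joy + rng.normal(0, 1, n)

# Stage 1: construct scores = mean of each construct's indicator items.
X = np.column_stack([items(c).mean(axis=1) for c in (perf, effort, joy)])
y = items(share).mean(axis=1)

# Stage 2: standardised path coefficients via OLS on z-scored scores.
Xz = (X - X.mean(0)) / X.std(0)
yz = (y - y.mean()) / y.std()
coefs, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xz]), yz, rcond=None)
print(dict(zip(["intercept", "performance", "effort", "joy"], coefs.round(3))))
```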