    Human Beyond the Machine: Challenges and Opportunities of Microtask Crowdsourcing

    In the 21st century, where automated systems and artificial intelligence are replacing arduous manual labor by supporting data-intensive tasks, many problems still require human intelligence. Over the last decade, by tapping into human intelligence through microtasks, crowdsourcing has found remarkable applications in a wide range of domains. In this article, the authors discuss the growth of crowdsourcing systems since the term was coined by columnist Jeff Howe in 2006. They shed light on the evolution of crowdsourced microtasks in recent times. Next, they discuss a key challenge that hinders the quality of crowdsourced results: the prevalence of malicious behavior. They reflect on crowdsourcing's advantages and disadvantages. Finally, they leave the reader with interesting avenues for future research.

    TRACE: A Stigmergic Crowdsourcing Platform for Intelligence Analysis

    Crowdsourcing has become a frequently adopted approach to solving various tasks, from conducting surveys to designing products. In the field of reasoning support, however, crowdsourcing-related research and applications have not been extensively implemented. Reasoning support is essential in intelligence analysis to help analysts mitigate various cognitive biases, enhance deliberation, and improve report writing. In this paper, we propose a novel approach to designing a crowdsourcing platform that facilitates stigmergic coordination, awareness, and communication for intelligence analysis. We have partly materialized our proposal in the form of a crowdsourcing system that supports intelligence analysis: TRACE (Trackable Reasoning and Analysis for Collaboration and Evaluation). We introduce several stigmergic approaches integrated into TRACE and discuss potential experimentation with these approaches. We also explain the design implications for further development of TRACE and similar crowdsourcing systems to support reasoning.
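    Stigmergic coordination, as described in this abstract, means contributors coordinate indirectly through traces left on shared artifacts rather than through direct messages. The sketch below is a minimal, hypothetical illustration of that general idea; the class and method names are assumptions for illustration and are not taken from the TRACE implementation.

```python
from collections import defaultdict

class StigmergicWorkspace:
    """Toy shared workspace: contributors coordinate indirectly by
    reinforcing 'traces' on hypotheses instead of messaging each other."""

    def __init__(self, decay=0.9):
        self.traces = defaultdict(float)  # hypothesis -> trace strength
        self.decay = decay

    def reinforce(self, hypothesis, weight=1.0):
        """A contributor's work strengthens the trace on a hypothesis."""
        self.traces[hypothesis] += weight

    def tick(self):
        """Traces fade over time, so stale hypotheses lose prominence."""
        for hypothesis in self.traces:
            self.traces[hypothesis] *= self.decay

    def most_salient(self, n=3):
        """Hypotheses with the strongest traces are surfaced to other analysts."""
        return sorted(self.traces.items(), key=lambda kv: kv[1], reverse=True)[:n]

# Example: two analysts independently reinforce overlapping hypotheses,
# and the strongest trace emerges without any direct communication.
ws = StigmergicWorkspace()
ws.reinforce("Hypothesis A: insider threat", 2.0)
ws.reinforce("Hypothesis B: external actor", 1.0)
ws.reinforce("Hypothesis A: insider threat", 1.5)
ws.tick()
print(ws.most_salient())
```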

    How well did I do? The effect of feedback on affective commitment in the context of microwork

    Crowdwork is a relatively new form of platform-mediated and paid online work that creates different types of relationships between all parties involved. This paper focuses on the crowdworker-requester relationship and investigates how the option of receiving feedback affects the affective commitment of microworkers. An online vignette experiment (N = 145) was conducted on a German crowdworking platform. We found that integrating feedback options into the task description positively influences affective commitment toward the requester as well as perceived requester attractiveness.

    CrowdCE: A Collaboration Model for Crowdsourcing Software with Computing Elements

    Today's crowd computing models are mainly used to handle independent tasks, with simplistic collaboration and coordination through business workflows. However, software development processes are complex and both intellectually and organizationally challenging. We present a model for software development that addresses these key challenges and is designed for crowd-based development of social applications. Our model structurally decomposes the overall computing element into atomic machine-based computing elements and human-based computing elements, such that the elements can complement each other independently and socially through the crowd. We evaluate our approach by developing a business application through crowd work and compare our model with traditional software development models. The primary result is that the crowd completed the application well, supporting the model's goal of empowering the crowd.
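    As a rough illustration of the kind of decomposition this abstract describes, the sketch below composes an overall computing element from atomic machine-based and human-based elements. All class and function names here are hypothetical assumptions for illustration, not the CrowdCE model itself.

```python
from abc import ABC, abstractmethod

class ComputingElement(ABC):
    """Abstract unit of work in the decomposition."""
    @abstractmethod
    def execute(self, payload):
        ...

class MachineElement(ComputingElement):
    """Atomic element handled automatically by a machine."""
    def __init__(self, fn):
        self.fn = fn
    def execute(self, payload):
        return self.fn(payload)

class HumanElement(ComputingElement):
    """Atomic element delegated to a crowd worker (stubbed here)."""
    def __init__(self, instructions):
        self.instructions = instructions
    def execute(self, payload):
        # A real platform would post a task and await the worker's answer.
        return f"[crowd answer to: {self.instructions} given {payload!r}]"

class CompositeElement(ComputingElement):
    """Overall element composed of machine and human sub-elements."""
    def __init__(self, *parts):
        self.parts = parts
    def execute(self, payload):
        for part in self.parts:
            payload = part.execute(payload)
        return payload

# Example: a machine step normalizes the input, then a human step labels it.
pipeline = CompositeElement(
    MachineElement(lambda text: text.strip().lower()),
    HumanElement("categorize this requirement"),
)
print(pipeline.execute("  Add a LOGIN screen "))
```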

    An experimental characterization of workers' behavior and accuracy in crowdsourced tasks

    Crowdsourcing systems are evolving into a powerful tool of choice to deal with repetitive or lengthy human-based tasks. Prominent among those is Amazon Mechanical Turk, in which Human Intelligence Tasks (HITs) are posted by requesters and subsequently selected and executed by subscribed (human) workers on the platform. These HITs often serve research purposes. In this context, a very important question is how reliable the results obtained through these platforms are, in view of the limited control a requester has over the workers' actions. Various control techniques have been proposed, but they are not free from shortcomings, and their use must be accompanied by a deeper understanding of the workers' behavior. In this work, we attempt to interpret the workers' behavior and reliability level in the absence of control techniques. To do so, we perform a series of experiments with 600 distinct MTurk workers, specifically designed to elicit each worker's level of dedication to a task according to the task's nature and difficulty. We show that the time required by a worker to carry out a task correlates with its difficulty and also with the quality of the outcome. We find that there are different types of workers: while some are willing to invest a significant amount of time to arrive at the correct answer, we also observe a significant fraction of workers who reply with a wrong answer. For the latter, the difficulty of the task and the very short time they took to reply suggest that they intentionally did not even attempt to solve the task.
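    The kind of analysis reported here, relating time spent on a task to answer quality, can be illustrated in a few lines of code. The snippet below uses purely synthetic, randomly generated records standing in for per-worker logs; it is not the authors' data or analysis script, only a sketch of how one might compare response times of correct and incorrect answers.

```python
from statistics import mean
import random

# Purely synthetic illustration: random numbers standing in for per-worker logs.
# The toy assumption (longer deliberation -> higher chance of a correct answer)
# mimics the pattern reported in the abstract, not the actual dataset.
random.seed(0)
records = []
for _ in range(100):
    seconds = random.uniform(5, 120)                          # time spent on the task
    is_correct = random.random() < min(seconds / 120, 0.95)   # toy assumption
    records.append((seconds, is_correct))

wrong = [s for s, ok in records if not ok]
right = [s for s, ok in records if ok]
print(f"mean time, wrong answers:   {mean(wrong):5.1f}s")
print(f"mean time, correct answers: {mean(right):5.1f}s")
```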

    Current state of crowdsourcing taxonomy research: A systematic review

    In this study, a systematic review was performed to identify the current state of crowdsourcing classification or taxonomy research to date. A total of 23 studies were found, which were categorised into general classification and specific classification, where specific classification was further divided into classification of processes, tasks and crowd. From these studies, a total of 21 attributes used in classifying crowdsourcing initiatives were found, which were categorised into seven themes as a result of constant comparison analysis. The seven themes are crowdsourcer, crowd, task, process, platform, content and reward. Expert evaluation involving five independent researchers in the area was used to validate the themes and the categorisation of the 21 attributes into the seven themes. Evaluation results showed that the independent researchers unanimously agreed on the seven themes and on the assignments made, after slight improvements to the latter.