44 research outputs found

    It's getting crowded! : improving the effectiveness of microtask crowdsourcing

    [no abstract]

    In What Mood Are You Today?

    The mood of individuals in the workplace has been well studied due to its influence on task performance and work engagement. However, the effect of mood has not been studied in detail in the context of microtask crowdsourcing. In this paper, we investigate the influence of one's mood, a fundamental psychosomatic dimension of a worker's behaviour, on their interaction with tasks, task performance, and perceived engagement. To this end, we conducted two comprehensive studies: (i) a survey exploring the perception of crowd workers regarding the role of mood in shaping their work, and (ii) an experimental study to measure and analyze the actual impact of workers' moods in information finding microtasks. We found evidence of the impact of mood on a worker's perceived engagement through the feeling of reward or accomplishment, and we argue as to why the same impact is not perceived in the evaluation of task performance. Our findings have broad implications for the design and workflow of crowdsourcing systems.

    A checklist to combat cognitive biases in crowdsourcing


    Beyond AMT: An Analysis of Crowd Work Platforms

    While many competitor platforms to Amazon’s Mechanical Turk (AMT) now exist, little research has considered them. Such near-exclusive focus on AMT risks its particular vagaries and limitations overly shaping our understanding of crowd work and our field’s research questions and directions. To address this, we present a qualitative content analysis of seven alternative platforms. After organizing prior AMT studies around a set of key problem types encountered, we define our process for inducing categories for qualitative assessment of platforms. We then contrast the key problem types with AMT vs. platform features from content analysis, informing both methodology of use and directions for future research. Our cross-platform analysis represents the only such study by researchers for researchers, intended to enrich diversity of research on crowd work and accelerate progress.

    Revolutionizing Crowdworking Campaigns: Conquering Adverse Selection and Moral Hazard with the Help of Smart Contracts

    Crowdworking is increasingly being applied by companies to outsource tasks beyond their core competencies flexibly and cost-effectively to an unknown group. However, the anonymous and financially incentivized nature of crowdworkers creates information asymmetries and conflicts of interest, leading to inefficiencies and intensifying the principal-agent problem. Our paper offers a solution to the widespread problem of inefficient crowdworking campaigns. We first derive the currently applied crowdworking campaign process based on a qualitative study. Subsequently, we identify the most significant adverse selection and moral hazard problems in the process. We then analyze how the blockchain application of smart contracts can counteract those challenges and develop a process model that maps a crowdworking campaign using smart contracts. We explain how our developed process significantly reduces adverse selection and moral hazard at each stage. Thus, our research provides approaches to make online labor more attractive and transparent for companies and online workers.
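
    As a rough illustration only (the paper describes a process model rather than code), the following Python sketch simulates the escrow idea at the heart of such smart contracts: the requester's reward is locked before any work begins and is released automatically once a submission is accepted, limiting the requester's scope for moral hazard, while the up-front funding signal works against adverse selection. All names, types, and the acceptance rule here are hypothetical assumptions, not the process model from the paper.

    # Hypothetical escrow logic a crowdworking smart contract could enforce;
    # a plain-Python simulation for illustration, not on-chain code.
    from dataclasses import dataclass, field

    @dataclass
    class CampaignContract:
        requester: str
        reward: int                              # payout for an accepted submission
        escrow: int = 0                          # funds locked in the contract
        submissions: dict = field(default_factory=dict)

        def fund(self) -> None:
            # The requester deposits the reward before work starts, so
            # workers know payment cannot be withheld arbitrarily.
            self.escrow = self.reward

        def submit(self, worker: str, work: str) -> None:
            self.submissions[worker] = work

        def settle(self, worker: str, accepted: bool) -> int:
            # Payout is released automatically on acceptance; the requester
            # no longer controls the funds once the campaign is funded.
            if accepted and self.escrow >= self.reward:
                self.escrow -= self.reward
                return self.reward
            return 0

    contract = CampaignContract(requester="acme", reward=100)
    contract.fund()
    contract.submit(worker="w1", work="labelled dataset")
    print(contract.settle("w1", accepted=True))  # -> 100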

    When in doubt ask the crowd : leveraging collective intelligence for improving event detection and machine learning

    [no abstract]

    Design and Evaluation of Crowd-sourcing Platforms Based on Users Confidence Judgments

    Crowd-sourcing deals with solving problems by assigning them to a large number of non-experts, called the crowd, who work in their spare time. In these systems, the final answer to a question is determined by aggregating the votes obtained from the community. The popularity of these systems has grown as mobile phones and the Internet have made community members easier to reach. One of the issues raised in crowd-sourcing is how to choose people and how to collect answers. Usually, users are separated based on their performance in a pre-test. Designing the pre-test for performance calculation is challenging; the pre-test questions should be chosen so that they test the characteristics of people that are relevant to the main questions. One way to increase the accuracy of crowd-sourcing systems is to consider people's cognitive characteristics and decision-making models when forming a crowd and when estimating the accuracy of their answers. People can estimate the correctness of their responses while making a decision; the accuracy of this estimate is determined by a quantity called metacognitive ability. Metacognition refers to the case where the confidence level is considered along with the answer to increase the accuracy of the solution. In this paper, through both mathematical and experimental analysis, we answer the following question: is it possible to improve the performance of a crowd-sourcing system by knowing the metacognition of individuals and by recording and using users' confidence in their answers?
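
    As a purely illustrative sketch of the kind of aggregation at stake (the linear weighting below is an assumption, not the authors' model), the following Python snippet contrasts a plain majority vote with one that weights each vote by the worker's self-reported confidence:

    # Plain majority voting vs. confidence-weighted voting over crowd answers.
    from collections import defaultdict

    def majority_vote(votes):
        """votes: list of (answer, confidence in [0, 1]) pairs."""
        counts = defaultdict(float)
        for answer, _ in votes:
            counts[answer] += 1.0          # every vote counts equally
        return max(counts, key=counts.get)

    def confidence_weighted_vote(votes):
        counts = defaultdict(float)
        for answer, confidence in votes:
            counts[answer] += confidence   # confident workers count for more
        return max(counts, key=counts.get)

    # Two unsure workers say "A"; one highly confident worker says "B".
    votes = [("A", 0.5), ("A", 0.4), ("B", 0.95)]
    print(majority_vote(votes))             # -> A
    print(confidence_weighted_vote(votes))  # -> B

    Whether the weighted rule actually helps depends on how well calibrated each worker's confidence is, which is exactly the metacognitive ability the paper studies.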