29 research outputs found

    Worker Retention, Response Quality, and Diversity in Microtask Crowdsourcing: An Experimental Investigation of the Potential for Priming Effects to Promote Project Goals

    Online microtask crowdsourcing platforms act as efficient resources for delegating small units of work, gathering data, generating ideas, and more. Members of research and business communities have incorporated crowdsourcing into problem-solving processes. When human workers contribute to a crowdsourcing task, they are subject to various stimuli as a result of task design. Inter-task priming effects - through which work is nonconsciously, yet significantly, influenced by exposure to certain stimuli - have been shown to affect microtask crowdsourcing responses in a variety of ways. Rather than simply being wary of the potential for priming effects to skew results, task administrators can use proven priming procedures to promote project goals. In a series of three experiments conducted on Amazon's Mechanical Turk, we investigated the effects of proposed priming treatments on worker retention, response quality, and response diversity. In our first two experiments, we studied the effect of initial response freedom on sustained worker participation and response quality. We expected that workers granted greater freedom in an initial response would be stimulated to complete more work and deliver higher-quality work than workers constrained in their initial response possibilities. We found no significant relationship between the initial response freedom granted to workers and the amount of optional work they completed. The degree of initial response freedom also had no significant impact on subsequent response quality. However, the influence of inter-task effects was evident in response tendencies for different question types. We found evidence that consistency in task structure may play a stronger role in promoting response quality than the proposed priming procedures. In our final experiment, we studied the influence of a group-level priming treatment on response diversity.
Instead of varying task structure for different workers, we varied the degree of overlap in question content distributed to different workers in a group. We expected groups of workers exposed to more diverse preliminary question sets to offer greater diversity in response to a subsequent question. Although differences in response diversity were revealed, no consistent trend between question content overlap and response diversity emerged. Nevertheless, combining consistent task structure with crowd-level priming procedures - to encourage diversity in inter-task effects across the crowd - offers an exciting path for future study.

    It's getting crowded! : improving the effectiveness of microtask crowdsourcing

    [no abstract available]

    A Capability Requirements Approach for Predicting Worker Performance in Crowdsourcing

    Assigning heterogeneous tasks to workers is an important challenge of crowdsourcing platforms. Current approaches to task assignment have primarily focused on content-based approaches, qualifications, or work history. We propose an alternative and complementary approach that focuses on the capabilities workers employ to perform tasks. First, we model various tasks according to the human capabilities required to perform them. Second, we capture the capability traces of crowd workers' performance on existing tasks. Third, with the help of these capability traces, we predict the performance of workers on new tasks to make task routing decisions. We evaluate the effectiveness of our approach on three different tasks: fact verification, image comparison, and information extraction. The results demonstrate that we can predict workers' performance based on worker capabilities. We also highlight limitations and extensions of the proposed approach. Keywords: microtask, taxonomy, crowdsourcing, performance
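    The three-step approach described above (model each task by its required capabilities, capture per-worker capability traces, and predict performance to make routing decisions) can be sketched roughly as follows. The capability dimensions, task weights, and scoring rule here are illustrative assumptions, not the paper's actual taxonomy or prediction model.

```python
import numpy as np

# Hypothetical capability dimensions; the paper's taxonomy may differ.
CAPABILITIES = ["verbal", "visual", "numerical"]

# Step 1: model tasks by the capabilities they require (illustrative weights).
TASK_REQUIREMENTS = {
    "fact_verification":      np.array([0.8, 0.1, 0.1]),
    "image_comparison":       np.array([0.1, 0.8, 0.1]),
    "information_extraction": np.array([0.5, 0.2, 0.3]),
}

def capability_trace(history):
    """Step 2: estimate a worker's per-capability skill from past accuracy.

    history: list of (task_type, accuracy) pairs.
    """
    weighted = np.zeros(len(CAPABILITIES))
    totals = np.zeros(len(CAPABILITIES))
    for task_type, accuracy in history:
        req = TASK_REQUIREMENTS[task_type]
        weighted += req * accuracy   # credit accuracy to the capabilities used
        totals += req
    return weighted / np.maximum(totals, 1e-9)

def predict(trace, task_type):
    """Step 3: predicted accuracy = requirement-weighted average of the trace."""
    req = TASK_REQUIREMENTS[task_type]
    return float(trace @ req / req.sum())

def route(workers, task_type):
    """Route a new task to the worker with the highest predicted performance.

    workers: dict mapping worker_id -> history.
    """
    return max(workers, key=lambda w: predict(capability_trace(workers[w]), task_type))
```

    Given two workers whose histories show complementary strengths, `route` would send a fact-verification task to the verbally stronger worker and an image-comparison task to the visually stronger one.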

    In What Mood Are You Today?

    The mood of individuals in the workplace has been well studied due to its influence on task performance and work engagement. However, the effect of mood has not been studied in detail in the context of microtask crowdsourcing. In this paper, we investigate the influence of one's mood, a fundamental psychosomatic dimension of a worker's behaviour, on their interaction with tasks, task performance, and perceived engagement. To this end, we conducted two comprehensive studies: (i) a survey exploring the perception of crowd workers regarding the role of mood in shaping their work, and (ii) an experimental study to measure and analyze the actual impact of workers' moods in information finding microtasks. We found evidence of the impact of mood on a worker's perceived engagement through the feeling of reward or accomplishment, and we argue as to why the same impact is not perceived in the evaluation of task performance. Our findings have broad implications for the design and workflow of crowdsourcing systems.

    Novel Methods for Designing Tasks in Crowdsourcing

    Crowdsourcing is becoming more popular as a means for scalable data processing that requires human intelligence. Involving groups of people to accomplish tasks can be an effective success factor for data-driven businesses. Unlike in other technical systems, the quality of the results depends on human factors and on how well crowd workers understand the requirements of the task. Looking at previous studies in this area, we found that one of the main factors affecting workers' performance is the design of the crowdsourcing tasks. Previous studies of crowdsourcing task design covered a limited set of factors. The main contribution of this research is its focus on some of the less-studied technical factors, such as examining the effect of task ordering and class balance and measuring the consistency of the same task design over time and across different crowdsourcing platforms. Furthermore, this study extends work towards understanding workers' point of view on task quality and payment by performing a qualitative study with crowd workers and shedding light on some of the ethical issues around payment for crowdsourcing tasks. To achieve our goal, we performed several crowdsourcing experiments on specific platforms and measured the factors that influenced the quality of the overall result.

    Systems for Managing Work-Related Transitions

    People's work lives have become increasingly populated with transitions across tasks, devices, and environments. Despite their ubiquitous nature, managing transitions across these three domains has remained a significant challenge. Current systems and interfaces for managing transitions have explored approaches that allow users to track work-related information or automatically capture or infer context, but do little to support user autonomy at its fullest. In this dissertation, we present three studies that support the goal of designing and understanding systems for managing work-related transitions. Our inquiry is motivated by the notion that people lack the ability to continue or discontinue their work at the level they wish to. We scope our research to information work settings, and we use our three studies to generate novel insights about how empowering people's ability to engage with their work can mitigate the challenges of managing work-related transitions. We first introduce and study Mercury, a system that mitigates programmers' challenges in transitioning across devices and environments by enabling them to continue work on-the-go. Mercury orchestrates programmers' work practices by providing them with a series of auto-generated microtasks on their mobile device based on the current state of their source code. Tasks in Mercury are designed so that they can be completed quickly without the need for additional context, making them suitable to address during brief moments of downtime. When users complete microtasks on-the-go, Mercury calculates file changes and integrates them into the user's codebase to support task resumption. We then introduce SwitchBot, a conversational system that mitigates the challenges of discontinuing work during the transition between home and the workplace.
SwitchBot's design philosophy is centered on assisting information workers in detaching from and reattaching with their work through brief conversations before the start and end of the workday. By design, SwitchBot's detachment and reattachment dialogues inquire about users' task-related or emotion-related goals. We evaluated SwitchBot with an emphasis on understanding how the system and its two dialogues uniquely affected information workers' ability to detach from and later reattach with their work. Following our studies of Mercury and SwitchBot, we present findings from an interview study with crowdworkers aimed at understanding the work-related transitions they experience in their work practice from the perspective of tools. We characterize the tooling observed in crowdworkers' work practices and identify three types of "fragmentation" that are motivated by tooling in the practice. Our study highlights several distinctions between traditional and contemporary information work settings and lays a foundation for future systems that aid next-generation information workers in managing work-related transitions. We conclude by outlining this dissertation's contributions and future research directions.

    Designing for quality in real-world mobile crowdsourcing systems

    Crowdsourcing has emerged as a popular means to collect and analyse data at scale for problems that require human intelligence to resolve. Its prompt response and low cost have made it attractive to businesses and academic institutions. In response, various online crowdsourcing platforms, such as Amazon MTurk, Figure Eight and Prolific, have successfully emerged to facilitate the entire crowdsourcing process. However, the quality of results has been a major concern in the crowdsourcing literature. Previous work has identified various key factors that contribute to issues of quality and need to be addressed in order to produce high-quality results. Crowd task design, in particular, is a major factor that impacts the efficiency and effectiveness of crowd workers as well as the entire crowdsourcing process. This research investigates crowdsourcing task designs to collect and analyse two distinct types of data, and examines the value of creating high-quality crowdwork activities in new crowdsource-enabled systems for end-users. The main contributions of this research include 1) a set of guidelines for designing crowdsourcing tasks that support quality collection, analysis and translation of speech and eye-tracking data in real-world scenarios; and 2) crowdsourcing applications that capture real-world data and coordinate the entire crowdsourcing process to analyse and feed quality results back. Furthermore, this research proposes a new quality control method based on worker trust and self-verification. To achieve this, the research follows a case study approach with a focus on two real-world data collection and analysis case studies. The first case study, Speeching, explores real-world speech data collection, analysis, and feedback for people with speech disorders, particularly Parkinson's.
The second case study, CrowdEyes, examines the development and use of a hybrid system combining crowdsourcing and low-cost DIY mobile eye trackers for real-world visual data collection, analysis, and feedback. Both case studies established the capability of crowdsourcing to obtain high-quality responses comparable to those of an expert. The Speeching app, and its provision of feedback in particular, was well received by participants, which opens up new opportunities in digital health and wellbeing. In addition, the proposed crowd-powered eye tracker is fully functional under real-world settings. The results showed how this approach outperforms all current state-of-the-art algorithms under all conditions, which opens up the technology for a wide variety of eye-tracking applications in real-world settings.

    Efficient crowdsourcing of unknown experts using bounded multi-armed bandits

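    No abstract is included for this entry. As a loose illustration of the bounded (budget-limited) multi-armed bandit setting the title refers to, in which workers of unknown quality must be hired under a fixed budget, the following is a minimal epsilon-first sketch: spend a fraction of the budget exploring workers uniformly, then exploit the worker with the best observed reward per unit cost. All names, the cost model, and the exploration split are our own assumptions, not details taken from the paper.

```python
import random

def crowdsource_with_bandit(costs, reward_fns, budget, explore_frac=0.2, seed=0):
    """Epsilon-first budget-limited bandit over a pool of workers (arms).

    costs: per-hire cost of each worker.
    reward_fns: callables producing the observed quality of one completed task.
    Returns the total reward accumulated within the budget.
    """
    rng = random.Random(seed)
    n = len(costs)
    reward_sum = [0.0] * n
    pulls = [0] * n
    spent = total = 0.0

    def pull(i):
        nonlocal spent, total
        r = reward_fns[i](rng)
        spent += costs[i]
        total += r
        reward_sum[i] += r
        pulls[i] += 1

    # Exploration phase: hire workers round-robin with explore_frac of the budget.
    i = 0
    while spent + costs[i % n] <= explore_frac * budget:
        pull(i % n)
        i += 1

    # Exploitation phase: repeatedly hire the affordable worker with the best
    # estimated reward per unit cost until the budget is exhausted.
    while True:
        best = max(
            (j for j in range(n) if pulls[j] and spent + costs[j] <= budget),
            key=lambda j: (reward_sum[j] / pulls[j]) / costs[j],
            default=None,
        )
        if best is None:
            break
        pull(best)
    return total
```

    With two equally priced workers of quality 1.0 and 0.1 and a budget of 20, the sketch spends 4 units exploring both and the remaining 16 on the better worker.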