
    Crowdsourcing Contests: Understanding the Effect of Environment and Organization Specific Factors on Sustained Participation

    Crowdsourcing has increasingly become a recognized problem-solving mechanism in which organizations outsource a problem to an undefined crowd of people. The success of crowdsourcing depends on individuals' sustained participation and quality submissions. Yet little is known about the environment-specific and organization-specific factors that influence individuals' continued participation in these contests. We address this research gap by conducting an empirical study using data from Kaggle, an online crowdsourcing contest platform that delivers data science and machine learning solutions to its clients. The findings show statistically significant effects of structural capital, familiarity with the organization, and experience with the organization on individuals' sustained participation in crowdsourcing contests. This research contributes to the literature by identifying the environment-specific and organization-specific factors that influence individuals' sustained participation in crowdsourcing contests. Moreover, this study offers guidance to organizations that host crowdsourcing platforms on how to design, implement, and operate successful crowdsourcing contest platforms.
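
    As a rough illustration of the kind of analysis such a study implies (the abstract does not give the model specification, so every variable name below is a hypothetical stand-in, not the authors' data), sustained participation could be modeled as a binary outcome in a logistic regression on the three factors the abstract names:

        # Hypothetical sketch: logistic regression of sustained participation
        # on environment- and organization-specific factors. Column names
        # and the synthetic data are illustrative only.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 500
        df = pd.DataFrame({
            "structural_capital": rng.normal(size=n),       # e.g., forum ties
            "org_familiarity": rng.normal(size=n),          # prior exposure to host
            "org_experience": rng.integers(0, 10, size=n),  # past contests with host
        })
        # Synthetic outcome: 1 if the solver enters another contest, else 0.
        logit = (0.8 * df["structural_capital"] + 0.5 * df["org_familiarity"]
                 + 0.3 * df["org_experience"] - 1.0)
        df["sustained"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

        factors = df[["structural_capital", "org_familiarity", "org_experience"]]
        model = sm.Logit(df["sustained"], sm.add_constant(factors))
        print(model.fit(disp=0).summary())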

    Factors Influencing the Participation of Crowdsourcing Solvers: Benefit or Cost

    Crowdsourcing has become a new channel for companies and organizations to collect the wisdom of crowds and reach business objectives. How to effectively motivate user participation and improve the quality of solutions has become an important issue in crowdsourcing research. While the influence of benefit factors on user participation has been widely tested, understanding of cost factors is still insufficient in the extant literature. Based on social exchange theory, this paper proposes a research model to explain the impacts of benefit and cost factors on solver participation behavior, as well as the moderating role of task complexity in crowdsourcing. The model will be tested using data from an online translation crowdsourcing task in which solvers were invited to participate in the translation and fill out a questionnaire. This paper explores the differences in the factors that affect solvers' participation intention and the quality of solutions. In addition, the role of task complexity can be examined by designing translation tasks of different complexity and randomly assigning solvers to them.
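
    A hedged sketch of how the proposed moderation could be tested once the questionnaire data are collected; the variable names and interaction structure below are assumptions inferred from the abstract, not the authors' actual specification:

        # Hypothetical sketch of the moderation test implied by the abstract:
        # participation intention regressed on benefit and cost factors, with
        # task complexity as a moderator. All names and data are illustrative.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n = 300
        df = pd.DataFrame({
            "benefit": rng.normal(size=n),          # e.g., perceived reward
            "cost": rng.normal(size=n),             # e.g., perceived effort
            "complexity": rng.integers(0, 2, n),    # 0 = simple, 1 = complex task
        })
        df["intention"] = (0.6 * df["benefit"] - 0.4 * df["cost"]
                           - 0.3 * df["cost"] * df["complexity"]
                           + rng.normal(size=n))

        # Interaction terms capture whether complexity moderates each effect.
        fit = smf.ols("intention ~ benefit * complexity + cost * complexity",
                      data=df).fit()
        print(fit.params)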

    What Sustains Individuals’ Participation in Crowdsourcing Contests?

    Crowdsourcing contests have become widely adopted for idea generation and problem-solving by companies in different industries. The success of crowdsourcing depends on individuals' sustained participation and quality submissions. Yet little is known about the factors that influence individuals' continued participation in these contests. We address this issue by conducting an empirical study using data from Kaggle, an online crowdsourcing contest platform that delivers data science and machine learning solutions and models to its clients. The findings show that community activities and team activities do not contribute to motivating continued participation, but tenure does significantly affect it. We also found statistically significant effects of prize amount, number of competitions, previous team performance, and competition duration on individuals' sustained participation in crowdsourcing contests. This research contributes to the literature by identifying the factors influencing individuals' sustained participation in crowdsourcing contests.
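
    Studies like this one need an operational definition of "sustained participation" derived from contest logs. A minimal sketch of one plausible construction (the column names, dates, and 180-day window are invented for illustration; the paper's actual definition is not given in the abstract):

        # Hypothetical sketch: deriving a "sustained participation" label from
        # contest entry logs like those a platform such as Kaggle might expose.
        import pandas as pd

        entries = pd.DataFrame({
            "user": ["a", "a", "b", "c", "c", "c"],
            "contest_end": pd.to_datetime([
                "2020-01-15", "2020-06-01", "2020-02-10",
                "2020-03-05", "2020-04-20", "2020-09-30"]),
        })
        entries = entries.sort_values(["user", "contest_end"])

        # Next contest date per user; NaT if the user never returns.
        next_entry = entries.groupby("user")["contest_end"].shift(-1)
        # Label participation "sustained" if the user re-enters within 180 days.
        entries["sustained"] = (next_entry - entries["contest_end"]).dt.days <= 180
        print(entries)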

    Creation of Reliable Relevance Judgments in Information Retrieval Systems Evaluation Experimentation through Crowdsourcing: A Review

    Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experiments. In the classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still challenged to perform reliable and low-cost evaluations of retrieval systems. Crowdsourcing, as a novel method of data acquisition, is broadly used in many research fields. It has been shown that crowdsourcing is an inexpensive and quick solution, as well as a reliable alternative, for creating relevance judgments. One application of crowdsourcing in IR is judging the relevance of query-document pairs. For a crowdsourcing experiment to succeed, the relevance judgment tasks should be designed carefully, with an emphasis on quality control. This paper explores the factors that influence the accuracy of relevance judgments produced by workers and how to strengthen the reliability of judgments in crowdsourcing experiments.
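
    One standard quality-control device in this line of work is collecting redundant judgments per query-document pair and aggregating them. A minimal sketch of majority voting with a simple agreement signal (the pairs and votes below are invented for illustration):

        # Minimal sketch: majority-vote aggregation of crowdsourced relevance
        # judgments for query-document pairs, plus per-pair agreement as a
        # rough reliability signal. Judgments are invented for illustration.
        from collections import Counter

        # worker judgments: (query_id, doc_id) -> list of 0/1 relevance votes
        judgments = {
            ("q1", "d1"): [1, 1, 0],
            ("q1", "d2"): [0, 0, 0],
            ("q2", "d7"): [1, 0, 1],
        }

        for pair, votes in judgments.items():
            label, count = Counter(votes).most_common(1)[0]
            agreement = count / len(votes)
            print(pair, "->", label, f"(agreement {agreement:.0%})")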

    User Motivation in Using the Crowdsourcing Method for Software Development

    Methods for software development have proliferated in recent years; with advancing technology and growing market demand, the crowdsourcing method has developed and gained considerable popularity. Unlike outsourcing, crowdsourcing relies primarily on the power of the crowd as its main means of production. However, since crowdsourcing has become a new force in software development, and differs substantially from traditional software engineering processes such as the Software Development Life Cycle and the Waterfall Model, the quality of the resulting software has been questioned. A major issue in crowdsourcing is how to attract the right crowd and sustain its participation in development, and previous studies argue that motivation is the key to success when the crowdsourcing method is used to produce a product. This study therefore proposes a model that combines two main theories, self-determination theory and the IS Success Model, to investigate what motivates users to join crowdsourced software development and to better understand the relationship between usage intensity and user satisfaction in this setting.

    Worker Retention, Response Quality, and Diversity in Microtask Crowdsourcing: An Experimental Investigation of the Potential for Priming Effects to Promote Project Goals

    Online microtask crowdsourcing platforms act as efficient resources for delegating small units of work, gathering data, generating ideas, and more. Members of the research and business communities have incorporated crowdsourcing into their problem-solving processes. When human workers contribute to a crowdsourcing task, they are subject to various stimuli as a result of task design. Inter-task priming effects, through which work is nonconsciously yet significantly influenced by exposure to certain stimuli, have been shown to affect microtask crowdsourcing responses in a variety of ways. Instead of simply being wary of the potential for priming effects to skew results, task administrators can use proven priming procedures to promote project goals. In a series of three experiments conducted on Amazon's Mechanical Turk, we investigated the effects of proposed priming treatments on worker retention, response quality, and response diversity. In our first two experiments, we studied the effect of initial response freedom on sustained worker participation and response quality. We expected that workers who were granted greater freedom in an initial response would be stimulated to complete more work and deliver higher-quality work than workers constrained in their initial response possibilities. We found no significant relationship between the initial response freedom granted to workers and the amount of optional work they completed. The degree of initial response freedom also did not have a significant impact on subsequent response quality. However, the influence of inter-task effects was evident in response tendencies for different question types. We found evidence that consistency in task structure may play a stronger role in promoting response quality than the proposed priming procedures. In our final experiment, we studied the influence of a group-level priming treatment on response diversity. Instead of varying task structure for different workers, we varied the degree of overlap in the question content distributed to different workers in a group. We expected groups of workers exposed to more diverse preliminary question sets to offer greater diversity in response to a subsequent question. Although differences in response diversity were revealed, no consistent trend between question content overlap and response diversity emerged. Nevertheless, combining consistent task structure with crowd-level priming procedures, to encourage diversity in inter-task effects across the crowd, offers an exciting path for future study.
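
    The abstract does not say how response diversity was quantified; one plausible measure, sketched here purely for illustration, is the Shannon entropy of the distribution of distinct responses within a worker group:

        # Illustrative sketch: measuring response diversity within a worker
        # group as Shannon entropy over distinct responses. This is one
        # plausible metric, not necessarily the study's actual measure.
        import math
        from collections import Counter

        def response_diversity(responses):
            counts = Counter(responses)
            total = len(responses)
            return -sum((c / total) * math.log2(c / total)
                        for c in counts.values())

        varied_group = ["cat", "dog", "bird", "fish"]   # maximally diverse
        primed_group = ["cat", "cat", "cat", "dog"]     # converged answers
        print(response_diversity(varied_group))   # 2.0 bits
        print(response_diversity(primed_group))   # ~0.81 bits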

    Accurate and budget-efficient text, image, and video analysis systems powered by the crowd

    Crowdsourcing systems empower individuals and companies to outsource labor-intensive tasks that cannot currently be solved by automated methods and are expensive to tackle with domain experts. Crowdsourcing platforms are traditionally used to provide training labels for supervised machine learning algorithms. Crowdsourced tasks are distributed among internet workers who typically have a range of skills and knowledge, differing previous exposure to the task at hand, and biases that may influence their work. This inhomogeneity of the workforce makes the design of accurate and efficient crowdsourcing systems challenging. This dissertation presents solutions to improve existing crowdsourcing systems in terms of accuracy and efficiency. It explores crowdsourcing tasks in two application areas: political discourse and the annotation of biomedical and everyday images. The first part of the dissertation investigates how workers' behavioral factors and their unfamiliarity with data can be leveraged by crowdsourcing systems to control quality. Through studies that involve familiar and unfamiliar image content, the thesis demonstrates the benefit of explicitly accounting for a worker's familiarity with the data when designing annotation systems powered by the crowd. The thesis next presents Crowd-O-Meter, a system that automatically predicts the vulnerability of crowd workers to believing "fake news" in text and video. The second part of the dissertation explores the reversed relationship between machine learning and crowdsourcing by incorporating machine learning techniques for quality control of crowdsourced end products. In particular, it investigates whether machine learning can be used to improve the quality of crowdsourced results while also considering budget constraints. The thesis proposes an image analysis system called ICORD that uses behavioral cues of the crowd worker, augmented by automated evaluation of image features, to dynamically infer the quality of a worker-drawn outline of a cell in a microscope image. ICORD determines the need to seek additional annotations from other workers in a budget-efficient manner. Next, the thesis proposes a budget-efficient machine learning system that uses fewer workers to analyze easy-to-label data and more workers for data that require extra scrutiny. The system learns a mapping from data features to the number of allocated crowd workers for two case studies: sentiment analysis of Twitter messages and segmentation of biomedical images. Finally, the thesis uncovers the potential for designing hybrid crowd-algorithm methods by describing an interactive system for cell tracking in time-lapse microscopy videos, based on a prediction model that determines when automated cell tracking algorithms fail and human interaction is needed to ensure accurate tracking.
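
    A hedged sketch of the allocation idea described above, i.e., learning a mapping from item features to a worker budget. The features, classifier, threshold, and budgets are assumptions made for illustration, not details taken from the dissertation:

        # Illustrative sketch of budget-efficient worker allocation: predict
        # item difficulty from features, then assign fewer workers to easy
        # items and more to hard ones. All choices here are assumptions.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        X = rng.normal(size=(200, 3))                 # item features (e.g., length, blur)
        hard = (X[:, 0] + rng.normal(size=200)) > 0   # synthetic difficulty labels

        clf = LogisticRegression().fit(X, hard)

        def workers_for(item_features, easy_budget=1, hard_budget=5):
            # Allocate the larger budget when the item looks difficult.
            p_hard = clf.predict_proba(item_features.reshape(1, -1))[0, 1]
            return hard_budget if p_hard > 0.5 else easy_budget

        print(workers_for(np.array([1.2, 0.0, -0.3])))   # likely hard -> 5 workers
        print(workers_for(np.array([-1.5, 0.2, 0.1])))   # likely easy -> 1 worker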

    Competing or aiming to be average?: Normification as a means of engaging digital volunteers

    Engagement, motivation, and active contribution by digital volunteers are key requirements for crowdsourcing and citizen science projects. Many systems use competitive elements, such as point scoring and leaderboards, to achieve these ends. However, while competition may motivate some people, it can have a neutral or demotivating effect on others. In this paper we explore theories of personal and social norms and investigate normification as an alternative approach to engagement, to be used alongside or instead of competitive strategies. We provide a systematic review of the existing crowdsourcing and citizen science literature and categorise the ways that theories of norms have been incorporated to date. We then present qualitative interview data from a pro-environmental crowdsourcing study, Close the Door, which reveals normalising attitudes in certain participants. We assess how this links with competitive behaviour and participant performance. Based on our findings and analysis of norm theories, we consider the implications for designers wishing to use normification as an engagement strategy in crowdsourcing and citizen science systems.