
    Competing or aiming to be average?: Normification as a means of engaging digital volunteers

    Engagement, motivation and active contribution by digital volunteers are key requirements for crowdsourcing and citizen science projects. Many systems use competitive elements, for example point scoring and leaderboards, to achieve these ends. However, while competition may motivate some people, it can have a neutral or demotivating effect on others. In this paper, we explore theories of personal and social norms and investigate normification as an alternative approach to engagement, to be used alongside or instead of competitive strategies. We provide a systematic review of existing crowdsourcing and citizen science literature and categorise the ways that theories of norms have been incorporated to date. We then present qualitative interview data from a pro-environmental crowdsourcing study, Close the Door, which reveals normalising attitudes in certain participants. We assess how this links with competitive behaviour and participant performance. Based on our findings and analysis of norm theories, we consider the implications for designers wishing to use normification as an engagement strategy in crowdsourcing and citizen science systems.

    Crowdsourcing Without a Crowd: Reliable Online Species Identification Using Bayesian Models to Minimize Crowd Size

    We present an incremental Bayesian model that resolves key issues of crowd size and data quality for consensus labeling. We evaluate our method using data collected from a real-world citizen science program, BeeWatch, which invites members of the public in the United Kingdom to classify (label) photographs of bumblebees as one of 22 possible species. The biological recording domain poses two key and hitherto unaddressed challenges for consensus models of crowdsourcing: (1) the large number of potential species makes classification difficult, and (2) this is compounded by limited crowd availability, stemming from both the inherent difficulty of the task and the lack of relevant skills among the general public. We demonstrate that consensus labels can be reliably found in such circumstances with very small crowd sizes of around three to five users (i.e., through group sourcing). Our incremental Bayesian model, which minimizes crowd size by re-evaluating the quality of the consensus label following each species identification solicited from the crowd, is competitive with a Bayesian approach that uses a larger but fixed crowd size and outperforms majority voting. These results have important ecological applicability: biological recording programs such as BeeWatch can sustain themselves when resources such as taxonomic experts to confirm identifications by photo submitters are scarce (as is typically the case), and feedback can be provided to submitters in a timely fashion. More generally, our model provides benefits to any crowdsourced consensus labeling task where there is a cost (financial or otherwise) associated with soliciting a label.
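
    The abstract does not give the model's exact form; the sketch below is a simplified illustration of incremental consensus labeling, assuming each volunteer identifies the correct species with a fixed probability and errs uniformly otherwise, with an illustrative 0.9 posterior threshold as the stopping rule (the function name, accuracy value and threshold are assumptions, not taken from the paper).

    import numpy as np

    def consensus_label(votes, n_classes=22, accuracy=0.7, threshold=0.9):
        """Solicit labels one at a time; stop as soon as the posterior over the
        true class is confident, keeping the crowd as small as possible."""
        log_post = np.full(n_classes, -np.log(n_classes))    # uniform prior
        wrong = (1.0 - accuracy) / (n_classes - 1)           # prob. of each wrong label
        used = 0
        for v in votes:
            used += 1
            likelihood = np.full(n_classes, wrong)
            likelihood[v] = accuracy                         # this vote's likelihood
            log_post += np.log(likelihood)
            post = np.exp(log_post - log_post.max())
            post /= post.sum()
            top = int(np.argmax(post))
            if post[top] >= threshold:
                return top, used                             # confident consensus reached
        return None, used                                    # no consensus; escalate to an expert

    # Under these assumptions, two agreeing votes on species 7 already push the
    # leading class past 0.9, so only two crowd members are needed.
    print(consensus_label([7, 7, 7]))

    Re-evaluating the posterior after every vote is what keeps the expected crowd size small: unanimous early votes end the task quickly, while disagreement automatically draws in more labellers.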

    Gravity Spy: Integrating Advanced LIGO Detector Characterization, Machine Learning, and Citizen Science

    (abridged for arXiv) With the first direct detection of gravitational waves, the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) has initiated a new field of astronomy by providing an alternate means of sensing the universe. The extreme sensitivity required to make such detections is achieved through exquisite isolation of all sensitive components of LIGO from non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to a variety of instrumental and environmental sources of noise that contaminate the data. Of particular concern are noise features known as glitches, which are transient and non-Gaussian in nature and occur at a high enough rate that accidental coincidence between the two LIGO detectors is non-negligible. In this paper, we describe an innovative project that combines crowdsourcing with machine learning to aid in the challenging task of categorizing all of the glitches recorded by the LIGO detectors. Through the Zooniverse platform, we engage and recruit volunteers from the public to categorize images of glitches into pre-identified morphological classes and to discover new classes that appear as the detectors evolve. In addition, machine learning algorithms are used to categorize images after being trained on human-classified examples of the morphological classes. Leveraging the strengths of both classification methods, we create a combined method with the aim of improving the efficiency and accuracy of each individual classifier. The resulting classification and characterization should help LIGO scientists to identify causes of glitches and subsequently eliminate them from the data or the detector entirely, thereby improving the rate and accuracy of gravitational-wave observations. We demonstrate these methods using a small subset of data from LIGO's first observing run.
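
    The abstract does not detail how the crowd and machine-learning classifications are fused; as a hedged illustration only, the sketch below combines the two sources' class-probability vectors with a weighted product of experts (the function name, weights and combination rule are assumptions, not the paper's method).

    import numpy as np

    def combine_classifiers(crowd_probs, ml_probs, crowd_weight=1.0, ml_weight=1.0):
        """Fuse crowd-derived and ML class probabilities for a single glitch.
        Both inputs are probability vectors over the same morphological classes;
        the weights let the more reliable source dominate."""
        crowd_probs = np.asarray(crowd_probs, dtype=float)
        ml_probs = np.asarray(ml_probs, dtype=float)
        combined = (crowd_probs ** crowd_weight) * (ml_probs ** ml_weight)
        return combined / combined.sum()

    # Volunteers lean toward class 0 and the ML model toward class 1; the fused
    # distribution reflects both sources, with class 1 coming out ahead here.
    print(combine_classifiers([0.6, 0.3, 0.1], [0.2, 0.7, 0.1]))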

    A Review on the Applications of Crowdsourcing in Human Pathology

    The advent of digital pathology has introduced new avenues of diagnostic medicine. Among them, crowdsourcing has attracted researchers' attention in recent years, allowing them to engage thousands of untrained individuals in research and diagnosis. While several articles exist in this area, prior works have not documented them collectively. We therefore aim to review the applications of crowdsourcing in human pathology in a semi-systematic manner. We first introduce a novel method for conducting a systematic search of the literature. Utilizing this method, we then collect hundreds of articles and screen them against a pre-defined set of criteria. Furthermore, we crowdsource part of the screening process to examine another potential application of crowdsourcing. Finally, we review the selected articles and characterize the prior uses of crowdsourcing in pathology.