    In What Mood Are You Today?

    The mood of individuals in the workplace has been well studied due to its influence on task performance and work engagement. However, the effect of mood has not been studied in detail in the context of microtask crowdsourcing. In this paper, we investigate the influence of one's mood, a fundamental psychosomatic dimension of a worker's behaviour, on their interaction with tasks, task performance and perceived engagement. To this end, we conducted two comprehensive studies: (i) a survey exploring the perception of crowd workers regarding the role of mood in shaping their work, and (ii) an experimental study to measure and analyze the actual impact of workers' moods in information finding microtasks. We found evidence of the impact of mood on a worker's perceived engagement through the feeling of reward or accomplishment, and we argue why the same impact is not perceived in the evaluation of task performance. Our findings have broad implications for the design and workflow of crowdsourcing systems.

    Crowdsourcing for translational research: analysis of biomarker expression using cancer microarrays

    Background: Academic pathology suffers from an acute and growing lack of workforce resource. This especially impacts on translational elements of clinical trials, which can require detailed analysis of thousands of tissue samples. We tested whether crowdsourcing – enlisting help from the public – is a sufficiently accurate method to score such samples. Methods: We developed a novel online interface to train and test lay participants on cancer detection and immunohistochemistry scoring in tissue microarrays. Lay participants initially performed cancer detection on lung cancer images stained for CD8, and we measured how extending a basic tutorial by annotated example images and feedback-based training affected cancer detection accuracy. We then applied this tutorial to additional cancer types and immunohistochemistry markers – bladder/ki67, lung/EGFR, and oesophageal/CD8 – to establish accuracy compared with experts. Using this optimised tutorial, we then tested lay participants' accuracy on immunohistochemistry scoring of lung/EGFR and bladder/p53 samples. Results: We observed that for cancer detection, annotated example images and feedback-based training both improved accuracy compared with a basic tutorial only. Using this optimised tutorial, we demonstrate highly accurate (>0.90 area under curve) detection of cancer in samples stained with nuclear, cytoplasmic and membrane cell markers. We also observed high Spearman correlations between lay participants and experts for immunohistochemistry scoring (0.91 (0.78, 0.96) and 0.97 (0.91, 0.99) for lung/EGFR and bladder/p53 samples, respectively). Conclusions: These results establish crowdsourcing as a promising method to screen large data sets for biomarkers in cancer pathology research across a range of cancers and immunohistochemical stains.
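
    A small illustration of the kind of agreement analysis described above: several lay scores per tissue core are aggregated (here by median) and compared against a single expert score per core with a Spearman correlation. The data and the median aggregation rule are assumptions made for this sketch, not the study's actual pipeline.

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical immunohistochemistry scores: a handful of lay scores per
        # tissue core, plus one expert score per core for comparison.
        lay_scores = {
            "core_01": [2, 3, 2, 2],
            "core_02": [0, 1, 0, 0],
            "core_03": [3, 3, 2, 3],
            "core_04": [1, 1, 2, 1],
        }
        expert_scores = {"core_01": 2, "core_02": 0, "core_03": 3, "core_04": 1}

        cores = sorted(lay_scores)
        crowd = [np.median(lay_scores[c]) for c in cores]   # aggregate the crowd by median
        expert = [expert_scores[c] for c in cores]

        rho, p = spearmanr(crowd, expert)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")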

    Diminished Control in Crowdsourcing: An Investigation of Crowdworker Multitasking Behavior

    Obtaining high-quality data from crowds can be difficult if contributors do not give tasks sufficient attention. Attention checks are often used to mitigate this problem, but, because the roots of inattention are poorly understood, checks often compel attentive contributors to complete unnecessary work. We investigated a potential source of inattentiveness during crowdwork: multitasking. We found that workers switched to other tasks every five minutes, on average. There were indications that increased switch frequency negatively affected performance. To address this, we tested an intervention that encouraged workers to stay focused on our task after multitasking was detected. We found that our intervention reduced the frequency of task-switching. It also improved on existing attention checks because it did not place additional demands on workers who were already focused. Our approach shows that crowds can help to overcome some of the limitations of laboratory studies by affording access to naturalistic multitasking behavior.
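
    As a rough sketch of how switch frequency might be estimated, the snippet below derives the number of switches away from a task and the mean time between them from a client-side log of window-focus changes. The event format is an assumption for illustration, not the instrumentation used in the study.

        from datetime import datetime, timedelta

        def switch_stats(events):
            """Summarise task-switching from a log of (timestamp, focused) pairs,
            where focused=False means the worker switched away from the task window."""
            away = [t for t, focused in events if not focused]
            if len(away) < 2:
                return len(away), None
            gaps = [b - a for a, b in zip(away, away[1:])]
            mean_gap = sum(gaps, timedelta()) / len(gaps)  # average time between switches
            return len(away), mean_gap

        # Hypothetical log: the worker left the task twice within ten minutes.
        log = [
            (datetime(2024, 1, 1, 9, 0), True),
            (datetime(2024, 1, 1, 9, 4), False),
            (datetime(2024, 1, 1, 9, 6), True),
            (datetime(2024, 1, 1, 9, 9), False),
        ]
        print(switch_stats(log))  # -> (2, datetime.timedelta(seconds=300))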

    Home is Where the Lab is: A Comparison of Online and Lab Data From a Time-sensitive Study of Interruption

    While experiments have been run online for some time with positive results, there are still outstanding questions about the kinds of tasks that can be successfully deployed to remotely situated online participants. Some tasks, such as menu selection, have worked well, but these do not represent the gamut of tasks that interest HCI researchers. In particular, we wondered whether long-lasting, time-sensitive tasks that require continuous concentration could work successfully online, given the confounding effects that might accompany the online deployment of such a task. We ran an archetypal interruption experiment both online and in the lab to investigate whether studies demonstrating such characteristics might be more vulnerable to a loss of control than the short, time-insensitive studies that are representative of the majority of previous online studies. Statistical comparisons showed no significant differences in performance on a number of dimensions. However, there were issues with data quality that stemmed from participants misunderstanding the task. Our findings suggest that long-lasting experiments using time-sensitive performance measures can be run online, but that care must be taken when introducing participants to experimental procedures.

    Shepherding the crowd yields better work

    Crowdy: a framework for supporting socio-technical software ecosystems with stream-based human computation

    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2014. Thesis (Master's) -- Bilkent University, 2014. Includes bibliographical references (leaves 79-82). Kalender, Mert Emin, M.S.
    The scale of collaboration between people and computers has expanded, leading to a new era of computation called crowdsourcing. A variety of problems can be solved with this approach by employing people to complete tasks that cannot be computerized. However, existing approaches focus on simple, independent tasks and fall short of solving complex and sophisticated problems. We present Crowdy, a general-purpose and extensible crowdsourcing platform that lets users perform computations to solve complex problems using both computers and human workers. The platform is built on the stream-processing paradigm, in which operators execute over a continuous stream of data elements. The proposed architecture provides a standard toolkit of operators for computation, along with configuration support to control and coordinate resources. There is no rigid structure or requirement limiting the set of problems that can be solved with the stream-based approach. The stream-based human-computation approach is implemented and verified over different scenarios. Results show that sophisticated problems can be solved without a significant amount of implementation work. Possible improvements are also identified and discussed as promising future work.
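
    A minimal sketch of the stream-based idea, not Crowdy's actual API: machine operators and human operators share the same streaming interface and can be chained over a flow of items. The operator names and the post_to_crowd callback are hypothetical placeholders for a real crowdsourcing-platform call.

        from typing import Callable, Iterable, Iterator

        def machine_op(fn: Callable[[dict], dict]):
            """Wrap a pure function as a streaming operator."""
            def op(stream: Iterable[dict]) -> Iterator[dict]:
                for item in stream:
                    yield fn(item)
            return op

        def human_op(question: str, post_to_crowd: Callable[[str, dict], str]):
            """Wrap a crowd question as a streaming operator; post_to_crowd stands in
            for publishing a microtask and waiting for a worker's answer."""
            def op(stream: Iterable[dict]) -> Iterator[dict]:
                for item in stream:
                    yield {**item, "answer": post_to_crowd(question, item)}
            return op

        def pipeline(stream: Iterable[dict], *operators) -> Iterator[dict]:
            """Compose operators left to right over a continuous stream of items."""
            for op in operators:
                stream = op(stream)
            return stream

        # Example: clean text by machine, then ask the (simulated) crowd to judge it.
        fake_crowd = lambda question, item: "positive"
        results = pipeline(
            [{"text": "  Great product!!  "}],
            machine_op(lambda it: {**it, "text": it["text"].strip().lower()}),
            human_op("Is this review positive or negative?", fake_crowd),
        )
        print(list(results))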

    "Sometimes it's like putting the track in front of the rushing train": Having to be 'on call' for work limits the temporal flexibility of crowdworkers

    Research suggests that the temporal flexibility advertised to crowdworkers by crowdsourcing platforms is limited by both client-imposed constraints (e.g., strict completion times) and crowdworkers' tooling practices (e.g., multitasking). In this paper, we explore an additional contributor to workers' limited temporal flexibility: the design of crowdsourcing platforms, namely requiring crowdworkers to be 'on call' for work. We conducted two studies to investigate the impact of having to be 'on call' on workers' schedule control and job control. We find that being 'on call' impacted: (1) participants' ability to schedule their time and stick to planned work hours, and (2) the pace at which participants worked and took breaks. The results of the two studies suggest that the 'on-demand' nature of crowdsourcing platforms can limit workers' temporal flexibility by reducing schedule control and job control. We conclude the paper by discussing the implications of the results for: (a) crowdworkers, (b) crowdsourcing platforms, and (c) the wider platform economy.

    Subsequence Based Deep Active Learning for Named Entity Recognition

    Active Learning (AL) has been successfully applied to Deep Learning in order to drastically reduce the amount of data required to achieve high performance. Previous works have shown that lightweight architectures for Named Entity Recognition (NER) can achieve optimal performance with only 25% of the original training data. However, these methods do not exploit the sequential nature of language and the heterogeneity of uncertainty within each instance, requiring the labelling of whole sentences. Additionally, this standard method requires that the annotator has access to the full sentence when labelling. In this work, we overcome these limitations by allowing the AL algorithm to query subsequences within sentences and propagate their labels to other sentences. We achieve highly efficient results on OntoNotes 5.0, only requiring 13% of the original training data, and CoNLL 2003, requiring only 27%. This is an improvement of 39% and 37% compared to querying full sentences.
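
    A minimal sketch of the selection step under stated assumptions: given a per-token uncertainty score from the current model, rank short subsequences by mean uncertainty and keep the top spans for annotation. The scoring rule, the span-length limit, and the one-span-per-sentence constraint are illustrative choices rather than the authors' exact algorithm, and the label-propagation step is not shown.

        import numpy as np

        def select_subsequences(token_uncertainties, max_len=5, budget=20):
            """Pick high-uncertainty token spans for labelling.

            token_uncertainties: one 1-D array per unlabelled sentence, holding a
            per-token uncertainty score (e.g. entropy of the predicted tag distribution).
            Returns up to `budget` (sentence_index, start, end) spans.
            """
            candidates = []
            for s_idx, scores in enumerate(token_uncertainties):
                for start in range(len(scores)):
                    for end in range(start + 1, min(start + max_len, len(scores)) + 1):
                        # Mean uncertainty keeps short, sharp spans competitive with
                        # long, mildly uncertain ones.
                        candidates.append((float(scores[start:end].mean()), s_idx, start, end))
            candidates.sort(reverse=True)
            selected, used = [], set()
            for _, s_idx, start, end in candidates:
                if s_idx in used:          # spread the budget across sentences
                    continue
                selected.append((s_idx, start, end))
                used.add(s_idx)
                if len(selected) == budget:
                    break
            return selected

        # Toy pool of two sentences with made-up token uncertainties.
        pool = [np.array([0.1, 0.8, 0.9, 0.2]), np.array([0.05, 0.1, 0.6])]
        print(select_subsequences(pool, max_len=2, budget=2))  # -> [(0, 2, 3), (1, 2, 3)]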