4 research outputs found

    Characterising and Mitigating Aggregation-Bias in Crowdsourced Toxicity Annotations

    Training machine learning (ML) models for natural language processing usually requires large amounts of data, often acquired through crowdsourcing. The way this data is collected and aggregated can affect the outputs of the trained model, for example by discarding labels that differ from the majority. In this paper we investigate how label aggregation can bias ML results towards certain data samples and propose a methodology to highlight and mitigate this bias. Although our work is applicable to any kind of label aggregation for data subject to multiple interpretations, we focus on the effects of the bias introduced by majority voting on toxicity prediction over sentences. Our preliminary results indicate that we can mitigate the majority bias and achieve higher prediction accuracy for minority opinions if we take into account the different labels from annotators when training adapted models, rather than relying on the aggregated labels.
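    The abstract contrasts majority-vote aggregation, which discards minority judgments, with training on the individual annotator labels. A minimal sketch of the two data layouts, with purely illustrative sentences and labels:

    ```python
    from collections import Counter

    # Toy crowdsourced annotations: each sentence was judged by three workers.
    # All texts and labels here are illustrative, not from the paper's dataset.
    annotations = {
        "you are an idiot": ["toxic", "toxic", "not_toxic"],
        "have a nice day": ["not_toxic", "not_toxic", "not_toxic"],
        "that take is garbage": ["toxic", "not_toxic", "not_toxic"],
    }

    # Majority-vote aggregation: one label per sentence; minority views vanish.
    majority = {
        text: Counter(labels).most_common(1)[0][0]
        for text, labels in annotations.items()
    }

    # Disaggregated alternative: keep one (text, label) training example per
    # annotator judgment, so a model can still learn minority interpretations.
    per_annotator = [
        (text, label)
        for text, labels in annotations.items()
        for label in labels
    ]

    print(majority["that take is garbage"])  # "not_toxic" — the toxic vote is lost
    print(len(per_annotator))                # 9 training examples instead of 3
    ```

    The disaggregated layout is what makes the "adapted models" of the abstract possible: a single sentence contributes multiple, possibly conflicting, training examples.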

    CaptureBias: Supporting Media Scholars with Ambiguity-Aware Bias Representation for News Videos

    In this project we explore the presence of ambiguity in textual and visual media and its influence on accurately understanding and capturing bias in news. We study this topic in the context of supporting media scholars and social scientists in their media analysis. Our focus lies on racial and gender bias as well as framing, and the comparison of their manifestation across modalities, cultures and languages. In this paper we lay out a human-in-the-loop approach to investigate the role of ambiguity in the detection and interpretation of bias.

    A Survey of Crowdsourcing in Medical Image Analysis

    Rapid advances in image processing capabilities have been seen across many domains, fostered by the application of machine learning algorithms to "big data". However, within the realm of medical image analysis, advances have been curtailed, in part, due to the limited availability of large-scale, well-annotated datasets. One of the main reasons for this is the high cost often associated with producing large amounts of high-quality meta-data. Recently, there has been growing interest in the application of crowdsourcing for this purpose; a technique that has proven effective for creating large-scale datasets across a range of disciplines, from computer vision to astrophysics. Despite the growing popularity of this approach, there has not yet been a comprehensive literature review to provide guidance to researchers considering using crowdsourcing methodologies in their own medical imaging analysis. In this survey, we review studies applying crowdsourcing to the analysis of medical images, published prior to July 2018. We identify common approaches, challenges and considerations, providing practical guidance to researchers adopting this approach. Finally, we discuss future opportunities for development within this emerging domain.