    How Crowd Worker Factors Influence Subjective Annotations: A Study of Tagging Misogynistic Hate Speech in Tweets

    Full text link
    Crowdsourced annotation is vital both to collecting labelled data to train and test automated content moderation systems and to supporting human-in-the-loop review of system decisions. However, annotation tasks such as judging hate speech are subjective and thus highly sensitive to biases stemming from annotator beliefs, characteristics and demographics. We conduct two crowdsourcing studies on Mechanical Turk to examine annotator bias in labelling sexist and misogynistic hate speech. Results from 109 annotators show that annotator political inclination, moral integrity, personality traits, and sexist attitudes significantly impact annotation accuracy and the tendency to tag content as hate speech. In addition, semi-structured interviews with nine crowd workers provide further insights regarding the influence of subjectivity on annotations. In exploring how workers interpret a task - shaped by complex negotiations between platform structures, task instructions, subjective motivations, and external contextual factors - we see annotations not only impacted by worker factors but also simultaneously shaped by the structures under which they labour. (Comment: Accepted to the 11th AAAI Conference on Human Computation and Crowdsourcing, HCOMP 2023.)

    Effect of conformity on perceived trustworthiness of news in social media

    No full text
    A catalyst for the spread of fake news is the existence of comments that users make in support of, or against, such articles. In this article, we investigate whether critical and supportive comments can induce conformity in how readers perceive trustworthiness of news articles and respond to them. We find that individuals tend to conform to the majority’s opinion of an article’s trustworthiness (58%), especially when challenged by larger majorities who are critical of the article’s credibility, or when less confident about their personal judgment. Moreover, we find that individuals who conform are more inclined to take action: to report articles they perceive as fake, and to comment on and share articles they perceive as real. We conclude with a discussion on the implications of our findings for mitigating the dispersion of fake news on social media.

    Investigating human scale spatial experience

    No full text
    Spatial experience, or how humans experience a given space, has been a pivotal topic, especially in urban-scale environments. On the human scale, HCI researchers have mostly investigated personal meanings or aesthetic and embodied experiences. In this paper, we investigate the human scale as an ensemble of individual spatial features. Through large-scale online questionnaires we first collected a rich set of spatial features that people generally use to characterize their surroundings. Second, we conducted a set of field interviews to develop a more nuanced understanding of the feature identified as most important: perceived safety. Our combined quantitative and qualitative analysis contributes to spatial understanding as a form of context information and presents a timely investigation into the perceived safety of human scale spaces. By connecting our results to the broader scientific literature, we contribute to the field of HCI spatial understanding.