
    Supporting Comment Moderators in Identifying High Quality Online News Comments

    Online comments submitted by readers of news articles can provide valuable feedback and critique, personal views and perspectives, and opportunities for discussion. The varying quality of these comments necessitates that publishers remove the low quality ones, but there is also a growing awareness that identifying and highlighting high quality contributions can promote the general quality of the community. In this paper we take a user-centered design approach towards developing a system, CommentIQ, which supports comment moderators in interactively identifying high quality comments using a combination of comment analytic scores as well as visualizations and flexible UI components. We evaluated this system with professional comment moderators working at local and national news outlets and provide insights into the utility and appropriateness of features for journalistic tasks, as well as how the system may enable or transform journalistic practices around online comments.

    Antisocial Behavior in Online Discussion Communities

    User contributions in the form of posts, comments, and votes are essential to the success of online communities. However, allowing user participation also invites undesirable behavior such as trolling. In this paper, we characterize antisocial behavior in three large online discussion communities by analyzing users who were banned from these communities. We find that such users tend to concentrate their efforts in a small number of threads, are more likely to post irrelevantly, and are more successful at garnering responses from other users. Studying the evolution of these users from the moment they join a community up to when they get banned, we find that not only do they write worse than other users over time, but they also become increasingly less tolerated by the community. Further, we discover that antisocial behavior is exacerbated when community feedback is overly harsh. Our analysis also reveals distinct groups of users with different levels of antisocial behavior that can change over time. We use these insights to identify antisocial users early on, a task of high practical importance to community maintainers.
    Comment: ICWSM 201

    Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions

    In online communities, antisocial behavior such as trolling disrupts constructive discussion. While prior work suggests that trolling behavior is confined to a vocal and antisocial minority, we demonstrate that ordinary people can engage in such behavior as well. We propose two primary trigger mechanisms: the individual's mood, and the surrounding context of a discussion (e.g., exposure to prior trolling behavior). Through an experiment simulating an online discussion, we find that both negative mood and seeing troll posts by others significantly increase the probability of a user trolling, and that together they double this probability. To support and extend these results, we study how these same mechanisms play out in the wild via a data-driven, longitudinal analysis of a large online news discussion community. This analysis reveals temporal mood effects and explores long-range patterns of repeated exposure to trolling. A predictive model of trolling behavior shows that mood and discussion context together can explain trolling behavior better than an individual's history of trolling. These results combine to suggest that ordinary people can, under the right circumstances, behave like trolls.
    Comment: Best Paper Award at CSCW 201

    Hybrid moderation in the newsroom: Recommending featured posts to content moderators

    Online news outlets are grappling with the moderation of user-generated content within their comment sections. We present a recommender system based on ranking class probabilities to support and empower the moderator in choosing featured posts, a time-consuming task. By combining user and textual content features we obtain an optimal classification F1-score of 0.44 on the test set. Furthermore, we observe an optimum mean NDCG@5 of 0.87 on a large set of validation articles. As an expert evaluation, content moderators assessed the output of a random selection of articles by choosing comments to feature based on the recommendations, which resulted in an NDCG score of 0.83. We conclude, first, that adding text features yields the best score and, second, that while choosing featured content remains somewhat subjective, content moderators found suitable comments in all but one of the evaluated sets of recommendations. We end the paper by analyzing our best-performing model, a step towards transparency and explainability in hybrid content moderation.
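    The NDCG@5 metric reported in this abstract can be illustrated with a minimal sketch. The function below is a standard textbook formulation, not the paper's implementation, and the binary relevance labels (1 if a recommended comment was actually featured by a moderator, 0 otherwise) are an assumed encoding:

    ```python
    import math

    def dcg_at_k(relevances, k):
        """Discounted cumulative gain over the top-k ranked items."""
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

    def ndcg_at_k(relevances, k):
        """NDCG@k: DCG of the predicted ranking divided by the DCG of the
        ideal ranking (items sorted by true relevance)."""
        ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
        if ideal_dcg == 0:
            return 0.0
        return dcg_at_k(relevances, k) / ideal_dcg

    # Hypothetical example: relevance of comments in the order the model ranked them.
    # A score of 1.0 would mean all truly featured comments were ranked on top.
    score = ndcg_at_k([1, 0, 1, 0, 0, 1], 5)
    ```

    Averaging this per-article score over a validation set yields the mean NDCG@5 the abstract refers to.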

    Mapping Moderation: Cultural Intermediation Work and the Field of Journalism in Online Newsrooms

    This study investigates the work of moderating and managing audience comments in two Australian online news organisations to find how their staff conceive, practice, value, and develop these new intermediary duties. Using a Bourdieusian analytical framework, it examines whether these work roles operate as new forms of cultural intermediation in news production and how they are influenced by ‘the field of journalism’, which comprises journalism’s power relations, norms, logics and history. Using interviews and participant observation, this study comprehensively documents the distinct objectives, tasks and practices of comment moderators and community managers, as well as identifying the people, aspects of social and cultural capital, and organisational systems that have influenced their approaches to the work. The study demonstrates that comment moderation and management work functions as cultural intermediation between the organisation, its readers, and commenters as fringe producers, with a focus on communicating the organisation’s vision for comment sections. However, it finds a distinction between the tasks and workplace status of comment moderators and community managers, and reveals the importance of prioritisation in shaping the flow and tenor of discussions. The field of journalism significantly influences this work, as workers with journalism experience evaluated comments based on their contribution or adherence to journalistic values. Participants’ field alignment also affected how they moderated comments or managed their community, with most comparing their comment sections and practices to those of other prominent journalistic organisations. These results show the need for further development of cultural intermediation strategies and techniques to enable online news organisations to build constructive commenting communities while communicating their editorial values.

    Pathways to Online Hate: Behavioural, Technical, Economic, Legal, Political & Ethical Analysis.

    The Alfred Landecker Foundation seeks to create a safer digital space for all. The work of the Foundation helps to develop research, convene stakeholders to share valuable insights, and support entities that combat online harms, specifically online hate, extremism, and disinformation. Overall, the Foundation seeks to reduce hate and harm tangibly and measurably in the digital space by using its resources in the most impactful way. It also aims to assist in building an ecosystem that can prevent, minimise, and mitigate online harms while at the same time preserving open societies and healthy democracies. A non-exhaustive literature review was undertaken to explore the main facets of harm and hate speech in the evolving online landscape and to analyse behavioural, technical, economic, legal, political and ethical drivers; key findings are detailed in this report.