186 research outputs found

    Top Comment or Flop Comment? Predicting and Explaining User Engagement in Online News Discussions

    Full text link
    Comment sections below online news articles enjoy growing popularity among readers. However, the overwhelming number of comments makes it infeasible for the average news consumer to read all of them and hinders engaging discussions. Most platforms display comments in chronological order, which neglects that some comments are more relevant to users and are better conversation starters. In this paper, we systematically analyze user engagement in the form of the upvotes and replies that a comment receives. Based on comment texts, we train a model to distinguish comments that have either a high or a low chance of receiving many upvotes and replies. Our evaluation on user comments from TheGuardian.com compares recurrent and convolutional neural network models and a traditional feature-based classifier. Further, we investigate what makes some comments more engaging than others. To this end, we identify engagement triggers and arrange them in a taxonomy. Explanation methods for neural networks reveal which input words have the strongest influence on our model's predictions. In addition, we evaluate on a dataset of product reviews, which exhibit similar properties to user comments, such as featuring upvotes for helpfulness. Comment: Accepted at the International Conference on Web and Social Media (ICWSM 2020); 11 pages; code and data are available at https://hpi.de/naumann/projects/repeatability/text-mining.htm
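
    As a rough illustration of the classification task described above, the sketch below trains a minimal feature-based engagement classifier on toy comment texts. It uses TF-IDF features with logistic regression as a stand-in for the paper's traditional feature-based baseline; the comments, labels, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of a feature-based baseline for predicting whether a comment
# is likely to attract high or low engagement (upvotes/replies).
# The paper compares recurrent and convolutional neural networks with a
# traditional feature-based classifier; this sketch only illustrates the
# feature-based idea with TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy data: comment texts labeled 1 (high engagement) or 0 (low engagement).
comments = [
    "This analysis misses the economic context entirely, and here is why ...",
    "First!",
    "Has anyone considered the long-term effect on local communities?",
    "lol",
]
labels = [1, 0, 1, 0]

# Unigram/bigram TF-IDF features feed a linear classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(comments, labels)

# Predict the engagement class of an unseen comment.
print(model.predict(["What does this mean for renters in the city?"]))
```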

    Understanding the voluntary moderation practices in live streaming communities

    Get PDF
    Harmful content, such as hate speech, online abuse, harassment, and cyberbullying, proliferates across online communities. Live streaming, a newer form of online community, lets thousands of users (viewers) watch and engage with a broadcaster (streamer) in real time through the chatroom. While the streamer has the camera on and the screen shared, tens of thousands of viewers are watching and messaging in real time, raising concerns about harassment and cyberbullying. To regulate harmful content, i.e., toxic messages in the chatroom, streamers rely on a combination of automated tools and volunteer human moderators (mods) to block users or remove content, a practice termed content moderation. As a mixed medium, live streaming has unique attributes, such as synchronicity and authenticity, that make real-time content moderation challenging. Given the high interactivity and ephemerality of live text-based communication in the chatroom, mods have to make decisions under time constraints and with little instruction, which can cause cognitive overload and take an emotional toll. While much previous work has focused on moderation in asynchronous online communities and social media platforms, very little is known about human moderation in synchronous online communities, where users interact live and decisions must be made quickly. It is necessary to understand mods’ moderation practices in live streaming communities, given their role in supporting community growth. This dissertation centers on volunteer mods in live streaming communities to explore their moderation practices and their relationships with streamers and viewers. Through quantitative and qualitative methods, it focuses on three aspects: the strategies and tools used by moderators, the mental models and decision-making processes mods apply to violators, and the conflict management within the moderation team. The dissertation uses various socio-technical theories to explain mods’ individual and collaborative practices and suggests several design interventions to facilitate the moderation process in live streaming communities.
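
    The abstract describes a workflow in which automated tools handle clear-cut violations while volunteer mods review borderline chat messages in real time. The sketch below illustrates that division of labor in simplified form; the blocklist/watchlist terms, message fields, and review queue are hypothetical assumptions for illustration, not the tools studied in the dissertation.

```python
# Hypothetical sketch of an automated-tool-plus-human-moderator workflow:
# an automated filter removes clearly harmful chat messages, and borderline
# messages are queued for a volunteer mod to review under time pressure.
from dataclasses import dataclass
from queue import Queue

BLOCKLIST = {"slur1", "slur2"}   # placeholder terms an automated tool removes outright
WATCHLIST = {"idiot", "trash"}   # borderline terms escalated to a human mod

@dataclass
class ChatMessage:
    user: str
    text: str

# Messages the automated filter cannot decide on wait here for a human mod.
mod_review_queue: Queue[ChatMessage] = Queue()

def moderate(msg: ChatMessage) -> str:
    """Return 'removed', 'escalated', or 'allowed' for a single chat message."""
    words = set(msg.text.lower().split())
    if words & BLOCKLIST:
        return "removed"              # automated removal, no human involvement
    if words & WATCHLIST:
        mod_review_queue.put(msg)     # human mod makes the final call
        return "escalated"
    return "allowed"

print(moderate(ChatMessage("viewer42", "this stream is trash")))  # -> escalated
```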

    Extreme Digital Speech: Contexts, Responses, and Solutions

    Get PDF
