
    Trajectories of Blocked Community Members: Redemption, Recidivism and Departure

    Community norm violations can impair constructive communication and collaboration online. As a defense mechanism, community moderators often address such transgressions by temporarily blocking the perpetrator. Such actions, however, come with the cost of potentially alienating community members. Given this tradeoff, it is essential to understand to what extent, and in which situations, this common moderation practice is effective in reinforcing community rules. In this work, we introduce a computational framework for studying the future behavior of blocked users on Wikipedia. After their block expires, they can take several distinct paths: they can reform and adhere to the rules, but they can also recidivate, or straight-out abandon the community. We reveal that these trajectories are tied to factors rooted both in the characteristics of the blocked individual and in whether they perceived the block to be fair and justified. Based on these insights, we formulate a series of prediction tasks aiming to determine which of these paths a user is likely to take after being blocked for their first offense, and demonstrate the feasibility of these new tasks. Overall, this work builds towards a more nuanced approach to moderation by highlighting the tradeoffs that are in play.
    Comment: To appear in Proceedings of the 2019 World Wide Web Conference (WWW '19), May 13-17, 2019, San Francisco, CA, USA. Code and data available as part of ConvoKit: convokit.cornell.ed
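    The prediction tasks described above can be framed as multi-class classification over post-block trajectories. The sketch below is a minimal illustration under that framing; the feature names and synthetic data are assumptions for demonstration, not the paper's actual features (the paper's code and data ship with ConvoKit).

    ```python
    # Hedged sketch: predicting a blocked user's post-block trajectory
    # (redemption / recidivism / departure) as multi-class classification.
    # Features and data below are illustrative placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    # Hypothetical per-user features: log prior edit count, tenure in days,
    # and whether the user contested the block as unfair.
    X = rng.normal(size=(500, 3))
    y = rng.integers(0, 3, size=500)  # 0 = redemption, 1 = recidivism, 2 = departure

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test),
                                target_names=["redemption", "recidivism", "departure"]))
    ```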

    Toxicity and Sentiment: How Sentiment Analysis Can Aid Toxicity Detection

    Automatic toxicity detection of online content is a major research field today. Moderators cannot manually filter all the messages posted every day, and users constantly find new ways to circumvent classic filters. In this master's thesis, I explore the benefits of sentiment detection for three major challenges of automatic toxicity detection: detecting toxic content more accurately, making filters harder to circumvent, and predicting conversations at high risk of becoming toxic. The first two challenges are studied in the first article. Our main intuition is that it is harder for a malicious user to hide the toxic sentiment of their message than to change a few toxic keywords. To test this hypothesis, a sentiment detection tool is built and used to measure the correlation between sentiment and toxicity. Next, the sentiment scores are used as features to train a toxicity detection model, and the model is tested in both a classic context and a subversive context, where messages are altered to simulate a user trying to evade a toxicity filter. The conclusion of these tests is that sentiment information helps toxicity detection, especially when messages are subverted. The third challenge is the subject of our second paper, whose objective is to validate whether the sentiments of the first messages of a conversation can help predict if it will derail into toxicity. The same sentiment detection tool is used, in combination with other features developed in previous related work. Our results show that sentiment helps improve that task as well.
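    The core idea of combining sentiment scores with lexical features for toxicity detection can be illustrated with a minimal sketch. NLTK's VADER stands in here for the thesis's own sentiment tool, and the tiny dataset is invented for demonstration only.

    ```python
    # Hedged sketch: sentiment scores as extra features alongside bag-of-words
    # for toxicity classification. Requires nltk.download("vader_lexicon").
    import numpy as np
    from nltk.sentiment.vader import SentimentIntensityAnalyzer
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from scipy.sparse import hstack

    messages = [
        "thanks for the helpful review",
        "you are completely useless, get lost",
        "interesting point, I had not considered that",
        "nobody wants you here, just quit",
    ]
    labels = [0, 1, 0, 1]  # 1 = toxic

    analyzer = SentimentIntensityAnalyzer()
    vectorizer = TfidfVectorizer()

    bow = vectorizer.fit_transform(messages)                    # lexical features
    sent = np.array([[analyzer.polarity_scores(m)["compound"]]  # sentiment feature
                     for m in messages])
    X = hstack([bow, sent])                                     # combine both views

    clf = LogisticRegression().fit(X, labels)
    print(clf.predict(vectorizer.transform(["get lost, loser"]).shape and X[:1]))
    ```

    The intuition tested in the thesis is that a keyword substitution (e.g. replacing a slur with a misspelling) changes the bag-of-words features but leaves the negative sentiment signal largely intact, which is why the combined model degrades less under subversion.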

    Don't Let Me Be Misunderstood: Comparing Intentions and Perceptions in Online Discussions

    Discourse involves two perspectives: a person's intention in making an utterance and others' perception of that utterance. The misalignment between these perspectives can lead to undesirable outcomes, such as misunderstandings, low productivity and even overt strife. In this work, we present a computational framework for exploring and comparing both perspectives in online public discussions. We combine logged data about public comments on Facebook with a survey of over 16,000 people about their intentions in writing these comments or about their perceptions of comments that others had written. Unlike previous studies of online discussions that have largely relied on third-party labels to quantify properties such as sentiment and subjectivity, our approach also directly captures what the speakers actually intended when writing their comments. In particular, our analysis focuses on judgments of whether a comment is stating a fact or an opinion, since these concepts were shown to be often confused. We show that intentions and perceptions diverge in consequential ways. People are more likely to perceive opinions than to intend them, and linguistic cues that signal how an utterance is intended can differ from those that signal how it will be perceived. Further, this misalignment between intentions and perceptions can be linked to the future health of a conversation: when a comment whose author intended to share a fact is misperceived as sharing an opinion, the subsequent conversation is more likely to derail into uncivil behavior than when the comment is perceived as intended. Altogether, these findings may inform the design of discussion platforms that better promote positive interactions.
    Comment: Proceedings of The Web Conference (WWW) 202
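    The intention-perception comparison described above amounts to cross-tabulating two labels per comment. The sketch below illustrates that measurement with invented labels; the actual study pairs Facebook comments with author and reader survey responses.

    ```python
    # Hedged sketch: quantifying fact-vs-opinion misalignment between what the
    # author intended and what a reader perceived. Labels are placeholders.
    import pandas as pd

    df = pd.DataFrame({
        "intended":  ["fact", "fact", "opinion", "fact", "opinion", "opinion"],
        "perceived": ["fact", "opinion", "opinion", "opinion", "opinion", "fact"],
    })

    # Rows = intention, columns = perception; off-diagonal cells are misperceptions.
    confusion = pd.crosstab(df["intended"], df["perceived"], normalize="index")
    print(confusion)

    # Share of fact-intended comments misperceived as opinion, the case the
    # paper links to later conversation derailment.
    print(f"fact intended, perceived as opinion: {confusion.loc['fact', 'opinion']:.0%}")
    ```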

    The Bullying Game: Sexism Based Toxic Language Analysis on Online Games Chat Logs by Text Mining

    As a unique type of social network, the online gaming industry is a fast-growing, changing, and male-dominated field that attracts players from diverse backgrounds. Because its users, developers, players, and investors are predominantly male, non-inclusiveness and gender inequality remain salient problems in the community. In online gaming communities, most women players report toxic and offensive language or experiences of verbal abuse. Symbolic interactionists and feminists hold that words matter, since the use of particular language and terms can dehumanize and harm particular groups such as women. Identifying and reporting the toxic behavior, sexism, and harassment that occur in online games is critical for preventing cyberbullying, and it will help gender diversity and equality grow in the online gaming industry. However, research on this topic is still rare, apart from a few milestone studies. This paper aims to contribute to the theory and practice of sexist toxic language detection in the online gaming community through the automatic detection and analysis of toxic comments in online game chat logs. We adopted the MaXQDA tool as a data visualization technique to reveal the toxic words most frequently used against women in online gaming communities. We also applied a Naïve Bayes classifier for text mining to classify whether chat log content is sexist and toxic, then refined the model with a Laplace estimator and re-tested its accuracy. The study revealed that the accuracy of the Naïve Bayes classifier did not change with the Laplace estimator. The findings are expected to raise awareness about the use of gender-based toxic language in the online gaming community. Moreover, the proposed mining model can inspire similar research on practical tools to help moderate the use of sexist toxic language and rid these communities of gender-based discrimination and sexist bullying.
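    The classifier described above (Naïve Bayes with a Laplace estimator) can be sketched compactly. The chat lines below are invented placeholders, not data from the study, and scikit-learn's MultinomialNB stands in for whichever implementation was used.

    ```python
    # Hedged sketch: Naive Bayes text classification of game chat with Laplace
    # (add-one) smoothing. alpha=1.0 is the Laplace estimator; setting alpha
    # near 0 effectively disables it, allowing the with/without comparison the
    # abstract describes.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    chat_lines = [
        "nice shot, well played",
        "go back to the kitchen, girls can't aim",
        "gg everyone, fun match",
        "of course we lost, we had a girl on the team",
    ]
    labels = [0, 1, 0, 1]  # 1 = sexist/toxic

    model = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
    model.fit(chat_lines, labels)
    print(model.predict(["gg, nice teamwork"]))
    ```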