
    Anyone Can Become a Troll: Causes of Trolling Behavior in Online Discussions

    In online communities, antisocial behavior such as trolling disrupts constructive discussion. While prior work suggests that trolling behavior is confined to a vocal and antisocial minority, we demonstrate that ordinary people can engage in such behavior as well. We propose two primary trigger mechanisms: the individual's mood, and the surrounding context of a discussion (e.g., exposure to prior trolling behavior). Through an experiment simulating an online discussion, we find that both negative mood and seeing troll posts by others significantly increase the probability of a user trolling, and together double this probability. To support and extend these results, we study how these same mechanisms play out in the wild via a data-driven, longitudinal analysis of a large online news discussion community. This analysis reveals temporal mood effects, and explores long-range patterns of repeated exposure to trolling. A predictive model of trolling behavior shows that mood and discussion context together can explain trolling behavior better than an individual's history of trolling. These results combine to suggest that ordinary people can, under the right circumstances, behave like trolls. (Best Paper Award at CSCW 2017.)
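    The predictive setup the abstract describes (mood + discussion context vs. user history) can be sketched as a simple logistic-regression baseline. Everything below is illustrative: the feature names, the synthetic data, and the coefficients are assumptions for demonstration, not the paper's actual features, dataset, or model.

    ```python
    # Illustrative sketch of a trolling-prediction baseline (NOT the paper's model).
    # Hypothetical features: user's recent negative-mood score, number of troll
    # posts already visible in the thread, and the user's own troll history.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    neg_mood = rng.random(n)                 # 0..1, higher = worse mood
    prior_trolling = rng.integers(0, 4, n)   # troll posts seen earlier in thread
    user_history = rng.random(n)             # fraction of user's past posts flagged

    # Synthetic labels: mood and context dominate, history matters less,
    # loosely mirroring the finding that context + mood outpredict history.
    logit = 2.0 * neg_mood + 0.8 * prior_trolling + 0.5 * user_history - 2.5
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = np.column_stack([neg_mood, prior_trolling, user_history])
    model = LogisticRegression().fit(X, y)
    acc = model.score(X, y)
    print(f"in-sample accuracy: {acc:.2f}")
    ```

    Comparing this model against one trained on `user_history` alone would reproduce, in miniature, the kind of feature-set comparison the abstract reports.
    
    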

    User Engagement and the Toxicity of Tweets

    Twitter is one of the most popular online micro-blogging and social networking platforms. This platform allows individuals to freely express opinions and interact with others regardless of geographic barriers. However, with the good that online platforms offer also comes the bad. Twitter and other social networking platforms have created new spaces for incivility. With the growing interest in the consequences of uncivil behavior online, understanding how a toxic comment impacts online interactions is imperative. We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations and the relationship between toxicity and user engagement. We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue compared to non-toxic conversations. However, within toxic conversations, toxicity is positively associated with more individual Twitter users participating in conversations. This suggests that, overall, more visible conversations are more likely to include toxic replies. Additionally, we examine the sequencing of toxic tweets and its impact on conversations. Toxic tweets often occur as the main tweet or as the first reply, and lead to greater overall conversation toxicity. We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation, such that whether the first reply is toxic or non-toxic sets the stage for the overall toxicity of the conversation, following the idea that hate can beget hate.

    A Systematic Literature Review on Cyberbullying in Social Media: Taxonomy, Detection Approaches, Datasets, and Future Research Directions

    In the area of Natural Language Processing, sentiment analysis, also called opinion mining, aims to extract human thoughts, beliefs, and perceptions from unstructured text. In light of social media's rapid growth and the influx of individual comments, reviews, and feedback, it has evolved into an attractive, challenging research area. Identifying toxic textual content is one of the most common problems on social media. Anonymity and concealment of identity are common on the Internet among people from a wide range of cultures and beliefs. Freedom of speech, anonymity, and inadequate social media regulations make toxic online environments and cyberbullying significant issues, which require systems for automatic detection and prevention. Diverse research based on different approaches and languages is under way, but a comprehensive analysis examining it from all angles is lacking. This systematic literature review is therefore conducted to survey the research done to date by the community on the classification of cyberbullying in the textual modality. It states the definition, taxonomy, properties, outcomes, and roles of cyberbullying, along with other forms of bullying and offensive behavior on social media. This article also presents the latest popular benchmark datasets on cyberbullying, along with their number of classes (binary/multiple); reviews the state-of-the-art methods to detect cyberbullying and abusive content on social media; and discusses the factors that drive offenders to engage in offensive activity, preventive actions to avoid online toxicity, and cyber laws in different countries. Finally, we identify and discuss the challenges, solutions, and future research directions that serve as a reference for overcoming cyberbullying in social media.
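    A common baseline among the detection approaches such reviews survey is a bag-of-words classifier over labeled comments. The sketch below shows a minimal binary (bullying vs. benign) pipeline; the tiny training set is invented purely for illustration and is far too small for real use.

    ```python
    # Minimal sketch of a text-classification baseline for cyberbullying
    # detection (binary classes, as in many surveyed benchmark datasets).
    # The toy training examples below are invented for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "you are so stupid nobody likes you",
        "go away loser everyone hates you",
        "great point, thanks for sharing",
        "congrats on the new job, well deserved",
        "shut up idiot you know nothing",
        "happy to help, let me know anytime",
    ]
    labels = [1, 1, 0, 0, 1, 0]  # 1 = bullying, 0 = benign

    # TF-IDF features + logistic regression, a standard shallow baseline
    # against which the reviewed deep-learning methods are often compared.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    pred = clf.predict(["you are an idiot"])[0]
    print("predicted class:", pred)
    ```

    Real systems would swap the toy list for one of the benchmark datasets the review catalogs and evaluate with held-out data rather than training-set predictions.
    
    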

    How people perceive malicious comments differently: factors influencing the perception of maliciousness in online news comments

    This study proposes a comprehensive model to investigate the factors that influence the perceived maliciousness of online news comments. The study specifically examines individual factors, including demographic characteristics (e.g., gender and age), personality traits (e.g., empathy and attitudes toward online news comments), and reading-related factors (e.g., the amount of news comment reading). Contextual factors such as issue involvement, perceived peer behavior, and the presence of malicious comments in news articles are also considered. The results suggest that most of the proposed variables have a significant impact on the perceived maliciousness of online news comments, except for morality and issue involvement. The findings have important theoretical implications for research on malicious online news comments and provide practical guidelines for online news platforms on how to reduce malicious comments by visualizing them alongside other news comments.