
    Challenges in Modifying Existing Scales for Detecting Harassment in Individual Tweets

    In an effort to create new sociotechnical tools to combat online harassment, we developed a scale to detect and measure verbal violence within individual tweets. Unfortunately, we found that the scale, based on scales effective at detecting harassment offline, was unreliable for tweets. Here, we begin with information about the development and validation of our scale, then discuss the scale’s shortcomings for detecting harassment in tweets, and explore what we can learn from this scale’s failures. We explore how rarity, context, and individual coders’ differences create challenges for detecting verbal violence in individual tweets. We also examine differences in on- and offline harassment that limit the utility of existing harassment measures for online contexts. We close with a discussion of potential avenues for future work in automated harassment detection.
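    One concrete way to surface the coder-disagreement problem the abstract describes is to compute an inter-rater agreement statistic over per-tweet ratings. The sketch below is purely illustrative, not the authors' procedure: the 0-3 severity scale and the two coders' ratings are invented, and quadratic-weighted Cohen's kappa is just one reasonable agreement measure for ordinal labels.

```python
# Hypothetical sketch: quantifying inter-coder reliability on per-tweet
# harassment ratings. The severity scale (0 = none ... 3 = severe) and the
# two coders' ratings below are invented for illustration only.
from sklearn.metrics import cohen_kappa_score

# Each list holds one coder's severity rating for the same set of tweets.
coder_a = [0, 0, 2, 1, 0, 3, 0, 1, 0, 2]
coder_b = [0, 1, 1, 1, 0, 2, 0, 0, 0, 3]

# Quadratic weighting penalizes larger disagreements more heavily,
# which suits an ordinal severity scale.
kappa = cohen_kappa_score(coder_a, coder_b, weights="quadratic")
print(f"Quadratic-weighted Cohen's kappa: {kappa:.2f}")
```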

    Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter

    Over the past few years, online bullying and aggression have become increasingly prominent, and manifested in many different forms on social media. However, there is little work analyzing the characteristics of abusive users and what distinguishes them from typical social media users. In this paper, we start addressing this gap by analyzing tweets containing a large amount of abusive content. We focus on a Twitter dataset revolving around the Gamergate controversy, which led to many incidents of cyberbullying and cyberaggression on various gaming and social media platforms. We study the properties of the users tweeting about Gamergate, the content they post, and the differences in their behavior compared to typical Twitter users. We find that while their tweets are often seemingly about aggressive and hateful subjects, “Gamergaters” do not exhibit common expressions of online anger, and in fact primarily differ from typical users in that their tweets are less joyful. They are also more engaged than typical Twitter users, which is an indication as to how and why this controversy is still ongoing. Surprisingly, we find that Gamergaters are less likely to be suspended by Twitter, so we analyze the properties of suspended users to identify what distinguishes them from typical users and what may have led to their suspension. We perform an unsupervised machine learning analysis to detect clusters of users who, though currently active, could be considered for suspension since they exhibit behaviors similar to those of suspended users. Finally, we confirm the usefulness of our analyzed features by emulating the Twitter suspension mechanism with a supervised learning method, achieving very good precision and recall. Comment: In the 28th ACM Conference on Hypertext and Social Media (ACM HyperText 2017).
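    A rough sketch of the two analysis steps mentioned in the abstract, using generic stand-ins rather than the paper's actual features or models: an unsupervised clustering of per-user behavioural vectors, followed by a supervised classifier that emulates the suspension decision and reports precision and recall. The feature matrix and labels below are synthetic.

```python
# Illustrative sketch only: features are generic per-user behavioural
# stand-ins (e.g. tweet count, hashtags per tweet, sentiment, account age),
# and the data is randomly generated.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))           # per-user feature vectors (synthetic)
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # 1 = suspended, 0 = active (synthetic)

# Unsupervised step: cluster users to find groups whose behaviour
# resembles that of suspended users.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Supervised step: emulate the suspension decision, report precision/recall.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred), "recall:", recall_score(y_te, pred))
```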

    Measuring #GamerGate: A Tale of Hate, Sexism, and Bullying

    Over the past few years, online aggression and abusive behaviors have occurred in many different forms and on a variety of platforms. In extreme cases, these incidents have evolved into hate, discrimination, and bullying, and even materialized into real-world threats and attacks against individuals or groups. In this paper, we study the Gamergate controversy. Started in August 2014 in the online gaming world, it quickly spread across various social networking platforms, ultimately leading to many incidents of cyberbullying and cyberaggression. We focus on Twitter, presenting a measurement study of a dataset of 340k unique users and 1.6M tweets, examining the properties of these users, the content they post, and how they differ from random Twitter users. We find that users involved in this “Twitter war” tend to have more friends and followers, are generally more engaged, and post tweets with negative sentiment, less joy, and more hate than random users. We also perform preliminary measurements on how the Twitter suspension mechanism deals with such abusive behaviors. While we focus on Gamergate, our methodology for collecting and analyzing tweets related to aggressive and bullying activities is of independent interest.
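    As a simple illustration of the kind of comparison such a measurement study makes, the sketch below contrasts one behavioural metric between a Gamergate-related group and a random baseline using a non-parametric test. The follower counts are synthetic, and the choice of metric and test are assumptions rather than the paper's exact analysis.

```python
# Minimal sketch, not the paper's pipeline: compare a heavy-tailed behavioural
# metric (here, follower counts) between two user groups. Data is synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
gamergate_followers = rng.lognormal(mean=6.0, sigma=1.2, size=5000)
baseline_followers = rng.lognormal(mean=5.5, sigma=1.2, size=5000)

# A non-parametric test is a reasonable choice for heavy-tailed count data.
stat, p = mannwhitneyu(gamergate_followers, baseline_followers,
                       alternative="greater")
print(f"median GG: {np.median(gamergate_followers):.0f}, "
      f"median baseline: {np.median(baseline_followers):.0f}, p={p:.3g}")
```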

    A review on deep-learning-based cyberbullying detection

    Bullying is undesirable behavior by others that harms an individual physically, mentally, or socially. Cyberbullying is a virtual form (e.g., textual or image-based) of bullying or harassment, also known as online bullying. Cyberbullying detection is a pressing need in today’s world, as the prevalence of cyberbullying is continually growing, resulting in mental health issues. Conventional machine learning models were previously used to identify cyberbullying. However, current research demonstrates that deep learning (DL) surpasses traditional machine learning algorithms in identifying cyberbullying for several reasons, including its ability to handle extensive data, efficiently classify text and images, and extract features automatically through hidden layers, among others. This paper reviews the existing surveys and identifies the gaps in those studies. We also present a deep-learning-based defense ecosystem for cyberbullying detection, including data representation techniques and different deep-learning-based models and frameworks. We critically analyze the existing DL-based cyberbullying detection techniques and identify their significant contributions and the future research directions they present. We also summarize the datasets being used, including the DL architectures applied and the tasks accomplished on each dataset. Finally, we present several challenges faced by existing researchers and the open issues to be addressed in the future.
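    For orientation, here is a minimal PyTorch sketch of the kind of deep text classifier such surveys cover: token embeddings pooled into a fixed-length vector followed by a linear layer. The vocabulary size, embedding dimension, and the two toy inputs are placeholder assumptions, not a model from any of the reviewed works.

```python
# Minimal sketch of a deep text classifier: pooled token embeddings + linear
# layer. All sizes and inputs are placeholders for illustration.
import torch
import torch.nn as nn

class CyberbullyingClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=64, num_classes=2):
        super().__init__()
        # EmbeddingBag averages token embeddings, giving one vector per text.
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids, offsets):
        pooled = self.embedding(token_ids, offsets)
        return self.fc(pooled)

model = CyberbullyingClassifier()
# Two toy "texts" packed as one flat tensor of token ids plus start offsets.
tokens = torch.tensor([4, 81, 902, 7, 15, 4], dtype=torch.long)
offsets = torch.tensor([0, 3], dtype=torch.long)
logits = model(tokens, offsets)          # shape: (2, num_classes)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 0]))
loss.backward()
```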

    Detecting cyberbullying and cyberaggression in social media

    Cyberbullying and cyberaggression are increasingly worrisome phenomena affecting people across all demographics. More than half of young social media users worldwide have been exposed to such prolonged and/or coordinated digital harassment. Victims can experience a wide range of emotions and negative consequences, such as embarrassment, depression, and isolation from other community members, which carry the risk of escalating to even more critical outcomes, such as suicide attempts. In this work, we take the first concrete steps to understand the characteristics of abusive behavior on Twitter, one of today’s largest social media platforms. We analyze 1.2 million users and 2.1 million tweets, comparing users participating in discussions around seemingly normal topics, like the NBA, to those more likely to be hate-related, such as the Gamergate controversy or gender pay inequality at the BBC. We also explore specific manifestations of abusive behavior, i.e., cyberbullying and cyberaggression, in one of the hate-related communities (Gamergate). We present a robust methodology to distinguish bullies and aggressors from normal Twitter users by considering text, user, and network-based attributes. Using various state-of-the-art machine-learning algorithms, we classify these accounts with over 90% accuracy and AUC. Finally, we discuss the current status of Twitter user accounts marked as abusive by our methodology and study the performance of potential mechanisms that Twitter could use to suspend users in the future.
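    A hedged sketch of the classification setup the abstract describes, combining TF-IDF text features with numeric user- and network-level attributes and reporting accuracy and AUC. The tiny synthetic dataset, the specific attributes (followers, friends, reciprocity), and the choice of logistic regression are illustrative assumptions, not the paper's pipeline.

```python
# Illustrative only: combine text features with user/network attributes to
# separate abusive from normal accounts. All data below is synthetic.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

texts = ["you are pathetic", "great game last night", "nobody wants you here",
         "congrats on the launch"] * 50
user_net = np.tile([[10, 200, 0.1], [500, 450, 0.6],
                    [5, 120, 0.05], [800, 900, 0.7]], (50, 1))  # followers, friends, reciprocity
labels = np.array([1, 0, 1, 0] * 50)                            # 1 = abusive

text_feats = TfidfVectorizer().fit_transform(texts)
X = hstack([text_feats, csr_matrix(user_net)])                  # text + user/network features

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)),
      "AUC:", roc_auc_score(y_te, proba))
```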

    Social Bot in Social Media: Detections and Impacts of Social Bot on Twitter Users

    A social bot is a computer algorithm that automatically produces content and interacts with humans on social media, trying to emulate and possibly alter their behavior. Social bots have inhabited social media platforms for the past few years. Although the initial intention behind a social bot might be benign, social bots can also have negative implications for society. For example, in the aftermath of the Boston Marathon bombing, many tweets were retweeted without people verifying their accuracy. Social bots therefore have the potential to spread fake news and incite public chaos. For example, after the Parkland, Florida school shooting, Russian propaganda bots tried to seize on divisive issues online to sow discord in the United States. This study describes a questionnaire survey of Twitter users about their Twitter usage, ways to detect social bots on Twitter, sentiments towards social bots, and how users protect themselves against harmful social bots. The survey also uses an experimental approach in which participants upload a screenshot of a social bot. The results of the survey show that Twitter bots bring more harm than benefit to Twitter users. However, the advancement of social bots has been so great that it has become hard for humans to distinguish real Twitter users from fake ones. That is why it is very important for the computing community to work on advanced methods to automatically detect social bots, or to discriminate between humans and bots. Until that process can be fully automated, we need to continue educating Twitter users about ways to protect themselves against harmful social bots. Master of Science in Information Science

    Cyber Places, Crime Patterns, and Cybercrime Prevention: An Environmental Criminology and Crime Analysis approach through Data Science

    For years, academics have examined the potential usefulness of traditional criminological theories to explain and prevent cybercrime. Some analytical frameworks from Environmental Criminology and Crime Analysis (ECCA), such as the Routine Activities Approach and Situational Crime Prevention, are frequently used in theoretical and empirical research for this purpose. These efforts have led to a better understanding of how crime opportunities are generated in cyberspace, thus contributing to advancing the discipline. However, with a few exceptions, other ECCA analytical frameworks, especially those based on the idea of geographical place, have been largely ignored. The limited attention devoted to ECCA from a global perspective means its true potential to prevent cybercrime has remained unknown to date. In this thesis we aim to overcome this gap and show the potential of some of the essential concepts that underpin the ECCA approach, such as places and crime patterns, to analyse and prevent four crimes committed in cyberspace. To this end, the dissertation is structured in two phases: first, a proposal for transposing ECCA's fundamental propositions to cyberspace; and second, four empirical studies in which hypotheses derived from this approach are tested through Data Science. The first study tests a number of premises of repeat victimization in a sample of more than nine million self-reported website defacements. The second examines the precipitators of crime at cyber places where allegedly fixed match results are advertised and the hyperlinked network they form. The third explores the situational contexts in which repeated online harassment occurs among a sample of non-university students. And the fourth builds two metadata-driven machine learning models to detect online hate speech in a sample of Twitter messages collected after a terrorist attack. General results show (1) that cybercrimes are not randomly distributed in space, time, or among people; and (2) that the environmental features of the cyber places where they occur determine the emergence of crime opportunities. Overall, we conclude that the ECCA approach and, in particular, its place-based analytical frameworks can also be valid for analysing and preventing crime in cyberspace. We anticipate that this work can guide future research in this area, including the design of secure online environments, the allocation of preventive resources to high-risk cyber places, and the implementation of new evidence-based situational prevention measures.
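    The repeat victimization premise tested in the first study can be illustrated with a simple concentration check: do a small share of websites account for most defacement incidents? The sketch below uses synthetic, heavy-tailed incident counts; the distribution and thresholds are assumptions, not the thesis's data or analysis.

```python
# Illustrative sketch, not the thesis's analysis: measure how defacement
# incidents concentrate on a small share of websites. Counts are synthetic.
import numpy as np

rng = np.random.default_rng(2)
# Heavy-tailed synthetic incident counts for 10,000 hypothetical websites.
incidents_per_site = rng.zipf(a=2.0, size=10_000)

counts = np.sort(incidents_per_site)[::-1]
total = counts.sum()
top_10pct = counts[: len(counts) // 10].sum()
repeat_share = counts[counts > 1].sum() / total

print(f"Share of incidents hitting the top 10% of sites: {top_10pct / total:.1%}")
print(f"Share of incidents that are repeat victimizations: {repeat_share:.1%}")
```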

    The Role of Information Communication Technologies (ICTs) in Shaping Identity Threats and Responses

    With the rising use of social media, people are increasingly experiencing, and responding to, identity threats online. This sometimes leads to online backlash via “cybermobs” or the creation of online social movements that extend offline. Prior information systems (IS) research on identity threats and responses largely focuses on information communication technology (ICT) implementations within organizations in an offline context. We therefore lack an understanding of ICT-mediated identity threats and responses, and of ways to promote healthier and more productive interactions online. This two-essay dissertation seeks to fill this gap. Essay 1 combines a review of ICT-mediated identity threats with a qualitative study (based on interviews) to examine: (a) the types of identity threats that ICT enables; and (b) the nature of ICT's effects on identity threats. Essay 2 is a mixed-methods study that investigates how the identity threat and response process (ITARP) can evolve when mediated by ICT. The study is based on event sequence analysis of ICT-mediated ITARP in 50 viral stories where identity threats were triggered online, supplemented by interview data from individuals involved in some of the stories. Results suggest four distinct patterns of ICT-mediated identity threats and responses. Cumulatively, the results from the two essays highlight the role of digital media in influencing both ICT-mediated identity threats and the process of identity threat and response.