9,169 research outputs found

    Effectiveness of dismantling strategies on moderated vs. unmoderated online social platforms

    Online social networks are the perfect test bed to better understand large-scale human behavior in interacting contexts. Although they are broadly used and studied, little is known about how their terms of service and posting rules affect the way users interact and information spreads. Acknowledging the relation between network connectivity and functionality, we compare the robustness of two different online social platforms, Twitter and Gab, with respect to dismantling strategies based on the recursive censoring of users characterized by social prominence (degree) or intensity of inflammatory content (sentiment). We find that the moderated (Twitter) vs. unmoderated (Gab) character of the network is not a discriminating factor for intervention effectiveness. We find, however, that more complex strategies based upon the combination of topological and content features may be effective for network dismantling. Our results provide useful indications for designing better strategies to counter the production and dissemination of anti-social content on online social platforms.
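
    A minimal sketch of the degree-based variant of such a dismantling strategy, using networkx on a synthetic Barabási–Albert graph as a stand-in for the follower network; the graph, the removal budget, and the function name below are illustrative assumptions rather than the paper's setup, and a sentiment-based variant would simply swap the ranking key:

        # Hypothetical sketch: recursively "censor" the most prominent remaining user and
        # track how the largest connected component of the network shrinks.
        import networkx as nx

        def dismantle_by_degree(G, fraction=0.05):
            """Remove the highest-degree node at each step; return component sizes."""
            G = G.copy()
            sizes = []
            for _ in range(int(fraction * G.number_of_nodes())):
                node = max(G.degree, key=lambda kv: kv[1])[0]  # most socially prominent user
                G.remove_node(node)                            # recursive censoring step
                sizes.append(len(max(nx.connected_components(G), key=len)))
            return sizes

        if __name__ == "__main__":
            G = nx.barabasi_albert_graph(1000, 3, seed=42)     # stand-in for a follower graph
            print(dismantle_by_degree(G)[-1])                  # residual connectivity after censoring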

    Understanding the Roots of Radicalisation on Twitter

    In an increasingly digital world, identifying signs of online extremism sits at the top of the priority list for counter-extremist agencies. Researchers and governments are investing in the creation of advanced information technologies to identify and counter extremism through intelligent large-scale analysis of online data. However, to the best of our knowledge, these technologies are neither based on, nor do they take advantage of, the existing theories and studies of radicalisation. In this paper we propose a computational approach for detecting and predicting the radicalisation influence a user is exposed to, grounded in the notion of 'roots of radicalisation' from social science models. This approach has been applied to analyse and compare the radicalisation level of 112 pro-ISIS vs. 112 "general" Twitter users. Our results show the effectiveness of our proposed algorithms in detecting and predicting radicalisation influence, obtaining up to 0.9 F1 measure for detection and between 0.7 and 0.8 precision for prediction. While this is an initial attempt towards the effective combination of social and computational perspectives, more work is needed to bridge these disciplines and to build on their strengths to target the problem of online radicalisation.
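
    As a rough illustration of how such detection scores are computed: the abstract does not specify the paper's features or models, so the TF-IDF representation, the logistic regression classifier, and the toy corpus below are assumptions, not the authors' method.

        # Illustrative only: placeholder user documents, not data from the study.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import f1_score, precision_score
        from sklearn.model_selection import train_test_split

        texts = ([f"placeholder tweets of pro-ISIS account {i}" for i in range(4)]
                 + [f"placeholder tweets of general account {i}" for i in range(4)])
        labels = [1] * 4 + [0] * 4                     # 1 = exposed to radicalisation influence

        X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25,
                                                  stratify=labels, random_state=0)
        vec = TfidfVectorizer()
        clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X_tr), y_tr)
        pred = clf.predict(vec.transform(X_te))
        print("F1:", f1_score(y_te, pred),
              "precision:", precision_score(y_te, pred, zero_division=0))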

    Social Justice Documentary: Designing for Impact

    Explores current methodologies for assessing social-issue documentary films by combining strategic design and evaluation of multiplatform outreach and impact, including documentaries' role in network- and field-building. Includes six case studies.

    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, how it is used by terrorist groups, and the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism, with new sections on trends in social media platforms and on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth 'SOCMINT') for counter-terrorism. Part 3 sets out a series of SOCMINT techniques; for each technique, the capabilities and insights it offers are described, its validity and reliability are assessed, and its possible applications to counter-terrorism work are explored. Part 4 outlines a number of important legal, ethical and practical considerations when undertaking SOCMINT work.

    Histories of hating

    This roundtable discussion presents a dialogue between digital culture scholars on the seemingly increased presence of hating and hate speech online. Revolving primarily around the recent #GamerGate campaign of intensely misogynistic discourse aimed at women in video games, the discussion suggests that the current moment for hate online needs to be situated historically. From the perspective of intersecting cultural histories of hate speech, discrimination, and networked communication, we interrogate the ontological specificity of online hating before going on to explore potential responses to the harmful consequences of hateful speech. Finally, a research agenda for furthering historical understandings of contemporary online hating is suggested in order to address the urgent need for scholarly interventions into the exclusionary cultures of networked media.

    User Engagement and the Toxicity of Tweets

    Twitter is one of the most popular online micro-blogging and social networking platforms. This platform allows individuals to freely express opinions and interact with others regardless of geographic barriers. However, with the good that online platforms offer also comes the bad. Twitter and other social networking platforms have created new spaces for incivility. With the growing interest in the consequences of uncivil behavior online, understanding how a toxic comment impacts online interactions is imperative. We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations and the relationship between toxicity and user engagement. We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue compared to the non-toxic conversations. However, within toxic conversations, toxicity is positively associated with more individual Twitter users participating in conversations. This suggests that, overall, more visible conversations are more likely to include toxic replies. Additionally, we examine the sequencing of toxic tweets and its impact on conversations. Toxic tweets often occur as the main tweet or as the first reply, and lead to greater overall conversation toxicity. We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation, such that whether the first reply is toxic or non-toxic sets the stage for the overall toxicity of the conversation, following the idea that hate can beget hate.
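
    A small pandas sketch of the conversation-level bookkeeping such an analysis implies; the column names, the toy rows, and the binary per-tweet toxicity flag (e.g. a thresholded classifier score) are assumptions, not the authors' pipeline.

        import pandas as pd

        # Toy data: one row per tweet, with a binary toxicity flag from some classifier.
        tweets = pd.DataFrame({
            "conversation_id": [1, 1, 1, 2, 2, 3],
            "user_id":         ["a", "b", "a", "c", "d", "e"],
            "is_toxic":        [0, 1, 0, 0, 0, 1],
        })

        # A conversation counts as toxic if it contains at least one toxic tweet;
        # compare length and number of unique participants across the two groups.
        conv = tweets.groupby("conversation_id").agg(
            length=("user_id", "size"),
            unique_users=("user_id", "nunique"),
            is_toxic_conv=("is_toxic", "max"),
        )
        print(conv.groupby("is_toxic_conv")[["length", "unique_users"]].mean())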

    'Enclaves of exposure': a conceptual viewpoint to explore cross-ideology exposure on social network sites

    Previous studies indicate mixed results as to whether social media constitutes ideological echo chambers. This inconsistency may arise from a lack of theoretical frames that acknowledge that contextual and technological factors allow varying levels of cross-cutting exposure on social media. This study suggests an alternative theoretical lens, divergence of exposure – the co-existence of user groups with varying degrees of cross-ideology exposure related to the same issue – as a notion that serves as an overarching perspective. We suggest that mediated spaces, such as social media groups, can serve as enclaves of exposure that offer affordances for the formation of user groups irrespective of offline social distinctions. Yet social elements cause some of them to display more cross-ideology exchange than others. To establish this claim empirically, we examine two Facebook page user networks (‘Sri Lanka’s Killing Fields’ and ‘Sri Lankans Hate Channel 4’) that emerged in response to Sri Lanka’s Killing Fields, a controversial documentary broadcast by Channel 4 that accused Sri Lankan armed forces of human rights violations during the final stage of the separatist conflict in Sri Lanka. The results showed that the Facebook group network that supported the claims made by Channel 4 was more diverse in terms of ethnic composition and was neither assortative nor disassortative across ethnicity, suggesting the presence of cross-ethnicity interaction. The pro-allegiant group was largely homogeneous and less active, resembling a passive echo chamber. ‘Social mediation’ repurposes enclaves of exposure to represent polarized ideologies, where some venues display cross-ideology exposure while others resemble an ‘echo chamber’.
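
    The 'neither assortative nor disassortative' finding refers to attribute assortativity, which networkx computes directly; a toy sketch, assuming an interaction graph whose nodes carry an ethnicity attribute (the graph below is invented, not the study's Facebook data):

        import networkx as nx

        # Toy interaction network; each node is a user labelled with an ethnicity.
        G = nx.Graph()
        G.add_nodes_from([(1, {"ethnicity": "Sinhala"}), (2, {"ethnicity": "Tamil"}),
                          (3, {"ethnicity": "Sinhala"}), (4, {"ethnicity": "Tamil"}),
                          (5, {"ethnicity": "Muslim"})])
        G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (1, 3)])

        # Values near +1 indicate an ethnic echo chamber, values near -1 mostly
        # cross-group ties, and values near 0 neither assortative nor disassortative mixing.
        print(nx.attribute_assortativity_coefficient(G, "ethnicity"))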

    HATE CRIMES IN SOCIAL MEDIA: A CRIMINOLOGICAL REVIEW

    Hate crime on social media is a common phenomenon around the world. Hate crime against different races, minorities, and ethnic groups now spreads via social media platforms in the form of hate speech, and the anonymity of internet users and the availability of the internet make this crime easy for offenders to commit. This paper aims to identify the targets of hate crime on social media and its effects on victims. People are victimized by hate crime on social media because of their race, their gender (especially women), and their membership of religious minorities. The study was conducted through secondary data analysis. Hate crime on social media has a devastating effect on victims, both physically and, above all, psychologically: it makes them feel inferior, degrades their self-esteem, and creates a fear of violence. Existing laws against hate crime on social media should be enforced more precisely, and different detection methods should be applied to identify offenders. This paper can help raise awareness of hate crime on social media, a problem that has gone unnoticed by many researchers in our country, and can point a way toward stopping victimization on social media platforms.