
    Solutions to Detect and Analyze Online Radicalization : A Survey

    Online Radicalization (also called Cyber-Terrorism, Extremism, Cyber-Racism or Cyber-Hate) is widespread and has become a major and growing concern to society, governments and law enforcement agencies around the world. Research shows that various platforms on the Internet (low barrier to publish content, anonymity, exposure to millions of users and the potential for very quick and widespread diffusion of a message) such as YouTube (a popular video-sharing website), Twitter (an online micro-blogging service), Facebook (a popular social networking website), online discussion forums and the blogosphere are being misused for malicious intent. Such platforms are being used to form hate groups and racist communities, spread extremist agendas, incite anger or violence, promote radicalization, recruit members and create virtual organizations and communities. Automatic detection of online radicalization is a technically challenging problem because of the vast amount of data, unstructured and noisy user-generated content, dynamically changing content and adversary behavior. Several solutions have been proposed in the literature aiming to combat and counter cyber-hate and cyber-extremism. In this survey, we review solutions to detect and analyze online radicalization. We review 40 papers published at 12 venues from June 2003 to November 2011. We present a novel classification scheme to classify these papers. We analyze these techniques, perform trend analysis, discuss limitations of existing techniques and identify research gaps.

    Jihadi video and auto-radicalisation: evidence from an exploratory YouTube study

    Large amounts of jihadi video content on YouTube, along with the vast array of relational data that can be gathered, open up innovative avenues for exploration of the support base for political violence. This exploratory study analyses the online supporters of jihad-promoting video content on YouTube, focusing on those posting and commenting upon martyr-promoting material from Iraq. Findings suggest that a majority are under 35 years of age and resident outside the region of the Middle East and North Africa (MENA), with the largest percentage of supporters located in the United States. Evidence to support the potential for online radicalisation is presented. Findings relating to newly formed virtual relationships involving a YouTube user with no apparent prior links to jihadists are discussed.

    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    Overview: This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, how it is used by terrorist groups, and the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism. This includes new sections on trends in social media platforms and on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth ‘SOCMINT’) for counter-terrorism. Part 3 sets out a series of SOCMINT techniques; for each technique, the capabilities and insights it offers are considered, its validity and reliability are assessed, and its possible application to counter-terrorism work is explored. Part 4 outlines a number of important legal, ethical and practical considerations when undertaking SOCMINT work.

    Understanding the Roots of Radicalisation on Twitter

    In an increasingly digital world, identifying signs of online extremism sits at the top of the priority list for counter-extremist agencies. Researchers and governments are investing in the creation of advanced information technologies to identify and counter extremism through intelligent large-scale analysis of online data. However, to the best of our knowledge, these technologies are neither based on, nor do they take advantage of, the existing theories and studies of radicalisation. In this paper we propose a computational approach for detecting and predicting the radicalisation influence a user is exposed to, grounded on the notion of ‘roots of radicalisation’ from social science models. This approach has been applied to analyse and compare the radicalisation level of 112 pro-ISIS vs. 112 “general” Twitter users. Our results show the effectiveness of our proposed algorithms in detecting and predicting radicalisation influence, obtaining up to 0.9 F1 measure for detection and between 0.7 and 0.8 precision for prediction. While this is an initial attempt towards the effective combination of social and computational perspectives, more work is needed to bridge these disciplines, and to build on their strengths to target the problem of online radicalisation.
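The F1 and precision figures quoted in this abstract combine true and false positives and negatives in the standard way; a minimal sketch of that computation (the labels below are illustrative, not the paper's data):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall and F1 for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: 1 = "pro-ISIS", 0 = "general" user.
p, r, f = precision_recall_f1([1, 1, 0, 0], [1, 0, 1, 0])
```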

    Extremism Video Detection In Social Media

    Social media has grown to become a fundamental part of our lives over the past two decades and, with its growth, misuse of these platforms for extremist purposes has become common. The wide reach of social media has allowed extremist groups to take advantage of it to spread terrorist propaganda and fear. Therefore, the need for a robust extremism detector for social media is evident. In an attempt to combat this problem, we present techniques to detect various forms of extremism in videos crawled from Twitter, a social media platform for sharing short posts. We build upon existing deep neural networks used for action classification and create a model capable of recognizing certain common extremism types. Additionally, we expand on logo/object detection models for the same purpose. We then run these models against a sample space of roughly 2 million unlabelled videos to test their accuracy.

    Current Approaches to Terrorist and Violent Extremist Content Among the Global Top 50 Online Content-sharing Services

    This report provides an overview of the policies and procedures for addressing terrorist and violent extremist content (TVEC) across the global top 50 online content-sharing services, with a focus on transparency. It finds that only five of the 50 services issue transparency reports specifically about TVEC, and these five services take different approaches in their reports. These services use different definitions of terrorism and violent extremism, report different types of information, use different measurement and estimation methods, and issue reports with varying frequency and on different timetables. The low number of reporting companies and the variation in what, when and how they report make it impossible to get a clear and complete cross-industry perspective on the efficacy of companies’ measures to combat TVEC online and how they may affect human rights. This situation could be improved if more companies issued TVEC transparency reports and included more comparable information.

    An analysis of interactions within and between extreme right communities in social media

    Many extreme right groups have had an online presence for some time through the use of dedicated websites. This has been accompanied by increased activity on social media websites in recent years, which may enable the dissemination of extreme right content to a wider audience. In this paper, we present exploratory analysis of the activity of a selection of such groups on Twitter, using network representations based on reciprocal follower and mention interactions. We find that stable communities of related users are present within individual country networks, where these communities are usually associated with variants of extreme right ideology. Furthermore, we also identify the presence of international relationships between certain groups across geopolitical boundaries.
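A network representation based on reciprocal interactions, as described in this abstract, keeps an undirected edge only where both directed edges exist (a follows b and b follows a). A minimal sketch under that assumption (the user IDs are hypothetical; the paper's actual data collection is not reproduced here):

```python
def reciprocal_edges(directed_edges):
    """Return the undirected edges (a, b) for which both a->b and b->a
    appear in the directed edge list; each reciprocated pair once."""
    edge_set = set(directed_edges)
    kept = set()
    for a, b in directed_edges:
        # Keep the pair only if the reverse edge exists and the pair
        # has not already been recorded in the other orientation.
        if (b, a) in edge_set and (b, a) not in kept:
            kept.add((a, b))
    return sorted(kept)

# Hypothetical follower edges: u1 and u2 follow each other; u1 -> u3 is one-way.
edges = reciprocal_edges([("u1", "u2"), ("u2", "u1"), ("u1", "u3")])
```

Community detection would then be run on this reciprocal graph rather than on the raw directed one, since mutual ties are a stronger signal of association.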

    Assessing the influence and reach of digital activity amongst far-right actors: A comparative evaluation of mainstream and ‘free speech’ social media platforms

    Mainstream social media platforms, including Twitter, Facebook and YouTube, became more rigorous at the end of 2020 in implementing content moderation measures in efforts to combat false information related to the Covid-19 virus and election security in the United States of America. Some users viewed these measures as hostile towards various ideologies, prompting them to adopt alternative platforms for viewing and disseminating content (Abril, 2021; Daly and Fischer, 2021). In 2020, the US Department of Homeland Security identified white supremacist extremists (WSE) as “the most persistent and lethal threat in the homeland” (DHS, 2020, p. 17). WSE disseminate their messages to a broader public by stoking grievances about race, immigration, multiculturalism and police-related policy issues, while also coordinating with networks of similar groups to carry their messages further (DHS, 2020). Current research lacks an understanding of the role these alternative platforms play in shaping, disseminating and amplifying extremist messages. This study utilized socio-computational methods to compare and contrast user behavior on the mainstream platforms YouTube and Twitter with the alternative social media platform Parler during the two months before and after the January 6th, 2021 U.S. Capitol attack. Toxicity assessment, topic stream analysis, social network analysis, and social cyber forensic analysis helped identify key far-right actors and illuminated cyber coordination and mobilization within these social networks. The findings reveal some surprising insights, including that both toxicity and posting activity were far greater on mainstream platforms and that Parler displayed an extremely high rate of cross-media and cross-platform posting.
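One of the measures reported above, the rate of cross-platform posting, can be approximated as the fraction of posts that link out to a domain other than the host platform's. A rough sketch under that assumption (the posts, domain list and exact definition are illustrative, not the study's method):

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def cross_platform_rate(posts, home_domains):
    """Fraction of posts containing at least one link whose domain
    is outside the set of the host platform's own domains."""
    if not posts:
        return 0.0
    cross = 0
    for text in posts:
        domains = {
            urlparse(u).netloc.lower().removeprefix("www.")
            for u in URL_RE.findall(text)
        }
        if domains - set(home_domains):  # any link leaves the platform
            cross += 1
    return cross / len(posts)

# Hypothetical Parler posts: one links to YouTube, one has no link,
# one links back to Parler itself.
rate = cross_platform_rate(
    ["see https://youtube.com/watch?v=x", "no links here", "https://parler.com/post/1"],
    {"parler.com"},
)
```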