
    Solutions to Detect and Analyze Online Radicalization: A Survey

    Online Radicalization (also called Cyber-Terrorism, Extremism, Cyber-Racism or Cyber-Hate) is widespread and has become a major and growing concern to society, governments and law enforcement agencies around the world. Research shows that various platforms on the Internet (offering a low barrier to publishing content, anonymity, exposure to millions of users and the potential for very quick and widespread diffusion of a message) such as YouTube (a popular video sharing website), Twitter (an online micro-blogging service), Facebook (a popular social networking website), online discussion forums and the blogosphere are being misused for malicious intent. Such platforms are being used to form hate groups and racist communities, spread extremist agendas, incite anger or violence, promote radicalization, recruit members and create virtual organizations and communities. Automatic detection of online radicalization is a technically challenging problem because of the vast amount of data, the unstructured and noisy user-generated content, dynamically changing content and adversarial behavior. Several solutions have been proposed in the literature aiming to combat and counter cyber-hate and cyber-extremism. In this survey, we review solutions to detect and analyze online radicalization. We review 40 papers published at 12 venues from June 2003 to November 2011. We present a novel classification scheme to classify these papers. We analyze these techniques, perform trend analysis, discuss the limitations of existing techniques and identify research gaps.

    A Literature Review on the Application of Social Media Networks and Information Security Intelligence to Identify Potential Online Radicalization

    A review of prior research shows that various social media platforms on the Internet, such as Twitter, Tumblr, Facebook, YouTube, blogs and discussion forums, are misused by extremist groups to spread their beliefs and ideologies, promote radicalization, recruit members and create online virtual communities. Over the past ten years, the use of social media network analysis to predict and identify online radicalization has attracted the attention of several researchers. Several algorithms, techniques and tools have been proposed in the existing literature to counter and combat cyber-extremists. In this paper, the authors conduct a literature review of the existing techniques and perform a comprehensive analysis to understand the state of the art, trends and research gaps. The paper characterizes, classifies and meta-analyzes dozens of papers to gain a better understanding of the literature on detecting extremists through social media intelligence.

    State of the art 2015: a literature review of social media intelligence capabilities for counter-terrorism

    Overview This paper is a review of how information and insight can be drawn from open social media sources. It focuses on the specific research techniques that have emerged, the capabilities they provide, the possible insights they offer, and the ethical and legal questions they raise. These techniques are considered relevant and valuable in so far as they can help to maintain public safety by preventing terrorism, preparing for it, protecting the public from it and pursuing its perpetrators. The report also considers how far this can be achieved against the backdrop of radically changing technology and public attitudes towards surveillance. This is an updated version of a 2013 report on the same subject, State of the Art. Since 2013, there have been significant changes in social media, how it is used by terrorist groups, and the methods being developed to make sense of it. The paper is structured as follows: Part 1 is an overview of social media use, focused on how it is used by groups of interest to those involved in counter-terrorism. This includes new sections on trends of social media platforms, and a new section on Islamic State (IS). Part 2 provides an introduction to the key approaches of social media intelligence (henceforth ‘SOCMINT’) for counter-terrorism. Part 3 sets out a series of SOCMINT techniques. For each technique, its capabilities and possible insights are described, its validity and reliability are assessed, and its possible application to counter-terrorism work is explored. Part 4 outlines a number of important legal, ethical and practical considerations when undertaking SOCMINT work.

    A Bibliometric Analysis of Online Extremism Detection

    The Internet has become an essential part of modern communication. People share ideas, thoughts and beliefs easily using social media. This sharing of ideas has raised a serious problem: the spread of radicalized extremist ideas. Various extremist organizations use social media as a propaganda tool, actively radicalizing and recruiting youths by sharing inciting material, and using it to influence people to carry out lone-wolf attacks. Social media platforms employ various strategies to identify and remove extremist content, but due to the sheer amount of data and loopholes in detection strategies, extremism remains undetected for significant periods. There is thus a need for accurate detection of extremism on social media. This study provides a bibliometric analysis and systematic mapping of the existing literature on radicalisation and extremism detection. Machine Learning and Deep Learning articles on extremism detection are analyzed using the SCOPUS database, with tools such as ScienceScape and VOS Viewer. It is observed that the current literature on extremism detection is focused on particular ideologies, and that although few researchers work in this area, it has gained increasing attention in recent years.
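The keyword maps produced by tools such as VOS Viewer rest on a simple co-occurrence counting step that can be sketched directly. The records below are hypothetical stand-ins for articles exported from a SCOPUS search, not data from the study:

```python
from collections import Counter
from itertools import combinations

# Hypothetical records standing in for articles exported from a SCOPUS search;
# each entry lists one article's author keywords.
records = [
    ["extremism detection", "deep learning", "social media"],
    ["extremism detection", "machine learning"],
    ["radicalisation", "social media", "machine learning"],
]

# Count how often each pair of keywords appears in the same article -- the
# co-occurrence relation that tools like VOS Viewer render as a keyword map.
cooccurrence = Counter()
for keywords in records:
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("machine learning", "social media")])  # -> 1
```

Sorting each keyword set first ensures a pair is counted under one canonical key regardless of the order keywords appear in a record.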

    Improving hate speech detection using machine and deep learning techniques: A preliminary study

    The increasing use of social media and information sharing has given major benefits to humanity. However, it has also given rise to a variety of challenges, including the spreading and sharing of hate speech messages. To address this emerging issue, recent studies have employed a variety of feature engineering techniques together with machine learning or deep learning algorithms to automatically detect hate speech messages on different datasets. However, most studies classify hate-speech-related messages using existing feature engineering approaches and suffer from low classification accuracy, because those approaches are affected by the word-order and word-context problems. In this research, identifying hateful content in the latest tweets from Twitter and classifying it into several categories is studied. The categories identified are: Ethnicity, Nationality, Religion, Gender, Sexual Orientation, Disability and Other. These categories are further refined to identify the targets of hate speech; for example, Black, White and Asian belong to the Ethnicity category, while Muslims, Jews and Christians belong to the Religion category. An evaluation is performed between the deep learning model LSTM and traditional machine learning models, including Linear SVC, Logistic Regression, Random Forest and Multinomial Naïve Bayes, measuring their accuracy and precision on live tweets extracted from Twitter, which serve as the test dataset.
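Of the traditional baselines the study compares against an LSTM, Multinomial Naïve Bayes is simple enough to sketch from scratch over a bag-of-words representation. The toy tweets and labels below are invented for illustration and are not drawn from the study's dataset:

```python
import math
from collections import Counter, defaultdict

# Toy labelled messages (invented, not from the study's dataset).
train = [
    ("they should all be banned from this country", "hate"),
    ("go back to where you came from", "hate"),
    ("what a lovely sunny day today", "clean"),
    ("really enjoyed the football match", "clean"),
]

# Multinomial Naive Bayes with Laplace smoothing over word counts.
class_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
vocab = set()
for text, label in train:
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def predict(text):
    """Return the most probable class under the Naive Bayes model."""
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / len(train))
        for word in text.split():
            # Laplace smoothing avoids zero probability for unseen words.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("they should be banned"))  # -> hate
```

This bag-of-words model illustrates exactly the word-order limitation the abstract mentions: it scores "they should be banned" and "banned they should be" identically, which is what motivates sequence models such as LSTM.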

    "How over is it?" Understanding the Incel Community on YouTube

    YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform has also come under fire for hosting inappropriate, toxic, and hateful content. One community that has often been linked to sharing and publishing hateful and misogynistic content is the Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues. In this paper, we set out to analyze the Incel community on YouTube by focusing on this community's evolution over the last decade and understanding whether YouTube's recommendation algorithm steers users towards Incel-related videos. We collect videos shared on Incel communities within Reddit and perform a data-driven characterization of the content posted on YouTube. Among other things, we find that the Incel community on YouTube is gaining traction and that, during the last decade, the number of Incel-related videos and comments rose substantially. We also find that users have a 6.3% chance of being suggested an Incel-related video by YouTube's recommendation algorithm within five hops when starting from a non-Incel-related video. Overall, our findings paint an alarming picture of online radicalization: not only is Incel activity increasing over time, but platforms may also play an active role in steering users towards such extreme content.
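A "chance of being suggested an Incel-related video within five hops" can be estimated by simulating random walks over a recommendation graph. This minimal sketch uses an invented toy graph and node names, not the paper's crawled data:

```python
import random

# Toy recommendation graph: each video maps to its recommended videos.
# Names and edges are invented; "incel_1" stands for an Incel-related video.
recs = {
    "start": ["music_1", "news_1"],
    "music_1": ["music_2", "news_1"],
    "music_2": ["incel_1", "music_1"],
    "news_1": ["news_2", "music_1"],
    "news_2": ["news_1", "music_2"],
    "incel_1": ["incel_1"],
}

def hits_within(graph, start, targets, hops, walks=10_000, seed=0):
    """Fraction of random walks that reach a target set within `hops` steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(walks):
        node = start
        for _ in range(hops):
            node = rng.choice(graph[node])  # follow a random recommendation
            if node in targets:
                hits += 1
                break
    return hits / walks

p = hits_within(recs, "start", {"incel_1"}, hops=5)
print(round(p, 2))
```

With enough walks, the hit fraction converges to the true probability of encountering a target video within the hop budget under uniformly random clicking.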

    What are the roles of the Internet in terrorism? Measuring online behaviours of convicted UK terrorists

    Using a unique dataset of 227 convicted UK-based terrorists, this report fills a large gap in the existing literature. Using descriptive statistics, we first outline the degree to which various online activities related to radicalisation were present within the sample. The results illustrate the variance in behaviours often attributed to ‘online radicalisation’. Second, we conduct a smallest-space analysis to illustrate two clusters of commonly co-occurring behaviours that delineate general online behaviours from those directly associated with attack planning. Third, we conduct a series of bivariate and multivariate analyses to examine whether those who interact virtually with like-minded individuals or learn online exhibit markedly different experiences (e.g. radicalisation, event preparation, attack outcomes) than those who do not.
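Smallest-space analysis starts from a matrix of co-occurrence similarities between behaviours, which the spatial layout then approximates. A minimal sketch of that first step, using invented binary indicators rather than the report's coded variables:

```python
# Hypothetical binary indicators (1 = behaviour present) for a few offenders;
# the column names are invented stand-ins for the coded online behaviours.
behaviours = ["interacted_online", "learned_online", "planned_attack"]
cases = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 1, 1],
]

def jaccard(i, j):
    """Co-occurrence similarity between two behaviours across all cases."""
    both = sum(1 for row in cases if row[i] and row[j])
    either = sum(1 for row in cases if row[i] or row[j])
    return both / either if either else 0.0

n = len(behaviours)
sim = [[jaccard(i, j) for j in range(n)] for i in range(n)]
print(sim[0][1])  # interacted_online vs learned_online -> 0.5
```

Smallest-space analysis would then place each behaviour as a point in low-dimensional space so that more similar pairs sit closer together, making co-occurring clusters visible.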

    “You Know What to Do”: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks

    Video sharing platforms like YouTube are increasingly targeted by aggression and hate attacks. Prior work has shown how these attacks often take place as a result of "raids," i.e., organized efforts by ad-hoc mobs coordinating from third-party communities. Despite the increasing relevance of this phenomenon, however, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de facto solution is to reactively rely on user reports and human moderation. In this paper, we propose an automated solution to identify YouTube videos that are likely to be targeted by coordinated harassers from fringe communities like 4chan. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of videos that were targeted by raids. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided, with very good results (AUC up to 94%). Overall, our work provides an important first step towards deploying proactive systems to detect and mitigate coordinated hate attacks on platforms like YouTube.
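One common way to combine per-modality classifiers (metadata, transcripts, thumbnails) into an ensemble is soft voting, i.e. averaging each model's predicted probability. The stub scorers below are invented placeholders, not the paper's trained models:

```python
# Stub per-modality scorers, each returning an invented probability that a
# video will be raided; real systems would use trained classifiers here.
def metadata_score(video):
    return 0.8 if "4chan" in video["comments"] else 0.2

def transcript_score(video):
    return 0.7 if "politics" in video["transcript"] else 0.3

def thumbnail_score(video):
    return 0.5  # stub: no visual model in this sketch

def ensemble_score(video, weights=(1/3, 1/3, 1/3)):
    """Soft-voting ensemble: weighted average of per-modality probabilities."""
    scores = (metadata_score(video), transcript_score(video), thumbnail_score(video))
    return sum(w * s for w, s in zip(weights, scores))

video = {"comments": "linked from a 4chan thread", "transcript": "politics rant"}
print(round(ensemble_score(video), 2))  # -> 0.67
```

Weighting the modalities differently (e.g. trusting metadata more than thumbnails) only requires changing the `weights` tuple, which is one reason soft voting is a convenient ensembling baseline.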

    Investigating the cross-platform behaviours of online hate groups

    The past few decades have established how digital technologies and platforms have provided an effective medium for spreading hateful content. Despite efforts from law-enforcement agencies and platform developers to remove or limit such content, online hate ideologies and extremist narratives are still being linked to several catastrophic consequences around the world. The concept of online hate is still considered a complex phenomenon, with its definition evolving across several theoretical paradigms and disciplines, and spanning multiple forms of victimisation. Due to this complexity, research into online hate is fragmented throughout numerous disciplines, including computational social science. Previous research has demonstrated how online hate thrives globally through self-organised, scalable clusters that interconnect to form robust networks spread across multiple social-media platforms, countries, and languages. Although several extensive approaches and methods have been proposed in previous studies for the analysis of online hate, limited research has investigated how hateful behaviours and content compare and relate across different online platforms. This thesis aimed to address these limitations by developing a cross-platform analysis framework for online-hate researchers to gain a clearer understanding of the dynamics of the global hate ecosystem. More specifically, designing this framework involved examining the main functionalities of existing online-hate analysis frameworks, and the extent to which they address cross-platform hate. The strengths and limitations of these approaches then informed the functional requirements of the cross-platform analysis framework.
    To demonstrate how the framework can provide novel insights into online-hate research, this thesis also details its application to various case studies, including online hate from white-supremacy-supporting users and environments spread during the 2020 US election and the COVID-19 pandemic. This comprises a comparative analysis of hateful content in terms of the major topics of discussion and psycho-linguistic properties across different types of online platforms using natural language processing techniques. Additionally, the framework is used to explore networks of shared content, particularly through the posting of URLs, by harnessing social-network analysis methods. Finally, the cross-platform analysis framework is validated against a list of validation criteria to evaluate its practicality in investigating hateful content and providing novel insights into the field of online hate. These findings can be used to develop more effective analysis tools for online-hate researchers and law-enforcement agencies.
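The URL-sharing networks described above hinge on mapping shared link domains to the platforms that post them. A minimal sketch with invented posts and placeholder URLs:

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical (platform, URL) posts; the platforms and URLs are placeholders,
# not data from the thesis.
posts = [
    ("gab", "https://example-news.com/story1"),
    ("telegram", "https://example-news.com/story2"),
    ("gab", "https://video-site.com/clip"),
]

# Map each linked domain to the set of platforms that posted it; domains shared
# by several platforms are the cross-platform ties the network analysis studies.
shared = defaultdict(set)
for platform, url in posts:
    shared[urlparse(url).netloc].add(platform)

cross_platform = {domain for domain, plats in shared.items() if len(plats) > 1}
print(cross_platform)  # -> {'example-news.com'}
```

Extending this to a weighted bipartite graph (platforms on one side, domains on the other) is straightforward and makes the structure amenable to standard social-network analysis measures.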