
    Solutions to Detect and Analyze Online Radicalization : A Survey

    Online Radicalization (also called Cyber-Terrorism, Cyber-Extremism, Cyber-Racism or Cyber-Hate) is widespread and has become a major and growing concern for society, governments and law enforcement agencies around the world. Research shows that various platforms on the Internet, which offer a low barrier to publishing content, allow anonymity, provide exposure to millions of users and enable very quick and widespread diffusion of a message, are being misused for malicious intent. These include YouTube (a popular video-sharing website), Twitter (an online micro-blogging service), Facebook (a popular social networking website), online discussion forums and the blogosphere. Such platforms are being used to form hate groups and racist communities, spread extremist agendas, incite anger or violence, promote radicalization, recruit members and create virtual organizations and communities. Automatic detection of online radicalization is a technically challenging problem because of the vast amount of data, the unstructured and noisy user-generated content, dynamically changing content and adversarial behavior. Several solutions have been proposed in the literature to combat and counter cyber-hate and cyber-extremism. In this survey, we review solutions to detect and analyze online radicalization. We review 40 papers published at 12 venues from June 2003 to November 2011 and present a novel classification scheme for these papers. We analyze these techniques, perform trend analysis, discuss the limitations of existing techniques and identify research gaps.

    An analysis of interactions within and between extreme right communities in social media

    Many extreme right groups have had an online presence for some time through the use of dedicated websites. This has been accompanied by increased activity on social media websites in recent years, which may enable the dissemination of extreme right content to a wider audience. In this paper, we present an exploratory analysis of the activity of a selection of such groups on Twitter, using network representations based on reciprocal follower and mention interactions. We find that stable communities of related users are present within individual country networks, where these communities are usually associated with variants of extreme right ideology. Furthermore, we also identify the presence of international relationships between certain groups across geopolitical boundaries.
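    The reciprocal-interaction networks this abstract describes keep only ties that run in both directions. A minimal sketch of that filtering step (not the paper's code; the account names and follow edges are invented):

```python
# Derive a reciprocal-interaction network from directed follower edges:
# keep only pairs (a, b) where both a->b and b->a exist.

def reciprocal_edges(directed_edges):
    """Return the set of unordered pairs with ties in both directions."""
    edge_set = set(directed_edges)
    return {frozenset((a, b)) for (a, b) in edge_set
            if (b, a) in edge_set and a != b}

# Invented example: a<->b and c<->d are reciprocal, a->c is not.
follows = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "c")]
print(sorted(sorted(pair) for pair in reciprocal_edges(follows)))
# → [['a', 'b'], ['c', 'd']]
```

    Community detection would then run on this undirected reciprocal graph rather than on the raw directed one.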

    "how over is it?" Understanding the Incel Community on YouTube

    YouTube is by far the largest host of user-generated video content worldwide. Alas, the platform has also come under fire for hosting inappropriate, toxic, and hateful content. One community that has often been linked to sharing and publishing hateful and misogynistic content is the Involuntary Celibates (Incels), a loosely defined movement ostensibly focusing on men's issues. In this paper, we set out to analyze the Incel community on YouTube by focusing on this community's evolution over the last decade and understanding whether YouTube's recommendation algorithm steers users towards Incel-related videos. We collect videos shared on Incel communities within Reddit and perform a data-driven characterization of the content posted on YouTube. Among other things, we find that the Incel community on YouTube is gaining traction and that, during the last decade, the number of Incel-related videos and comments rose substantially. We also find that users have a 6.3% chance of being suggested an Incel-related video by YouTube's recommendation algorithm within five hops when starting from a non-Incel-related video. Overall, our findings paint an alarming picture of online radicalization: not only is Incel activity increasing over time, but platforms may also play an active role in steering users towards such extreme content.
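    The "chance of being suggested a video within five hops" measurement can be illustrated with random walks over a recommendation graph. The sketch below is not the paper's methodology; the toy graph, labels, and walk counts are all invented for illustration:

```python
import random

# Estimate the probability that a random walk over a recommendation graph
# reaches a flagged video within a fixed number of hops.

recommendations = {            # video -> list of recommended videos (invented)
    "start": ["v1", "v2"],
    "v1": ["v2", "flagged"],
    "v2": ["v1", "start"],
    "flagged": ["v1"],
}

def hit_probability(graph, start, target, hops, walks, seed=0):
    """Fraction of `walks` random walks that reach `target` within `hops` steps."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(walks):
        node = start
        for _ in range(hops):
            node = rng.choice(graph[node])
            if node == target:
                hits += 1
                break
    return hits / walks

p = hit_probability(recommendations, "start", "flagged", hops=5, walks=10_000)
print(f"estimated hit probability within 5 hops: {p:.2f}")
```

    On real data the graph would come from crawled recommendation lists and the target set from labeled Incel-related videos.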

    A Bibliometric Analysis of Online Extremism Detection

    The Internet has become an essential part of modern communication. People share ideas, thoughts, and beliefs easily using social media. This sharing of ideas has raised a serious problem: the spread of radicalized extremist ideas. Various extremist organizations use social media as a propaganda tool, actively radicalizing and recruiting youths by sharing inciting material, and using these platforms to influence people to carry out lone-wolf attacks. Social media platforms employ various strategies to identify and remove extremist content, but due to the sheer amount of data and loopholes in detection strategies, extremist content often remains undetected for a significant time. Thus, there is a need for accurate detection of extremism on social media. This study provides a bibliometric analysis and systematic mapping of the existing literature on radicalisation and extremism detection. Machine learning and deep learning articles on extremism detection are considered, using the Scopus database with tools such as ScienceScape and VOSviewer. It is observed that the current literature on extremism detection focuses on particular ideologies. Although relatively few researchers work in the extremism detection area, interest among researchers has grown in recent years.
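    The keyword maps produced by tools like VOSviewer rest on counting how often terms co-occur across article records. A minimal sketch of that counting step, using invented records (not data from this study):

```python
from collections import Counter
from itertools import combinations

# Count keyword co-occurrences across article keyword lists, the raw
# statistic behind bibliometric co-occurrence maps.

records = [                                   # invented article keyword lists
    ["extremism", "social media", "deep learning"],
    ["extremism", "machine learning", "social media"],
    ["radicalisation", "social media"],
]

cooccurrence = Counter()
for keywords in records:
    # sorted() gives each unordered pair a single canonical key
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("extremism", "social media")])  # → 2
```

    Mapping tools then lay out the keywords as nodes, with edge weights taken from these pair counts.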

    “You Know What to Do”: Proactive Detection of YouTube Videos Targeted by Coordinated Hate Attacks

    Video sharing platforms like YouTube are increasingly targeted by aggression and hate attacks. Prior work has shown how these attacks often take place as a result of "raids," i.e., organized efforts by ad-hoc mobs coordinating from third-party communities. Despite the increasing relevance of this phenomenon, however, online services often lack effective countermeasures to mitigate it. Unlike well-studied problems like spam and phishing, coordinated aggressive behavior both targets and is perpetrated by humans, making defense mechanisms that look for automated activity unsuitable. Therefore, the de facto solution is to reactively rely on user reports and human moderation. In this paper, we propose an automated solution to identify YouTube videos that are likely to be targeted by coordinated harassers from fringe communities like 4chan. First, we characterize and model YouTube videos along several axes (metadata, audio transcripts, thumbnails) based on a ground truth dataset of videos that were targeted by raids. Then, we use an ensemble of classifiers to determine the likelihood that a video will be raided, with very good results (AUC up to 94%). Overall, our work provides an important first step towards deploying proactive systems to detect and mitigate coordinated hate attacks on platforms like YouTube.
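    The AUC figure the abstract reports is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch of computing AUC directly from that rank definition, with invented classifier scores (not the paper's data):

```python
# Compute AUC via the Mann-Whitney rank formulation: the probability that
# a random positive example outranks a random negative example.

def auc(labels, scores):
    """AUC for binary labels (1 = positive) and real-valued scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count wins; ties between a positive and a negative count as half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented scores for raided (1) vs. non-raided (0) videos.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(f"AUC = {auc(labels, scores):.3f}")  # → AUC = 0.889
```

    This O(P·N) formulation is fine for illustration; library implementations sort the scores once instead.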

    Down the (white) rabbit hole: the extreme right and online recommender systems

    In addition to hosting user-generated video content, YouTube provides recommendation services, where sets of related and recommended videos are presented to users based on factors such as co-visitation count and prior viewing history. This article is specifically concerned with extreme right (ER) video content, portions of which contravene hate laws and are thus illegal in certain countries, that YouTube recommends to some users. We develop a categorization of this content based on various schema found in a selection of academic literature on the ER, which is then used to demonstrate the political articulations of YouTube’s recommender system, particularly the narrowing of the range of content to which users are exposed and the potential impacts of this. For this purpose, we use two data sets of English and German language ER YouTube channels, along with channels suggested by YouTube’s related video service. A process is observable whereby users accessing an ER YouTube video are likely to be recommended further ER content, leading to immersion in an ideological bubble in just a few short clicks. The evidence presented in this article supports a shift away from the almost exclusive focus on users as content creators and protagonists in extremist cyberspaces, to also consider online platform providers as important actors in these same spaces.

    A Literature Review on Applying Social Media Network Analysis and Information Security Intelligence to Identify Potential Online Radicalization

    A review of prior research literature shows that various social media platforms on the Internet, such as Twitter, Tumblr, Facebook, YouTube, blogs and discussion forums, are misused by extremist groups to spread their beliefs and ideologies, promote radicalization, recruit members and create online virtual communities. Over the last ten years, the use of social media network analysis to predict and identify online radicalization has attracted the attention of several researchers. Several algorithms, techniques and tools have been proposed in the existing literature to counter and combat cyber-extremism. In this paper, the authors review these existing techniques and perform a comprehensive analysis to understand the state of the art, trends and research gaps. The paper characterizes, classifies and performs a meta-analysis of dozens of journal articles to obtain a better understanding of the literature on detecting extremism through social media intelligence.

    Automated Identification and Reconstruction of YouTube Video Access

    YouTube is one of the most popular video-sharing websites on the Internet, allowing users to upload, view and share videos with other users all over the world. YouTube contains many different types of videos, from homemade sketches to instructional and educational tutorials, and therefore attracts a wide variety of users with different interests. The majority of YouTube visits are perfectly innocent, but there may be circumstances where YouTube video access is related to a digital investigation, e.g. viewing instructional videos on how to perform potentially unlawful actions or how to make unlawful articles. When a user accesses a YouTube video through their browser, certain digital artefacts relating to that video access may be left on their system in a number of different locations. However, there has been very little research published in the area of YouTube video artefacts. The paper discusses the identification of some of the artefacts that are left by the Internet Explorer web browser on a Windows system after accessing a YouTube video. The information that can be recovered from these artefacts can include the video ID, the video name and possibly a cached copy of the video itself. In addition to identifying the artefacts that are left, the paper also investigates how these artefacts can be brought together and analysed to infer specifics about the user’s interaction with the YouTube website, for example whether the video was searched for or visited as a result of a suggestion after viewing a previous video. The result of this research is a Python-based prototype that will analyse a mounted disk image, automatically extract the artefacts related to YouTube visits and produce a report summarising the YouTube video accesses on a system.
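    One step the abstract describes, recovering video IDs from cached browser records, can be sketched as a URL-parsing pass. This is not the paper's prototype; the cached URLs below are invented examples, and real cache parsing would first extract URLs from the browser's on-disk cache format:

```python
import re

# Recover YouTube video IDs from URLs found in browser cache records.
# Standard watch URLs carry the 11-character ID in the "v" query parameter.

VIDEO_ID = re.compile(r"[?&]v=([A-Za-z0-9_-]{11})")

def extract_video_ids(cached_urls):
    """Return video IDs found in watch-page URLs, in order of appearance."""
    ids = []
    for url in cached_urls:
        m = VIDEO_ID.search(url)
        if m:
            ids.append(m.group(1))
    return ids

cache = [                                      # invented cache entries
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",
    "https://www.youtube.com/results?search_query=tutorial",
    "https://www.youtube.com/watch?v=abcdefghijk&t=42s",
]
print(extract_video_ids(cache))  # → ['dQw4w9WgXcQ', 'abcdefghijk']
```

    Distinguishing a searched-for visit from a suggested one, as the paper does, would additionally require correlating these entries with search-results URLs and referrer data in the cache.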