
    Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter

    State-sponsored organizations are increasingly linked to efforts to exploit social media for information warfare and to manipulate public opinion. Typically, their activities rely on a number of social network accounts they control, a.k.a. trolls, that post and interact with other users disguised as "regular" users. These accounts often use images and memes, along with textual content, to increase the engagement and credibility of their posts. In this paper, we present the first study of images shared by state-sponsored accounts, analyzing a ground-truth dataset of 1.8M images posted to Twitter by accounts controlled by the Russian Internet Research Agency. First, we analyze the content of the images as well as their posting activity. Then, using Hawkes Processes, we quantify their influence on popular Web communities like Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab with respect to the dissemination of images. We find that the extensive image posting activity of Russian trolls coincides with real-world events (e.g., the Unite the Right rally in Charlottesville), and we shed light on their targets as well as the content disseminated via images. Finally, we show that the trolls were more effective in disseminating politics-related imagery than other images.
    Comment: To appear at the 14th International AAAI Conference on Web and Social Media (ICWSM 2020). Please cite accordingly.
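
    Since this entry and a later one both quantify influence with Hawkes Processes, a minimal sketch of the core object may help: the self-exciting conditional intensity with an exponential kernel, which is the standard formulation. All parameter values below (mu, alpha, beta) are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process with an
    exponential kernel: lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i)).
    Each past event (e.g., an image being shared) temporarily raises the
    expected rate of future events, and the effect decays at rate beta."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Toy example: a burst of shares around t = 1 raises the intensity
# shortly afterwards, then the effect decays back toward mu.
events = np.array([1.0, 1.1, 1.2])
for t in (0.5, 1.3, 3.0):
    print(f"lambda({t}) = {hawkes_intensity(t, events, mu=0.1, alpha=0.8, beta=2.0):.3f}")
```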

    Who let the trolls out? Towards understanding state-sponsored trolls

    Recent evidence has emerged linking coordinated campaigns by state-sponsored actors to the manipulation of public opinion on the Web. Campaigns revolving around major political events are enacted via mission-focused "trolls." While trolls are involved in spreading disinformation on social media, there is little understanding of how they operate, what type of content they disseminate, how their strategies evolve over time, and how they influence the Web's information ecosystem. In this paper, we begin to address this gap by analyzing 10M posts by 5.5K Twitter and Reddit users identified as Russian and Iranian state-sponsored trolls. We compare the behavior of each group of state-sponsored trolls with a focus on how their strategies change over time, the different campaigns they embark on, and the differences between the trolls operated by Russia and Iran. Among other things, we find: 1) that Russian trolls were pro-Trump while Iranian trolls were anti-Trump; 2) evidence that the campaigns undertaken by such actors are influenced by real-world events; and 3) that the behavior of such actors is not consistent over time, hence detection is not straightforward. Using Hawkes Processes, we quantify the influence these accounts have on pushing URLs on four platforms: Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab. In general, Russian trolls were more influential and efficient in pushing URLs to all the other platforms, with the exception of /pol/, where the Iranians were more influential. Finally, we release our source code to ensure the reproducibility of our results and to encourage other researchers to work on understanding other emerging kinds of state-sponsored troll accounts on Twitter.
    https://arxiv.org/pdf/1811.03130.pdf (Accepted manuscript)
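
    Building on the intensity sketch above, the influence one platform "pushes" onto another can be summarized by integrating the fitted cross-excitation kernels: with kernel alpha[i][j] * exp(-beta * t), the expected number of events on platform j directly triggered by a single event on platform i is alpha[i][j] / beta. The matrix below uses made-up placeholder values, not the parameters fitted in the paper.

```python
import numpy as np

platforms = ["Troll accounts", "Twitter", "Reddit", "/pol/", "Gab"]
beta = 2.0  # shared exponential-kernel decay rate (placeholder)

# alpha[i, j]: excitation from platform i to platform j (placeholder values).
alpha = np.array([
    [0.00, 0.06, 0.03, 0.01, 0.04],
    [0.02, 0.00, 0.02, 0.01, 0.01],
    [0.01, 0.02, 0.00, 0.01, 0.01],
    [0.01, 0.01, 0.01, 0.00, 0.02],
    [0.01, 0.01, 0.01, 0.01, 0.00],
])

# Integral of alpha * exp(-beta * t) over t >= 0 is alpha / beta: the
# expected number of events on j directly caused by one event on i.
influence = alpha / beta
for j, name in enumerate(platforms[1:], start=1):
    print(f"one troll URL share -> {influence[0, j]:.3f} expected shares on {name}")
```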

    The International Law of Rabble Rousing

    This Essay offers an account of rabble-rousing, a novel information warfare operation worthy of its own classification, and explores the extent to which contemporary international law and available technologies are capable of addressing the threat that this tactic poses to public world order. This Essay proceeds as follows. Part I provides a definition of rabble-rousing strategies, highlighting the ways in which they are distinct from other forms of information warfare. It then proceeds to highlight the dangers associated with the practice. Part II moves to examine whether rabble-rousing can be recognized as an internationally wrongful act under the traditional paradigms of public international law. It looks at the prohibitions on coercive intervention, transboundary harm, and subversive propaganda, as well as the principle of sovereignty and the human rights to self-determination and freedom of expression, in order to determine the legality of rabble-rousing operations under international law. This Part highlights the limits of traditional interpretations of the above legal regimes and proposes how certain adaptations to the law could better capture the examined phenomenon. Part III assesses current technological capabilities and proposes policy solutions that will be necessary for States to practically defend against this activity regardless of whether wrongfulness can be established. Part IV concludes the argument. Ultimately, we hope that this Essay will serve as a call to action for scholars and practitioners to expand their existing taxonomies of the informational theater of conflict and to promote nuanced solutions that take all considerations into account.

    Understanding the Role of Overt and Covert Online Communication in Information Operations

    This thesis combines regression, sentiment, and social network analysis to explore how Russian online media agencies, both overt and covert, affect online communication on Twitter when North Atlantic Treaty Organization (NATO) exercises occur. It explores the relations between the average sentiment of tweets and the activities of Russia's overt and covert online media agencies. The data sources for this research are the Naval Postgraduate School's licensed Twitter archive and open-source information about the NATO exercise timeline. Publicly available lexicons of positive and negative terms were used to measure the sentiment of tweets. The thesis finds that Russia's covert media agencies, such as the Internet Research Agency, have a greater impact on, and likelihood of changing, the sentiment of network users about NATO than do the overt Russian media outlets. The sentiment during NATO exercises becomes more negative as the activity of Russian media organizations, whether covert or overt, increases. These conclusions suggest that close tracking and examination of the activities of Russia's online media agencies provide the necessary base for detecting ongoing information operations. Further refinement of the analytical methods could deliver a more comprehensive outcome, for example by employing machine learning or natural language processing algorithms to increase the precision of sentiment measurement and enable timely identification of troll accounts.
    Podpolkovnik, Bulgarian Air Force
    Approved for public release. Distribution is unlimited.
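
    The lexicon-based sentiment measurement described above can be sketched in a few lines. The word lists here are tiny stand-ins for the publicly available lexicons the thesis mentions, and the tokenizer is deliberately naive; this is a minimal illustration, not the thesis's actual pipeline.

```python
import string

# Tiny stand-in lexicons; the thesis used publicly available word lists.
POSITIVE = {"good", "great", "strong", "support", "success"}
NEGATIVE = {"bad", "threat", "fail", "aggression", "weak"}

def lexicon_sentiment(tweet: str) -> float:
    """(#positive - #negative) / #tokens, in [-1, 1]; 0.0 for empty input."""
    tokens = [t.strip(string.punctuation) for t in tweet.lower().split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

print(lexicon_sentiment("Strong support for the NATO exercise"))    # > 0
print(lexicon_sentiment("The exercise is a threat and will fail"))  # < 0
```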

    Characterizing and Detecting State-Sponsored Troll Activity on Social Media

    The detection of state-sponsored trolls acting in information operations is an unsolved and critical challenge for the research community, with repercussions that go beyond the online realm. In this paper, we propose a novel AI-based solution for the detection of state-sponsored troll accounts, which consists of two steps. The first step classifies trajectories of accounts' online activities as belonging to either a state-sponsored troll or an organic user account. In the second step, we exploit the classified trajectories to compute a metric, namely the "troll score", which allows us to quantify the extent to which an account behaves like a state-sponsored troll. As a case study, we consider the troll accounts involved in the Russian interference campaign during the 2016 US Presidential election, identified as Russian trolls by the US Congress. Experimental results show that our approach identifies accounts' trajectories with an AUC close to 99% and, accordingly, classifies Russian trolls and organic users with an AUC of 97%. Finally, we evaluate whether the proposed solution can be generalized to different contexts (e.g., discussions about Covid-19) and to generic misbehaving users, showing promising results that we will expand on in future work.
    Comment: 15 pages
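
    The abstract does not spell out how the troll score aggregates the per-trajectory classifications, so the following is one plausible reading, labeled as an assumption: the score as the fraction of an account's trajectories that the first-step classifier flags as troll-like.

```python
from typing import List

def troll_score(trajectory_labels: List[int]) -> float:
    """Hypothetical aggregation: the fraction of an account's activity
    trajectories flagged as troll-like (label 1) by the upstream classifier.
    Returns a value in [0, 1]; higher means more troll-like behavior."""
    if not trajectory_labels:
        return 0.0
    return sum(trajectory_labels) / len(trajectory_labels)

# An account whose trajectories are mostly flagged scores high and can be
# thresholded into a binary troll / organic decision.
labels = [1, 1, 0, 1]
score = troll_score(labels)
print(score)  # 0.75
print("troll" if score >= 0.5 else "organic")
```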

    Report of the Select Committee on Intelligence United States Senate on Russian Active Measures Campaigns and Interference in the 2016 U.S. Election: Volume 2: Russia's Use of Social Media, with Additional Views

    In 2016, Russian operatives associated with the St. Petersburg-based Internet Research Agency (IRA) used social media to conduct an information warfare campaign designed to spread disinformation and societal division in the United States. Masquerading as Americans, these operatives used targeted advertisements, intentionally falsified news articles, self-generated content, and social media platform tools to interact with and attempt to deceive tens of millions of social media users in the United States. This campaign sought to polarize Americans on the basis of societal, ideological, and racial differences, provoked real-world events, and was part of a foreign government's covert support of Russia's favored candidate in the U.S. presidential election. The Senate Select Committee on Intelligence undertook a study of these events, consistent with its congressional mandate to oversee and conduct oversight of the intelligence activities and programs of the United States Government, to include the effectiveness of the Intelligence Community's counterintelligence function. In addition to the work of the professional staff of the Committee, the Committee's findings drew from the input of cybersecurity professionals, social media companies, U.S. law enforcement and intelligence agencies, and researchers and experts in social network analysis, political content, disinformation, hate speech, algorithms, and automation, working under the auspices of the Committee's Technical Advisory Group (TAG). The Committee found that the IRA sought to influence the 2016 U.S. presidential election by harming Hillary Clinton's chances of success and supporting Donald Trump at the direction of the Kremlin. The Committee found that the IRA's information warfare campaign was broad in scope and entailed objectives beyond the result of the 2016 presidential election. Further, the Committee's analysis of the IRA's activities on social media supports the key judgments of the January 6, 2017 Intelligence Community Assessment, Assessing Russian Activities and Intentions in Recent US Elections, that Russia's goals were to undermine public faith in the US democratic process, denigrate Secretary Clinton, and harm her electability and potential presidency. However, where the Intelligence Community assessed that the Russian government aspired to help President-elect Trump's election chances when possible by discrediting Secretary Clinton and publicly contrasting her unfavorably to him, the Committee found that IRA social media activity was overtly and almost invariably supportive of then-candidate Trump, and to the detriment of Secretary Clinton's campaign.

    Fight Fire with Fire: Hacktivists' Take on Social Media Misinformation

    In this study, we interviewed 22 prominent hacktivists to learn their take on the increased proliferation of misinformation on social media. We found that none of them welcomes the nefarious appropriation of trolling and memes for the purpose of political (counter)argumentation and the dissemination of propaganda. True to the original hacker ethos, misinformation is seen as a threat to the democratic vision of the Internet, and as such, it must be confronted head-on with tried hacktivist methods like deplatforming the "misinformers" and doxing or leaking data about their funding and recruitment. The majority of the hacktivists also recommended interventions for raising misinformation literacy in addition to targeted hacking campaigns. We discuss the implications of these findings relative to the emergent recasting of hacktivism in defense of a constructive and factual social media discourse.

    Understanding the Use of Fauxtography on Social Media

    Despite the influence that image-based communication has on online discourse, the role played by images in disinformation is still not well understood. In this paper, we present the first large-scale study of fauxtography, analyzing the use of manipulated or misleading images in news discussions on online communities. First, we develop a computational pipeline geared to detect fauxtography, and we identify over 61K instances of fauxtography discussed on Twitter, 4chan, and Reddit. Then, we study how posting fauxtography affects the engagement of posts on social media, finding that posts containing it receive more interactions in the form of re-shares, likes, and comments. Finally, we show that fauxtography images are often turned into memes by Web communities. Our findings show that effective mitigation against disinformation needs to take images into account, and they highlight a number of challenges in dealing with image-based disinformation.
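
    The abstract does not describe the detection pipeline's internals, but tracking the reuse of images across platforms commonly starts with perceptual hashing, so here is a minimal average-hash sketch under that assumption; it illustrates the general building block, not the authors' actual pipeline.

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> int:
    """Average hash: block-average the image down to size x size, threshold
    each cell at the global mean, and pack the bits into an integer.
    Visually similar images yield hashes with a small Hamming distance."""
    h, w = gray.shape
    gray = gray[: h - h % size, : w - w % size]  # crop so blocks divide evenly
    bh, bw = gray.shape[0] // size, gray.shape[1] // size
    small = gray.reshape(size, bh, size, bw).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return int(sum(int(b) << i for i, b in enumerate(bits)))

def hamming(h1: int, h2: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

rng = np.random.default_rng(0)
base = rng.random((64, 64))                                        # original image
variant = np.clip(base + rng.normal(0, 0.02, base.shape), 0, 1)    # slight edit
other = rng.random((64, 64))                                       # unrelated image
print(hamming(average_hash(base), average_hash(variant)))  # small distance
print(hamming(average_hash(base), average_hash(other)))    # ~32 on average
```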