
    Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign

    Until recently, social media was seen to promote democratic discourse on social and political issues. However, this powerful communication platform has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the U.S. Congress's ongoing investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of using trolls (malicious accounts created to manipulate) and bots to spread misinformation and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by the U.S. Congress investigation. We collected a dataset of over 43 million election-related posts shared on Twitter between September 16 and October 21, 2016, by about 5.7 million distinct users. This dataset included accounts associated with the identified Russian trolls. We use label propagation to infer the ideology of all users based on the news sources they shared. This method enables us to classify a large number of users as liberal or conservative with precision and recall above 90%. Conservatives retweeted Russian trolls about 31 times more often than liberals and produced about 36 times more tweets. Additionally, most retweets of troll content originated from two Southern states: Tennessee and Texas. Using state-of-the-art bot detection techniques, we estimated that about 4.9% and 6.2% of liberal and conservative users, respectively, were bots. Text analysis of the content shared by trolls reveals that they had a mostly conservative, pro-Trump agenda. Although an ideologically broad swath of Twitter users was exposed to Russian trolls in the period leading up to the 2016 U.S. presidential election, it was mainly conservatives who helped amplify their message.
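    The label propagation step mentioned above (seeding users with the ideology of the news outlets they shared and letting labels spread over the interaction graph) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the graph construction, the seed assignment, and the fixed iteration count are assumptions.

```python
# Minimal label-propagation sketch for ideology inference (illustrative only).
# Seeds come from users who shared news sources of known ideology; labels then
# spread over a user interaction graph (e.g. retweet ties) by neighbor majority vote.
from collections import Counter

def propagate_labels(graph, seeds, iterations=10):
    """graph: dict user -> list of neighboring users
    seeds: dict user -> 'liberal' or 'conservative' (assumed known from shared sources)"""
    labels = dict(seeds)
    for _ in range(iterations):
        updates = {}
        for user, neighbors in graph.items():
            if user in seeds:                       # seed labels stay fixed
                continue
            votes = Counter(labels[n] for n in neighbors if n in labels)
            if votes:                               # adopt the majority label of labeled neighbors
                updates[user] = votes.most_common(1)[0][0]
        labels.update(updates)
    return labels

# Toy example: 'c' retweets two conservative seeds, 'd' retweets 'c'.
graph = {"a": ["c"], "b": ["c"], "c": ["a", "b"], "d": ["c"]}
seeds = {"a": "conservative", "b": "conservative"}
print(propagate_labels(graph, seeds))
```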

    Who let the trolls out? Towards understanding state-sponsored trolls

    Recent evidence has emerged linking coordinated campaigns by state-sponsored actors to manipulate public opinion on the Web. Campaigns revolving around major political events are enacted via mission-focused "trolls." While trolls are involved in spreading disinformation on social media, there is little understanding of how they operate, what type of content they disseminate, how their strategies evolve over time, and how they influence the Web's information ecosystem. In this paper, we begin to address this gap by analyzing 10M posts by 5.5K Twitter and Reddit users identified as Russian and Iranian state-sponsored trolls. We compare the behavior of each group of state-sponsored trolls with a focus on how their strategies change over time, the different campaigns they embark on, and differences between the trolls operated by Russia and Iran. Among other things, we find: 1) that Russian trolls were pro-Trump while Iranian trolls were anti-Trump; 2) evidence that campaigns undertaken by such actors are influenced by real-world events; and 3) that the behavior of such actors is not consistent over time, hence detection is not straightforward. Using Hawkes Processes, we quantify the influence these accounts have on pushing URLs on four platforms: Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab. In general, Russian trolls were more influential and efficient in pushing URLs to all the other platforms, with the exception of /pol/, where Iranians were more influential. Finally, we release our source code to ensure the reproducibility of our results and to encourage other researchers to work on understanding other emerging kinds of state-sponsored troll accounts on Twitter. https://arxiv.org/pdf/1811.03130.pdf
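    The abstract does not spell out how the Hawkes models were fit, but the core idea of encoding cross-platform influence with a multivariate Hawkes process can be sketched as follows. The platform ordering, exponential kernel, and parameter values below are illustrative assumptions, not results or code from the paper.

```python
# Toy multivariate Hawkes intensity: alpha[i][j] encodes how strongly an event
# (a URL appearing) on platform j excites future events on platform i.
import numpy as np

PLATFORMS = ["twitter", "reddit", "pol", "gab"]   # assumed ordering

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda_i(t) for every platform i.
    events: list of np.ndarray with past event times per platform
    mu: baseline rates (4,); alpha: excitation matrix (4, 4); beta: kernel decay"""
    lam = mu.copy()
    for j, times in enumerate(events):
        past = times[times < t]
        # each past event on platform j adds an exponentially decaying contribution
        lam += alpha[:, j] * np.exp(-beta * (t - past)).sum()
    return lam

# Made-up parameters in which Twitter activity strongly excites Gab.
mu = np.array([0.20, 0.10, 0.05, 0.05])
alpha = np.array([[0.10, 0.05, 0.00, 0.00],
                  [0.30, 0.10, 0.00, 0.00],
                  [0.20, 0.10, 0.10, 0.00],
                  [0.50, 0.00, 0.00, 0.10]])
events = [np.array([1.0, 2.5]), np.array([2.0]), np.array([]), np.array([])]
print(dict(zip(PLATFORMS, intensity(3.0, events, mu, alpha, beta=1.0))))
```

In such a model, the fitted excitation entries (rather than these made-up ones) are what would quantify how much troll activity on one platform drives URL appearances on another.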

    Twitter and social bots: an analysis of the 2021 Canadian election

    Social media have now become essential communication tools, including within the context of electoral campaigns. However, the prevalence of online communication platforms has raised concerns in Western democracies about the risks of voter manipulation, particularly through social bot accounts. Social bots are automated computer algorithms that can be used to produce or amplify online content while posing as authentic users. Some studies, mostly focused on the case of the United States, have analyzed the propagation of disinformation content by social bots during electoral periods, while others have also examined the role of partisanship in social bots' behaviors and activities. However, the question of whether social bots' partisan leaning affects the amount of political disinformation content they generate online remains unanswered. Therefore, the main goal of this study is to determine whether partisan differences could be observed in (i) the number of active social bots during the 2021 Canadian election campaign, (ii) their interactions with humans, and (iii) the amount of disinformation content they propagated. To reach this research objective, this master's thesis relies on an original Twitter dataset of more than 11.3 million English tweets from roughly 1.1 million distinct users, as well as diverse models to distinguish between social bot and human accounts, determine the partisan leaning of users, and detect political disinformation content. Based on these distinct methods, the results indicate limited differences in the behavior of social bots in the 2021 federal election. It was nevertheless possible to observe that conservative-leaning social bots were more numerous than their liberal-leaning counterparts, but that liberal-leaning bots interacted more with authentic accounts through retweets and replies and shared the most disinformation content.
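    The thesis's own bot-detection, partisanship, and disinformation models are not described in this abstract. As a loose illustration of the first of those steps, here is one common approach (a supervised classifier over account metadata); the features, training examples, and labels are made-up assumptions, not the thesis's data or models.

```python
# Illustrative bot-vs-human classifier over hypothetical account metadata.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per account: [tweets_per_day, followers_to_friends_ratio, account_age_days]
X_train = np.array([
    [200.0, 0.10,   30],   # hypothetical bot-like accounts
    [150.0, 0.05,   10],
    [  5.0, 1.20, 2000],   # hypothetical human-like accounts
    [  2.0, 0.90, 3500],
])
y_train = np.array([1, 1, 0, 0])   # 1 = bot, 0 = human

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_account = np.array([[120.0, 0.20, 45]])
print("estimated bot probability:", clf.predict_proba(new_account)[0, 1])
```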

    Are Cyber Operations Having an Impact on State Electoral Processes?

    Cyber-attacks have become common occurrences that affect all aspects of life, from business transactions to personal communications. Alarmingly, coordinated cyber-attacks are increasingly targeting politicians and their associates, political campaigns, political organizations, and the broader public with political messaging. Given the novelty of these forms of attack, little is known of their potential impact. This thesis argues that states, state-directed actors, or non-state actors are disrupting, altering, or influencing the electoral process in democratic states through coordinated cyber operations. It further argues that the purpose is to increase hyper-partisanship and erode the legitimacy of democratically elected leaders. A quantitative study analyzing data from a test group of consolidated democracies that had experienced these types of cyber operations showed declining confidence in both their national governments and the honesty of their elections. By investigating the most prominent and verifiable cyber-attacks against state election processes, a connection between the attacks and Russia's state intelligence services became apparent. Further research revealed Russian intelligence agencies' historic use of covert 'active measures' and their current efforts to incorporate cyber operations within those measures, thus increasing the versatility and efficiency of active measures. Historic and geopolitical insight provided by an ex-official from a former Soviet republic contextualized how these new cyber operations could be used to advance Russian geopolitical objectives.

    Social Media Monitoring During Elections: Cases and Best Practice to Inform Electoral Observation Missions

    Concern over online interference in elections is now widespread, from the fallout of the Cambridge Analytica scandal to the pernicious effects messaging apps have had on elections in Kenya and Brazil. Yet regulatory and monitoring efforts have lagged behind in addressing how public opinion can be manipulated online and what impact that manipulation has on elections. The phenomenon of online electoral interference is global. It affects established democracies, countries in transition, and places where freedom of expression and access to information are tightly controlled. But fundamental questions of what should be legal and illegal in digital political communication have yet to be answered in order to extend the rule of electoral law from the offline sphere to the online one. Answering these questions would also help determine the right scope for online election observation. This scoping report explains why social media is one of the elements of a democratic, rule of law–based state that observer groups should monitor. It aggregates experience from diverse civil society and nongovernmental initiatives that are innovating in this field, and sets out questions to guide the development of new mandates for election observers. The internet and new digital tools are profoundly reshaping political communication and campaigning, but an independent and authoritative assessment of the impact of these changes is lacking. Election observation organizations need to adapt their mandate and methodology in order to remain relevant and protect the integrity of democratic processes.

    Identifying Dis/Misinformation on Social Media: A Policy Report for the Diplomacy Lab Strategies for Identifying Mis/Disinformation Project

    Dis/misinformation was a major concern in the 2016 U.S. presidential election and has only worsened in recent years. Although domestic actors often spread dis/misinformation, actors abroad can use it to spread confusion and push their agenda to the detriment of American citizens. Even though this report focuses on actors outside the United States, the methods they use are universal and can be adapted to work against domestic agents. A solid understanding of these methods is the first step in combating foreign dis/misinformation campaigns and creating a new information literacy paradigm. This report highlights the primary mechanisms of dis/misinformation: multimedia manipulation, bots, astroturfing, and trolling. These mechanisms were selected after thorough research into the common pathways by which dis/misinformation spreads online. Multimedia manipulation covers image, video, and audio dis/misinformation in the form of deepfakes, memes, and out-of-context images. Bots are automated social media accounts that are not managed by humans and often contribute to dis/misinformation campaigns. Astroturfing and trolling use deception to sway media users into joining false grassroots campaigns and rely on emotionally charged posts to provoke a response from users. This policy report also presents case studies of disinformation in China, Russia, and Iran, outlining common patterns of dis/misinformation specific to these countries. These patterns will allow State Department Watch Officers to identify dis/misinformation from the outlined countries more quickly and accurately. Recommendations are provided for each type of disinformation, including what individuals should look for and how to verify that the information they receive is accurate and comes from a reputable source. The addendum at the end of the paper gathers all of the recommendations in one place for quick reference. This report is intended to aid State Department Watch Officers as they work to identify foreign developments accurately, though researchers may also find this information useful in anticipating future developments in foreign dis/misinformation campaigns.
    • 
