
    Social Media Monitoring During Elections: Cases and Best Practice to Inform Electoral Observation Missions

    Concern over online interference in elections is now widespread, from the fallout of the Cambridge Analytica scandal to the pernicious effects messaging apps have had on elections in Kenya and Brazil. Yet regulatory and monitoring efforts have lagged behind in addressing how public opinion can be manipulated online and how that manipulation affects elections. The phenomenon of online electoral interference is global: it affects established democracies, countries in transition, and places where freedom of expression and access to information are tightly controlled. But fundamental questions of what should be legal and illegal in digital political communication have yet to be answered in order to extend the rule of electoral law from the offline sphere to the online one. Answering these questions would also help determine the right scope for online election observation. This scoping report explains why social media is one of the elements of a democratic, rule of law–based state that observer groups should monitor. It aggregates experience from diverse civil society and nongovernmental initiatives that are innovating in this field, and sets out questions to guide the development of new mandates for election observers. The internet and new digital tools are profoundly reshaping political communication and campaigning, but an independent and authoritative assessment of the impact of these changes is still wanting. Election observation organizations need to adapt their mandate and methodology in order to remain relevant and protect the integrity of democratic processes.

    Supranational or Compartmental: Applying the Question of European Union Identity to the Topic of Disinformation

    The proliferation of disinformation is not a new phenomenon. However, the increasingly interconnected nature of the global environment means that disinformation is more effective now than ever before. Western societies are simultaneously experiencing growing political stratification and third-party intervention in their democratic processes and institutions. State actors have utilized social media, hybrid warfare tactics, and automated disinformation tools to exacerbate divisions in society. It is therefore crucial that such societies develop sufficient capabilities to proportionately counter third-party interventionism. This paper examines the counter-disinformation measures taken by the European Union (EU) and compares them with those taken by individual EU member states, thereby applying the classic EU debate of supranationalism versus state sovereignty to the topic of disinformation. In doing so, we hope to assess whether a supranational, EU-based strategy or a compartmental, member-state-based strategy is more effective at countering disinformation. We first examine the body of EU action, followed by an examination of Baltic, Swedish, and German actions, with the hope of ascertaining which pathway facilitates a more effective response.

    Detecting and analyzing bots on Finnish political Twitter

    This master’s thesis develops a machine learning model for detecting Twitter bots and applies it to assess whether bots were used to influence the 2019 Finnish parliamentary election. The aim of the thesis is to contribute to the growing information systems science literature on the use of social media and information systems to influence voters, as well as to increase general awareness in Finland of the effects of bots on Twitter. The thesis relies primarily on quantitative analysis of a dataset consisting of 550,000 unique Twitter accounts, collected from Twitter during March 2019. The accounts in the dataset belong to humans and bots that were following 14 prominent Finnish politicians on Twitter. To determine which accounts are bots, and to assess the feasibility of a new method for Twitter bot detection, a machine learning model that uses metadata-based features to classify Twitter accounts as bots or humans is developed and tested on the dataset. The findings indicate that a metadata-based approach is suitable for detecting bots and that there are several large botnets in the Finnish Twittersphere. Over 30% of the 550,000 accounts are labeled as bots by the model, which implies that the prevalence of bots is much higher than Twitter’s official estimates have previously suggested. Furthermore, a majority of the accounts appear inactive: either abandoned or dormant and awaiting activation. The purpose of most of the bot accounts is obscure, and it is not certain how many of them deliberately follow the politicians to inflate their popularity. Although the bots clearly increase the visibility of certain politicians, their effects on Finnish political Twitter are deemed negligible.
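
    The abstract does not spell out which metadata features the model uses, so the following is only a minimal sketch of what metadata-based feature extraction for bot classification commonly looks like; the feature names and the extract_features helper are illustrative assumptions, not the thesis's actual implementation.

```python
from datetime import datetime, timezone

def extract_features(user: dict) -> dict:
    """Derive numeric features from a Twitter v1.1 user object.

    Illustrative only: these are common metadata features, not the
    feature set reported in the thesis.
    """
    created = datetime.strptime(user["created_at"], "%a %b %d %H:%M:%S %z %Y")
    age_days = max((datetime.now(timezone.utc) - created).days, 1)
    followers = user.get("followers_count", 0)
    friends = user.get("friends_count", 0)
    screen_name = user.get("screen_name", "")
    return {
        "account_age_days": age_days,
        "tweets_per_day": user.get("statuses_count", 0) / age_days,
        "follower_friend_ratio": followers / max(friends, 1),
        "has_default_profile": int(bool(user.get("default_profile"))),
        "has_default_image": int(bool(user.get("default_profile_image"))),
        "screen_name_length": len(screen_name),
        "digits_in_screen_name": sum(c.isdigit() for c in screen_name),
    }
```

    The appeal of such features is that they come from a single user object per account, so half a million accounts can be scored without downloading tweet histories.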

    Are Cyber Operations Having an Impact on State Electoral Processes?

    Cyber-attacks have become common occurrences that affect all aspects of life, from business transactions to personal communications. Alarmingly, coordinated cyber-attacks are increasingly targeting politicians and their associates, political campaigns, political organizations and the broader public with political messaging. Given the novelty of these new forms of attacks, little is known of their potential impact. This thesis argues that states, state-directed actors, or non-state actors are disrupting, altering or influencing the electoral process in democratic states through coordinated cyber operations. It further argues that the purpose is to increase hyper-partisanship and erode the legitimacy of democratically elected leaders. A quantitative study analyzing data from a test group of consolidated democracies that had experienced these types of cyber operations showed declining confidence in both their national governments and the honesty of their elections. By investigating the most prominent and verifiable cyber-attacks against state election processes, a connection between the attacks and Russia’s state intelligence services became apparent. Further research revealed Russian intelligence agencies’ historic use of covert ‘active measures’ and their current efforts to incorporate cyber operations within those measures, thus increasing active measures’ versatility and efficiency. Historic and geopolitical insight provided by an ex-official from a former Soviet republic contextualized how these new cyber operations could be used to advance Russian geopolitical objectives.

    Detecting Political Bots on Twitter during the 2019 Finnish Parliamentary Election

    In recent years, political discussion has been dominated by the impact of bots used to manipulate public opinion. A number of sources have reported a widespread presence of political bots on social media sites such as Twitter. Compared to other countries, the influence of bots on Finnish politics has received little attention from media and researchers. This study investigates the influence of bots on Finnish political Twitter, based on a dataset consisting of the accounts following major Finnish politicians before the Finnish parliamentary election of 2019. To identify the bots, we extend existing models with user-level metadata and state-of-the-art classification models. The results support our model as a suitable instrument for detecting Twitter bots. We found that, although a large number of bot accounts follow major Finnish politicians, this is unlikely to be the result of foreign entities’ attempts to influence the Finnish parliamentary election.
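
    The abstract names neither the classification models nor the labeling procedure, so the sketch below only illustrates one conventional setup: training and evaluating a random forest on per-account metadata features (such as those in the earlier sketch) against manually labeled accounts. The file name, model choice, and hyperparameters are assumptions for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Assumed input: one row per account with numeric metadata features
# plus a manually assigned 0/1 label (0 = human, 1 = bot).
df = pd.read_csv("labeled_accounts.csv")
X = df.drop(columns=["label"])
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

# Held-out performance; precision on the bot class matters most when
# classifier output feeds downstream claims about bot prevalence.
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["human", "bot"]))
```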

    Gotcha Bot Detection: Context, Time and Place Matters

    Bot detection is increasingly relevant, considering that automated accounts play a disproportionate role in spreading disinformation, controlling social interactions, influencing social media algorithms and manufacturing public opinion online for different purposes. The definition, description and detection of automated manipulation techniques have proved a challenge as the technology quickly advances in reach and sophistication. Given the highly contextual character of social science research, the employment of off-the-shelf detection tools raises questions about the applicability of machine learning systems across different cases, times and places. Our purpose is therefore to discuss the role of computational methods, focusing on the limitations and potential of machine learning systems for identifying bots on social media platforms. To address this, we analyze the performance of Botometer, a widely adopted detection tool, in a specific domain (Amazon Forest Fires) and language (Portuguese), and propose a supervised machine learning classifier, called Gotcha, based on Botometer's framework and trained on this specific dataset. We also examine how our classifier behaves and evolves over time and perform tests to evaluate the generalization capabilities of the retrained model. Our results demonstrate that supervised methods do not perform well on datasets whose features, such as language and topic, differ from those the system was trained on. Hence, our study shows that a successful computational model does not always guarantee reliable results applicable to a specific real-world case. Our findings indicate that social scientists should confirm the reliability of tools created and tested only through the prism of computational studies before applying them to empirical social science research.
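
    For readers unfamiliar with Botometer, the snippet below shows roughly how accounts are scored through the botometer-python client (Botometer v4 API shape); the credentials and handle are placeholders, and this is a usage sketch rather than the paper's actual pipeline. Because the paper's domain is Portuguese-language, the language-independent 'universal' scores are the relevant ones; Botometer's 'english' model relies on English-language features.

```python
import botometer

# Placeholders: a RapidAPI key and Twitter app credentials are required.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

result = bom.check_account("@example_handle")

# For non-English accounts, use the language-independent scores.
print("CAP (universal):", result["cap"]["universal"])
print("Overall (universal):", result["display_scores"]["universal"]["overall"])
```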

    The European Union versus External Disinformation Campaigns in the Midst of Information Warfare: Ready for the Battle? College of Europe EU Diplomacy Paper 01/2019

    As a result of increased globalisation and digitalisation, new security challenges have emerged, such as the rise of online disinformation, which undermines democracy and people’s trust in mainstream media and public authorities. The 2016 United States presidential election, the Brexit referendum in the United Kingdom and the 2017 French presidential election were all disturbed by external interference coming from Russia, including massive disinformation campaigns disseminated on social media to influence citizens’ opinions. This paper studies the European Union’s (EU) strategy to counter external disinformation campaigns in cyberspace, i.e. the campaigns diffused online by foreign actors, such as Russia, within the EU’s territory. To what extent is the EU strategically prepared to counter external disinformation campaigns in cyberspace? The EU has adopted a defensive strategy to deal with disinformation. It has delivered several strategic documents, including an Action Plan in December 2018, which provides a promising basis for action. The work done by the East StratCom Task Force, which detects and debunks Russian narratives, is a strong asset for the EU. The major online platforms are currently trying to implement a Code of Practice that the European Commission has set up with the aim of curbing the spread of disinformation on social networks. With a long-term perspective in mind, the EU rightly implements measures to enhance societal resilience and improve media literacy among its citizens. However, the financial resources dedicated to countering disinformation are not commensurate with the threat it represents. Furthermore, the EU’s approach does not focus enough on artificial intelligence tools, which can significantly influence how disinformation is produced and disseminated but can, on the other hand, also help fact-checking activities. Hence, the EU is not entirely prepared to counter external disinformation campaigns in cyberspace. Moreover, disinformation should be viewed within the wider framework of hybrid warfare and should therefore be considered a cybersecurity matter.

    Hybrid Warfare

    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. Hybrid Warfare refers to a military strategy that blends conventional warfare, so-called ‘irregular warfare’ and cyber-attacks with other influencing methods, such as fake news, diplomacy and foreign political intervention. As Hybrid Warfare becomes increasingly commonplace, there is an urgent need for research that brings attention to how these challenges can be addressed, in order to develop a comprehensive approach to Hybrid Threats and Hybrid Warfare. This volume supports the development of such an approach by bringing together practitioner and scholarly perspectives on the topic and by covering the threats themselves, as well as the tools and means to counter them, together with a number of real-world case studies. The book covers numerous aspects of current Hybrid Warfare discourses, including a discussion of the perspectives of key Western actors such as NATO, the US and the EU; an analysis of Russia’s and China’s Hybrid Warfare capabilities; and the growing threat of cyberwarfare. A range of global case studies – featuring specific examples from the Baltics, Taiwan, Ukraine, Iran and Catalonia – are drawn upon to demonstrate the employment of Hybrid Warfare tactics and how they have been countered in practice. Finally, the editors propose a new method through which to understand the dynamics of Hybrid Threats, Warfare and their countermeasures, termed the ‘Hybridity Blizzard Model’. With a focus on practitioner insight and practicable International Relations theory, this volume is an essential guide to identifying, analysing and countering Hybrid Threats and Warfare.

    Bot, or not? Comparing three methods for detecting social bots in five political discourses

    Social bots – partially or fully automated accounts on social media platforms – have not only been widely discussed, but have also entered political, media and research agendas. However, bot detection is not an exact science. Quantitative estimates of bot prevalence vary considerably and comparative research is rare. We show that findings on the prevalence and activity of bots on Twitter depend strongly on the methods used to identify automated accounts. We searched for bots in political discourses on Twitter using three different bot detection methods: Botometer, Tweetbotornot and “heavy automation”. We drew a sample of 122,884 unique Twitter user accounts that had produced 263,821 tweets contributing to five political discourses in five Western democracies. While all three bot detection methods classified accounts as bots in all our cases, the comparison shows that the three approaches produce very different results. We discuss why neither manual validation nor triangulation resolves the basic problems, and conclude that social scientists studying the influence of social bots on (political) communication and discourse dynamics should be careful with easy-to-use methods and consider interdisciplinary research.
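
    One way to make the reported disagreement concrete is pairwise inter-rater agreement over the same accounts. The sketch below, using made-up verdicts rather than the paper's data, computes raw agreement and Cohen's kappa for each pair of detection methods.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary verdicts (1 = bot, 0 = human) from each method
# for the same ordered list of accounts.
verdicts = {
    "botometer":        [1, 0, 0, 1, 0, 1, 0, 0],
    "tweetbotornot":    [1, 1, 0, 0, 0, 1, 1, 0],
    "heavy_automation": [0, 0, 0, 1, 0, 0, 0, 0],
}

for a, b in combinations(verdicts, 2):
    raw = sum(x == y for x, y in zip(verdicts[a], verdicts[b])) / len(verdicts[a])
    kappa = cohen_kappa_score(verdicts[a], verdicts[b])
    print(f"{a} vs {b}: raw agreement {raw:.2f}, Cohen's kappa {kappa:.2f}")
```

    Kappa corrects for chance agreement, which matters when bots are a small minority: two methods can agree on most accounts simply by labeling nearly everything human.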