
    Trends in Detection and Characterization of Propaganda Bots

    Since the revelations of interference in the 2016 US Presidential election, the UK’s Brexit referendum, the 2017 Catalan independence vote, and numerous other major political discussions by malicious online actors and propaganda bots, there has been increasing interest in understanding how to detect and characterize such threats. We focus on recent research in algorithms for detecting propaganda botnets and on metrics by which their impact can be measured.

    False News On Social Media: A Data-Driven Survey

    In the past few years, the research community has dedicated growing interest to the issue of false news circulating on social networks. The widespread attention to detecting and characterizing false news has been motivated by the considerable real-world consequences of this threat. Social media platforms exhibit peculiar characteristics, with respect to traditional news outlets, that have been particularly favorable to the proliferation of deceptive information. They also present unique challenges for all kinds of potential interventions on the subject. As this issue becomes of global concern, it is also gaining more attention in academia. The aim of this survey is to offer a comprehensive study of recent advances in the detection, characterization, and mitigation of false news that propagates on social media, as well as the challenges and open questions that await future research in the field. We use a data-driven approach, focusing on a classification of the features that each study uses to characterize false information and on the datasets used for training classification methods. At the end of the survey, we highlight emerging approaches that look most promising for addressing false news.
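Many of the surveyed studies rely on content-based features computed directly from a post's text. As a minimal illustration of that idea (the feature set below is hypothetical; real studies combine many more content, user, and propagation signals), such a feature extractor might look like:

```python
import re

def content_features(text: str) -> dict:
    """Extract a few simple content-based features of the kind used to
    characterize potentially deceptive posts (illustrative selection only)."""
    words = re.findall(r"\w+", text)
    return {
        "n_words": len(words),
        "exclamations": text.count("!"),
        "question_marks": text.count("?"),
        # Shouting-style tokens (all-caps words longer than one letter).
        "all_caps_ratio": sum(w.isupper() and len(w) > 1 for w in words)
                          / max(len(words), 1),
    }
```

Feature dictionaries of this kind would then feed a standard classifier trained on one of the labeled datasets the survey catalogs.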

    The role of bot squads in the political propaganda on Twitter

    Social media are nowadays a privileged channel for spreading information and checking news. Unexpectedly for most users, automated accounts, also known as social bots, contribute more and more to this process of news spreading. Using Twitter as a benchmark, we consider the traffic exchanged, over one month of observation, on a specific topic, namely the migration flux from Northern Africa to Italy. We measure the significant traffic of tweets only, by implementing an entropy-based null model that discounts the activity of users and the virality of tweets. Results show that social bots play a central role in the exchange of significant content. Indeed, not only do the strongest hubs have more bots among their followers than expected, but a group of them, which can be assigned to the same political tendency, share a common set of bots as followers. The retweeting activity of such automated accounts amplifies the presence of the hubs' messages on the platform.
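The core idea of a null model that discounts user activity can be sketched with a simple permutation test: hold each user's number of retweets and each hub's retweet count fixed, shuffle who retweets whom, and flag user-hub pairs whose observed retweet count is rarely matched by chance. This is a deliberate simplification (the paper uses an analytical entropy-based bipartite configuration model, not permutations), intended only to convey the logic:

```python
import random
from collections import Counter

def significant_retweeters(edges, n_perm=1000, alpha=0.01, seed=0):
    """Toy permutation null model: edges are (user, hub) retweet events.
    Shuffling hub assignments preserves each user's activity and each
    hub's virality; a pair is 'significant' if random reshuffles almost
    never reproduce its observed retweet count."""
    rng = random.Random(seed)
    users = [u for u, _ in edges]
    hubs = [h for _, h in edges]
    observed = Counter(edges)
    exceed = Counter()  # how often the null matches/beats the observation
    for _ in range(n_perm):
        shuffled = hubs[:]
        rng.shuffle(shuffled)
        null = Counter(zip(users, shuffled))
        for pair, obs in observed.items():
            if null[pair] >= obs:
                exceed[pair] += 1
    return {pair for pair, obs in observed.items()
            if exceed[pair] / n_perm < alpha}
```

A bot that retweets one hub far more often than its overall activity would predict survives this filter, while incidental one-off retweets do not.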

    Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign

    Until recently, social media was seen to promote democratic discourse on social and political issues. However, this powerful communication platform has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of using trolls (malicious accounts created to manipulate) and bots to spread misinformation and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by the U.S. Congress investigation. We collected a dataset with over 43 million election-related posts shared on Twitter between September 16 and October 21, 2016, by about 5.7 million distinct users. This dataset included accounts associated with the identified Russian trolls. We use label propagation to infer the ideology of all users based on the news sources they shared. This method enables us to classify a large number of users as liberal or conservative with precision and recall above 90%. Conservatives retweeted Russian trolls about 31 times more often than liberals and produced 36 times more tweets. Additionally, most retweets of troll content originated from two Southern states: Tennessee and Texas. Using state-of-the-art bot detection techniques, we estimated that about 4.9% of liberal users and 6.2% of conservative users were bots. Text analysis of the content shared by trolls reveals that they had a mostly conservative, pro-Trump agenda. Although an ideologically broad swath of Twitter users was exposed to Russian trolls in the period leading up to the 2016 U.S. Presidential election, it was mainly conservatives who helped amplify their message.
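The label-propagation step can be sketched as alternating averages over a bipartite user-news-source sharing graph: a handful of seed users with known ideology score the sources they share, and sources then score the remaining users. A minimal sketch under assumed toy data (the study's actual pipeline and scoring are richer; source names below are hypothetical):

```python
from collections import defaultdict

def propagate_ideology(shares, seed_labels, n_iter=10):
    """Propagate ideology scores over (user, news_source) share pairs.
    Convention: -1.0 = liberal, +1.0 = conservative. Seed users keep
    their labels fixed; everyone else averages the scores of the
    sources they shared."""
    user_score = dict(seed_labels)
    src_score = {}
    sources_of = defaultdict(set)
    users_of = defaultdict(set)
    for user, source in shares:
        sources_of[user].add(source)
        users_of[source].add(user)
    for _ in range(n_iter):
        # Sources inherit the average score of the users sharing them.
        for source, users in users_of.items():
            scored = [user_score[u] for u in users if u in user_score]
            if scored:
                src_score[source] = sum(scored) / len(scored)
        # Unlabeled users inherit the average score of their sources.
        for user, sources in sources_of.items():
            if user in seed_labels:
                continue  # seeds stay fixed
            scored = [src_score[s] for s in sources if s in src_score]
            if scored:
                user_score[user] = sum(scored) / len(scored)
    return user_score
```

Thresholding the final scores (e.g., sign of the score) yields the liberal/conservative classification the abstract describes.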

    A Comparative Analysis of Facebook and Twitter Bots

    The increasing level of sophistication in the field of machine learning and artificial intelligence has engendered the creation of automated programs called 'bots'. Bots are created for varied reasons, and it has become imperative to evaluate the impact of bots on the social media ecosystem and consequently on our daily lives. However, despite the ubiquity of bots, very little research has been conducted to compare their trends and impacts on the social media ecosystem. To address this gap in bot research, we perform a comparative analysis of Facebook and Twitter bots in terms of their popularity and impact on society. This paper sets the foundation for subsequent, more detailed studies of this subject area. Analyzing trends of these emerging technologies can provide insight into their importance and roles in our everyday lives. We provide a brief background of the subject, covering types of social bots and their utility, bot detection techniques, and the impact they have had on society. We then utilize the IBM Watson cognitive search and content analytics engine to examine the public perception of these bots. We also use Google query volumes to investigate the trends of search terms related to Facebook and Twitter bots. Our findings suggest that there is a slightly higher public acceptance of Facebook bots than of Twitter bots. Furthermore, the utilization of bots on Online Social Networks (OSNs) is on the rise. Originally, bots were developed as a tool for driving user engagement on social media platforms. Today, however, bots are increasingly being used to convey mis/disinformation and political propaganda.

    The evolution of computational propaganda: Trends, threats, and implications now and in the future

    Computational propaganda involves the use of selected narratives, social networks, and complex algorithms to develop and conduct influence operations (Woolley and Howard, 2017). In recent years the use of computational propaganda as an arm of cyberwarfare has increased in frequency. I aim to explore this topic to further understand the underlying forces behind the implementation of this tactic and then conduct a futures analysis to best determine how this topic will change over time. Additionally, I hope to gain insights into the implications of the current and potential future trends of computational propaganda. My preliminary assessment shows that developments in technology, as well as a desire for improved narrative development, will continue to lead to more personalized narratives. These improved narratives will be more effective at influencing individuals and will ultimately support an organization's strategic goals. One aspect of this analysis is to gain knowledge of the evolution of the cyber domain, including electronic propaganda. Another is to better understand the complexity of pairing psychological operations with the technical side of this topic, as well as the past effects of cyber propaganda campaigns. Through this research, I hope to gain a stronger understanding of the future of computational propaganda and how those in intelligence analysis positions can best discern the information that is collected. The overall goal of this research is to better understand this facet of the cyber domain. As traditional, boots-on-the-ground warfare techniques become less effective and more costly, alternative methods of warfare will continue to be developed and conducted. Computational propaganda is one of the branches of the cyber domain, falling under information warfare.
I aim to produce an authoritative assessment of the plausible future of computational propaganda, as well as to identify overall trends, in order to ensure resources are allocated to improving defenses against operations that prove adversarial to the United States and its allies. During data collection, I used academic and credible news sources, think tanks, government reports, and reports by credible organizations conducting research on the topic. I also used graphics and diagrams to better understand the technical processes involved. Additionally, I used the information collected in a previous report that I completed on a “Futures Analysis of Russian Cyber Influence in the United States Political System.” I was able to gain a great deal of insight from that paper, especially because my team and I worked with a sponsor for the project who provided extremely valuable information regarding influence campaigns and echo chambers.