    The role of bot squads in the political propaganda on Twitter

    Nowadays, social media are a privileged channel for news spreading, information exchange, and fact checking. Unexpectedly for many users, automated accounts, known as social bots, contribute more and more to this process of information diffusion. Using Twitter as a benchmark, we consider the traffic exchanged, over one month of observation, on a specific topic, namely the migration flow from Northern Africa to Italy. We measure the significant traffic of tweets only, by implementing an entropy-based null model that discounts the activity of users and the virality of tweets. Results show that social bots play a central role in the exchange of significant content. Indeed, not only do the strongest hubs have more bots among their followers than expected, but a group of them, which can be assigned to the same political tendency, also share a common set of bots as followers. The retweeting activity of such automated accounts amplifies the hubs' messages.
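    A minimal sketch of the kind of significance filtering the abstract describes. The paper uses an entropy-based (maximum-entropy) null model; here a simpler degree-based approximation stands in for it, so the toy data, the binomial test, and the threshold are all illustrative assumptions rather than the authors' method.

        # Sketch: keep only user-tweet links that exceed what a null model
        # (discounting user activity and tweet virality) would predict.
        # Approximation: null link probability p_ij ~ k_i * k_j / M^2 per
        # event, NOT the paper's entropy-based bipartite configuration model.
        import numpy as np
        from scipy.stats import binom

        rng = np.random.default_rng(0)
        A = (rng.random((50, 200)) < 0.05).astype(int)  # toy user-by-tweet retweet matrix

        k_users = A.sum(axis=1)   # user activity
        k_tweets = A.sum(axis=0)  # tweet virality
        M = A.sum()               # total retweet events

        P = np.clip(np.outer(k_users, k_tweets) / M**2, 0.0, 1.0)
        pvals = binom.sf(A - 1, M, P)   # P(X >= observed) under the null

        alpha = 0.01 / A.size           # Bonferroni over all candidate links
        significant = (A > 0) & (pvals < alpha)
        # On purely random toy data essentially nothing should survive the cut.
        print(f"{significant.sum()} significant links out of {(A > 0).sum()}")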

    An Exploratory Study of COVID-19 Misinformation on Twitter

    During the COVID-19 pandemic, social media has become a home ground for misinformation. To tackle this infodemic, scientific oversight, as well as a better understanding by practitioners in crisis management, is needed. We have conducted an exploratory study into the propagation, authors, and content of misinformation on Twitter around the topic of COVID-19 in order to gain early insights. We have collected all tweets mentioned in the verdicts of fact-checked claims related to COVID-19 by over 92 professional fact-checking organisations between January and mid-July 2020 and share this corpus with the community. This resulted in 1,500 tweets relating to 1,274 false and 276 partially false claims, respectively. Exploratory analysis of author accounts revealed that verified Twitter handles (including organisations and celebrities) are also involved in either creating (new tweets) or spreading (retweets) the misinformation. Additionally, we found that false claims propagate faster than partially false claims. Compared to a background corpus of COVID-19 tweets, tweets with misinformation are more often concerned with discrediting other information on social media. Authors use less tentative language and appear to be more driven by concerns of potential harm to others. Our results enable us to suggest gaps in the current scientific coverage of the topic as well as propose actions for authorities and social media users to counter misinformation. Comment: 20 pages, nine figures, four tables. Submitted for peer review, revision.
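    As a concrete illustration of the propagation comparison above, a small pandas sketch: the schema (claim_id, verdict, created_at, retweet_count) and the retweets-per-hour proxy are hypothetical stand-ins, not the paper's actual corpus format or speed measure.

        # Sketch: compare a crude propagation-speed proxy for false vs.
        # partially false claims. Toy rows; real data would come from the
        # fact-checked tweet corpus described above.
        import pandas as pd

        tweets = pd.DataFrame({
            "claim_id": [1, 1, 1, 2, 2],
            "verdict": ["false", "false", "false",
                        "partially false", "partially false"],
            "created_at": pd.to_datetime(
                ["2020-03-01 10:00", "2020-03-01 12:00", "2020-03-02 10:00",
                 "2020-03-01 10:00", "2020-03-03 10:00"]),
            "retweet_count": [120, 40, 10, 30, 5],
        })

        # Hours from each claim's first appearance to each later tweet about it
        first_seen = tweets.groupby("claim_id")["created_at"].transform("min")
        hours = (tweets["created_at"] - first_seen).dt.total_seconds() / 3600
        tweets["rt_per_hour"] = tweets["retweet_count"] / hours.clip(lower=1)

        print(tweets.groupby("verdict")["rt_per_hour"].median())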

    A model for the Twitter sentiment curve

    Twitter is among the most used online platforms for political communication, due to the concision of its messages (particularly suitable for political slogans) and the speed with which they spread. Especially when a topic stimulates the emotions of users, content on Twitter is shared extremely quickly, so studying tweet sentiment is of utmost importance for predicting the evolution of the discussion and the tone of the related narratives. In this article, we present a model able to reproduce the dynamics of the sentiments of tweets related to specific topics and periods, and to provide a prediction of the sentiment of future posts based on the observed past. The model is a recent variant of the Pólya urn, introduced and studied in arXiv:1906.10951 and arXiv:2010.06373, which is characterized by a "local" reinforcement, i.e. a reinforcement mechanism based mainly on the most recent observations, and by a random persistent fluctuation of the predictive mean. In particular, this latter feature is capable of capturing the trend fluctuations in the sentiment curve. While the proposed model is extremely general and may also be employed in other contexts, it has been tested on several Twitter data sets and demonstrated better performance than the standard Pólya urn model. Moreover, the different performances on different data sets highlight different emotional sensitivities with respect to a public event. Comment: 19 pages, 12 figures.
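    To make the urn mechanism concrete, a minimal simulation: a classical Pólya urn next to a variant whose predictive probability blends in an exponentially weighted average of recent draws, a crude stand-in for the "local" reinforcement of arXiv:1906.10951. The parameters (local_weight, the 0.99 forgetting factor) are illustrative choices, not the model's actual specification.

        # Sketch: classical Polya urn vs. a locally reinforced variant.
        import numpy as np

        rng = np.random.default_rng(42)
        T = 5000

        def polya(local_weight=0.0):
            """Trajectory of the predictive mean P(next draw is 'positive')."""
            pos, tot = 1.0, 2.0   # start with one positive, one negative ball
            recent = 0.5          # exponentially weighted recent average
            means = []
            for _ in range(T):
                # Blend global urn composition with the recent local average
                p = (1 - local_weight) * (pos / tot) + local_weight * recent
                x = rng.random() < p
                pos += x
                tot += 1
                recent = 0.99 * recent + 0.01 * x
                means.append(p)
            return np.array(means)

        classic = polya(local_weight=0.0)  # converges to a random limit
        local = polya(local_weight=0.8)    # keeps fluctuating with the trend
        print(classic[-3:], local[-3:])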

    Assessing the Role of Social Bots During the COVID-19 Pandemic: Infodemic, Disagreement, and Criticism

    Background: Social media has changed the way we live and communicate, as well as offering unprecedented opportunities to improve many aspects of our lives, including health promotion and disease prevention. However, there is also a darker side to social media that is not always as evident as its possible benefits. In fact, social media has also opened the door to new social and health risks that are linked to health misinformation. Objective: This study aimed to assess the role of social media bots during the COVID-19 outbreak. Methods: The Twitter streaming API was used to collect tweets regarding COVID-19 during the early stages of the outbreak. The Botometer tool was then used to obtain the likelihood that each account is a bot. Bot classification and topic-modeling techniques were used to interpret the Twitter conversation. Finally, the sentiment associated with the tweets was compared depending on the source of the tweet. Results: Regarding the conversation topics, there were notable differences between the different accounts. The content of nonbot accounts was associated with the evolution of the pandemic, support, and advice. On the other hand, in the case of self-declared bots, the content consisted mainly of news, such as the existence of diagnostic tests, the evolution of the pandemic, and scientific findings. Finally, in the case of bots, the content was mostly political. Above all, there was an overriding tone of criticism and disagreement. In relation to the sentiment analysis, the main differences were associated with the tone of the conversation. In the case of self-declared bots, this tended to be neutral, whereas the conversation of normal users scored positively. In contrast, bots tended to score negatively. Conclusions: By classifying the accounts according to their likelihood of being bots and performing topic modeling, we were able to segment the Twitter conversation regarding COVID-19. Bot accounts tended to criticize the measures imposed to curb the pandemic, express disagreement with politicians, or question the veracity of the information shared on social media.
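    A hedged sketch of the two-stage pipeline: the Botometer call follows the botometer-python package's documented usage (credentials here are placeholders), and scikit-learn's LDA serves as a generic topic-modeling stand-in; the CAP field, the bot threshold, and the corpus split are assumptions, not the study's exact settings.

        # Sketch: score accounts with Botometer, then topic-model each segment.
        import botometer
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        bom = botometer.Botometer(wait_on_ratelimit=True,
                                  rapidapi_key="YOUR_RAPIDAPI_KEY",
                                  consumer_key="...", consumer_secret="...")

        def bot_likelihood(screen_name):
            # 'cap' is Botometer's complete automation probability;
            # field names may differ across Botometer versions.
            return bom.check_account(screen_name)["cap"]["universal"]

        def top_topics(texts, n_topics=5, n_words=8):
            vec = CountVectorizer(max_df=0.9, min_df=5, stop_words="english")
            X = vec.fit_transform(texts)
            lda = LatentDirichletAllocation(n_components=n_topics,
                                            random_state=0).fit(X)
            vocab = vec.get_feature_names_out()
            return [[vocab[i] for i in comp.argsort()[-n_words:]]
                    for comp in lda.components_]

        # Segment tweets by thresholding bot_likelihood (e.g. > 0.8 -> 'bot'),
        # then inspect top_topics(...) for each segment separately.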

    Cross-Domain Learning for Classifying Propaganda in Online Contents

    As news and social media exhibit an increasing amount of manipulative polarized content, detecting such propaganda has received attention as a new task for content analysis. Prior work has focused on supervised learning with training data from the same domain. However, as propaganda can be subtle and keeps evolving, manual identification and proper labeling are very demanding. As a consequence, training data is a major bottleneck. In this paper, we tackle this bottleneck and present an approach that leverages cross-domain learning, based on labeled documents and sentences from news and tweets, as well as political speeches with a clear difference in their degree of propagandistic content. We devise informative features and build various classifiers for propaganda labeling, using cross-domain learning. Our experiments demonstrate the usefulness of this approach, and identify difficulties and limitations in various configurations of sources and targets for the transfer step. We further analyze the influence of various features, and characterize salient indicators of propaganda.
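    A minimal sketch of the transfer step: train on a labeled source domain and evaluate on a different target domain. The TF-IDF plus logistic-regression pipeline, the file names, and the column names are illustrative assumptions, not the authors' feature set or classifiers.

        # Sketch: cross-domain propaganda classification (news -> tweets).
        import pandas as pd
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import classification_report

        source = pd.read_csv("news_sentences.csv")  # hypothetical: text, is_propaganda
        target = pd.read_csv("tweets.csv")          # hypothetical, same columns

        vec = TfidfVectorizer(ngram_range=(1, 2), min_df=3, sublinear_tf=True)
        X_src = vec.fit_transform(source["text"])   # fit vocabulary on source only
        X_tgt = vec.transform(target["text"])       # reuse it on the target domain

        clf = LogisticRegression(max_iter=1000, class_weight="balanced")
        clf.fit(X_src, source["is_propaganda"])

        # The transfer question: how well do source-trained features carry over?
        print(classification_report(target["is_propaganda"], clf.predict(X_tgt)))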

    Flow of online misinformation during the peak of the COVID-19 pandemic in Italy

    The COVID-19 pandemic has impacted every human activity and, because of the urgency of finding the proper responses to such an unprecedented emergency, it generated a widespread societal debate. The online version of this discussion was not exempt from dis/misinformation campaigns, but unlike what was witnessed in other debates, the flow of false information about COVID-19, intentional or not, put public health at severe risk, reducing the effectiveness of governments' countermeasures. In the present manuscript, we study the effective impact of misinformation in the Italian societal debate on Twitter during the pandemic, focusing on the various discursive communities. In order to extract the discursive communities, we focus on verified users, i.e. accounts whose identity is officially certified by Twitter. We thus infer the various discursive communities based on how verified users are perceived by standard ones: if two verified accounts are considered similar by unverified ones, we link them in the network of certified accounts. We first observe that, despite being a mostly scientific subject, the COVID-19 discussion shows a clear division into what turn out to be different political groups. At this point, by using a commonly available fact-checking service (NewsGuard), we assess the reputation of the pieces of news exchanged. We filter the network of retweets (i.e. users re-broadcasting the same elementary piece of information, or tweet) from random noise and check for the presence of messages displaying a URL. The impact of misinformation posts reaches 22.1% in the right and center-right wing community, and its contribution is even stronger in absolute numbers, due to the activity of this group: 96% of all non-reputable URLs shared by political groups come from this community. Comment: 25 pages, 4 figures. The Abstract, the Introduction, the Results, the Conclusions and the Methods were substantially rewritten. The plots of the network have been changed, as well as the tables.
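    A toy sketch of the community-extraction idea: link two verified accounts when the sets of unverified users engaging with them overlap strongly. Jaccard similarity with a fixed threshold stands in for the paper's statistically validated projection, and the accounts and the 0.5 cutoff are invented for illustration.

        # Sketch: project a bipartite verified/unverified network onto
        # verified accounts by audience overlap, then read off communities.
        import networkx as nx
        from itertools import combinations

        # verified account -> set of unverified users who retweeted it (toy data)
        audiences = {
            "verified_A": {"u1", "u2", "u3", "u4"},
            "verified_B": {"u2", "u3", "u4", "u5"},
            "verified_C": {"u7", "u8"},
        }

        G = nx.Graph()
        G.add_nodes_from(audiences)
        for a, b in combinations(audiences, 2):
            inter = audiences[a] & audiences[b]
            union = audiences[a] | audiences[b]
            if union and len(inter) / len(union) > 0.5:  # hypothetical threshold
                G.add_edge(a, b, weight=len(inter))

        # Discursive communities = connected components (or run community detection)
        print(list(nx.connected_components(G)))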