
    Influence of augmented humans in online interactions during voting events

    The advent of the digital era provided fertile ground for the development of virtual societies, complex systems influencing real-world dynamics. Understanding online human behavior and its relevance beyond the digital boundaries is still an open challenge. Here we show that online social interactions during a massive voting event can be used to build an accurate map of real-world political parties and electoral ranks. We provide evidence that information flow and collective attention are often driven by a special class of highly influential users, whom we name "augmented humans", who exploit thousands of automated agents, also known as bots, to enhance their online influence. We show that augmented humans generate deep information cascades, to the same extent as news media and other broadcasters, while they uniformly infiltrate the full range of identified groups. Digital augmentation represents the cyber-physical counterpart of the human desire to acquire power within social systems.
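
    For illustration, the depth of an information cascade can be measured with a breadth-first traversal of the retweet tree. The sketch below assumes the cascade is available as parent-to-child retweet edges; the edge list and user names are hypothetical, not data from the study.

    # Minimal sketch: measuring the depth of an information cascade,
    # assuming the cascade is given as parent -> child retweet edges.
    from collections import defaultdict, deque

    def cascade_depth(edges, root):
        """Return the maximum depth reached by a cascade rooted at `root`."""
        children = defaultdict(list)
        for parent, child in edges:
            children[parent].append(child)
        depth = 0
        queue = deque([(root, 0)])
        while queue:
            node, d = queue.popleft()
            depth = max(depth, d)
            for c in children[node]:
                queue.append((c, d + 1))
        return depth

    # Hypothetical cascade: the original post by "u0" is retweeted over two hops.
    edges = [("u0", "u1"), ("u0", "u2"), ("u1", "u3")]
    print(cascade_depth(edges, "u0"))  # -> 2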

    The role of bot squads in the political propaganda on Twitter

    Social media are nowadays the privileged channel for information spreading and news checking. Unexpectedly for most users, automated accounts, also known as social bots, contribute more and more to this process of news spreading. Using Twitter as a benchmark, we consider the traffic exchanged, over one month of observation, on a specific topic, namely the migration flux from Northern Africa to Italy. We measure the significant traffic of tweets only, by implementing an entropy-based null model that discounts the activity of users and the virality of tweets. Results show that social bots play a central role in the exchange of significant content. Indeed, not only do the strongest hubs have a higher number of bots among their followers than expected, but a group of them, which can be assigned to the same political tendency, also share a common set of bots as followers. The retweeting activity of such automated accounts amplifies the presence of the hubs' messages on the platform.
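
    As a simplified illustration of comparing an observed count against a statistical expectation (not the entropy-based null model used in the paper), the sketch below runs a hypergeometric test of whether a hub has more bot followers than chance would predict. All counts are hypothetical placeholders.

    # Illustrative sketch only: hypergeometric test for an excess of bot
    # followers around a hub. Numbers are invented for demonstration.
    from scipy.stats import hypergeom

    total_users = 100_000      # users observed in the traffic
    total_bots = 8_000         # of which flagged as bots
    hub_followers = 2_500      # followers of a given hub within the sample
    hub_bot_followers = 400    # of which are bots

    # P(X >= observed) if bots were assigned to followers uniformly at random
    p_value = hypergeom.sf(hub_bot_followers - 1, total_users, total_bots, hub_followers)
    print(f"p-value: {p_value:.3g}")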

    Stream-evolving bot detection framework using graph-based and feature-based approaches for identifying social bots on Twitter

    This dissertation focuses on the problem of evolving social bots in online social networks, particularly Twitter. Such accounts spread misinformation and inflate social network content to mislead the masses. The main objective of this dissertation is to propose a stream-based evolving bot detection framework (SEBD), constructed using both graph-based and feature-based models. It was built using Python, a real-time streaming engine (Apache Kafka version 3.2), and our pretrained model (bot multi-view graph attention network, Bot-MGAT). The feature-based model was used to identify predictive features for bot detection and to evaluate the SEBD predictions. The graph-based model was used to facilitate multi-view graph attention networks (GATs) with fellowship links to build our framework for predicting account labels from streams. A probably approximately correct learning framework was applied to confirm the accuracy and confidence levels of SEBD. The results showed that SEBD can effectively identify bots from streams and that profile features are sufficient for detecting social bots. The pretrained Bot-MGAT model uses fellowship links to reveal hidden information that can aid in identifying bot accounts. The significant contributions of this study are the development of a stream-based bot detection framework for detecting social bots based on a given hashtag, and the proposal of a hybrid approach to feature selection for identifying predictive features of bot accounts. Our findings indicate that Twitter has a higher percentage of active bots than humans within hashtags. Stream-based detection proved more effective than offline detection, achieving an accuracy score of 96.9%. Finally, semi-supervised learning (SSL) can mitigate the scarcity of labeled data in bot detection tasks.
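
    A minimal sketch of such a stream-based detection loop is given below, assuming tweets arrive on a Kafka topic as JSON and a pretrained profile-feature classifier is available. The topic name, feature keys, and model file are hypothetical placeholders and do not reproduce the SEBD/Bot-MGAT implementation.

    # Hedged sketch: consume tweets from Kafka and score each author's profile
    # features with a pretrained classifier. Requires kafka-python and a broker.
    import json
    import joblib
    from kafka import KafkaConsumer

    model = joblib.load("profile_feature_model.joblib")  # hypothetical pretrained model
    FEATURES = ["followers_count", "friends_count", "statuses_count", "account_age_days"]

    consumer = KafkaConsumer(
        "hashtag-stream",                          # hypothetical topic per tracked hashtag
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )

    for message in consumer:
        user = message.value["user"]
        x = [[user.get(f, 0) for f in FEATURES]]
        label = model.predict(x)[0]                # 1 = bot, 0 = human (assumed encoding)
        print(user.get("screen_name", "?"), "->", "bot" if label else "human")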

    A model for the Twitter sentiment curve

    Twitter is among the most used online platforms for political communication, due to the concision of its messages (particularly suitable for political slogans) and their quick diffusion. Especially when a topic stimulates users' emotions, content on Twitter spreads extremely fast, and studying tweet sentiment is therefore of utmost importance for predicting the evolution of discussions and the register of the related narratives. In this article, we present a model able to reproduce the sentiment dynamics of tweets related to specific topics and periods and to predict the sentiment of future posts from the observed past. The model is a recent variant of the Pólya urn, introduced and studied in arXiv:1906.10951 and arXiv:2010.06373, which is characterized by a "local" reinforcement, i.e. a reinforcement mechanism mainly based on the most recent observations, and by a random persistent fluctuation of the predictive mean. In particular, this latter feature is capable of capturing the trend fluctuations in the sentiment curve. While the proposed model is extremely general and may also be employed in other contexts, it has been tested on several Twitter data sets and showed better performance than the standard Pólya urn model. Moreover, the differing performance across data sets highlights different emotional sensitivities with respect to a public event.
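
    To convey the flavour of "local" reinforcement, the toy simulation below runs a two-colour urn in which each reinforcement step depends only on the most recent draws. It is not the rescaled Pólya urn of the cited papers, and all parameter values are arbitrary.

    # Toy urn with recency-based ("local") reinforcement, for illustration only.
    import random
    from collections import deque

    def simulate(n_steps=1000, window=20, reinforcement=5, seed=0):
        random.seed(seed)
        recent = deque(maxlen=window)   # most recent outcomes (1 = positive, 0 = negative)
        pos, neg = 1.0, 1.0             # initial urn composition
        trajectory = []
        for _ in range(n_steps):
            p = pos / (pos + neg)
            outcome = 1 if random.random() < p else 0
            recent.append(outcome)
            # reinforce according to the recent share of each colour
            share = sum(recent) / len(recent)
            pos += reinforcement * share
            neg += reinforcement * (1 - share)
            trajectory.append(p)
        return trajectory

    traj = simulate()
    print(f"final predictive mean: {traj[-1]:.3f}")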

    Who Falls for Online Political Manipulation?

    Social media, once hailed as a vehicle for democratization and the promotion of positive social change across the globe, are under attack for becoming a tool of political manipulation and the spread of disinformation. A case in point is the alleged use of trolls by Russia to spread malicious content in Western elections. This paper examines the Russian interference campaign in the 2016 US presidential election on Twitter. Our aim is twofold: first, we test whether predicting users who spread trolls' content is feasible, in order to gain insight into how to contain their influence in the future; second, we identify the features that are most predictive of users who either intentionally or unintentionally play a vital role in spreading this malicious content. We collected a dataset with over 43 million election-related posts shared on Twitter between September 16 and November 9, 2016, by about 5.7 million users. This dataset includes accounts associated with the Russian trolls identified by the US Congress. The proposed models identify users who spread the trolls' content very accurately (average AUC score of 96%, using 10-fold cross-validation). We show that political ideology, bot likelihood scores, and some activity-related account metadata are the most predictive features of whether a user spreads trolls' content or not.
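
    The evaluation protocol can be reproduced in outline with scikit-learn, as in the hedged sketch below: a classifier scored by 10-fold cross-validated AUC on synthetic stand-ins for the ideology, bot-likelihood, and activity features. This is not the authors' code, and the data are random placeholders.

    # Hedged sketch of 10-fold cross-validated AUC on synthetic features.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n_users = 2_000
    X = np.column_stack([
        rng.normal(size=n_users),          # political-ideology score (placeholder)
        rng.uniform(size=n_users),         # bot-likelihood score (placeholder)
        rng.poisson(50, size=n_users),     # activity metadata, e.g. statuses count
    ])
    y = rng.integers(0, 2, size=n_users)   # 1 = spread troll content (synthetic labels)

    clf = GradientBoostingClassifier()
    auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
    print(f"mean AUC over 10 folds: {auc.mean():.2f}")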

    Materials and assessment of the level of literacy for recognizing social bots in contexts of political disinformation

    The media ecosystem is constantly changing, transforming at a rhythm that educational institutions cannot keep up with. Within this media transformation, artificial intelligence (AI) has been introduced and adapted to social networks for various purposes, including political ones. This work focuses on AI in bot format as an automated tool to publish content on Twitter, and on the skills needed to identify such accounts. Bots seek to imitate human behaviour to create a particular climate of opinion, participate in political conversations, and interact with real accounts in order to boycott them or increase their relevance. In order to gauge the level of media literacy of journalism students and increase their skills in identifying this type of manipulation of the public sphere, a teaching intervention was designed at university level and applied to a sample of 55 students, consisting of a workshop on identifying bots on Twitter. The results of the workshop, evaluated through questionnaires, show that the students have prior, informally acquired skills for detecting bots and that, after the workshop, their ability to identify these accounts increases slightly and, above all, they report a more conscious process when facing them. This paper presents the design of the workshop and its evaluation.

    Extracting significant signal of news consumption from social networks: the case of Twitter in Italian political elections

    According to the Eurobarometer report on EU media use of May 2018, the number of European citizens who consult online social networks for accessing information is considerably increasing. In this work we analyse approximately one million tweets exchanged during the last Italian elections, held on March 4, 2018. Using an entropy-based null model that discounts the activity of the users, we first identify potential political alliances within the group of verified accounts: if two verified users are retweeted more than expected by the non-verified ones, they are likely to be related. Then, we derive the users' affiliation to a coalition by measuring the polarisation of unverified accounts. Finally, we study the bipartite directed representation of the tweets and retweets network, in which tweets and users are collected on the two layers. Users with the highest out-degree are the most popular ones, whereas the posts with the highest out-degree are the most "viral". We identify significant content spreaders with a procedure that statistically validates the connections that cannot be explained by users' tweeting activity and posts' virality, using an entropy-based null model as a benchmark. The analysis of the directed network of validated retweets reveals signals of the alliances formed after the elections, highlighting commonalities of interest that predate the national elections.
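
    The bipartite user-post representation described above can be sketched as follows. The edge list is hypothetical, and the actual analysis validates links against an entropy-based null model rather than relying on raw degrees; here we only build the graph and read off the degree-based proxies for activity and virality.

    # Minimal sketch: bipartite retweet graph with users on one layer and posts
    # on the other; degrees serve as simple proxies for popularity and virality.
    import networkx as nx

    retweets = [("user_a", "post_1"), ("user_a", "post_2"),
                ("user_b", "post_1"), ("user_c", "post_3")]

    B = nx.Graph()
    B.add_nodes_from({u for u, _ in retweets}, bipartite="users")
    B.add_nodes_from({p for _, p in retweets}, bipartite="posts")
    B.add_edges_from(retweets)

    user_degree = {n: d for n, d in B.degree() if B.nodes[n]["bipartite"] == "users"}
    post_degree = {n: d for n, d in B.degree() if B.nodes[n]["bipartite"] == "posts"}
    print("most active user:", max(user_degree, key=user_degree.get))
    print("most retweeted post:", max(post_degree, key=post_degree.get))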