17,248 research outputs found

    The Elon Musk Paradox: Quantifying the Presence and Impact of Twitter Bots on Altmetrics with Focus in Social Sciences

    With the rise of Twitter bots in social and political spheres, their implications for scientific communication and altmetrics have become a concern. However, no large-scale studies have identified the population of bots or their impact on altmetrics. This quantitative study analyses the presence and impact of Twitter bots in the dissemination of Social Science papers on Twitter, with Information Science & Library Science (ISLS) as a case study. Overall, bots discussing Social Science papers account for 3.61% of users and 3.85% of tweets. This presence and impact is uneven across disciplines, however, with Criminology & Penology standing out at 12.4% of mentions made by bots. In the specific case of ISLS, Kendall's correlation analysis shows that bot mentions have no impact on altmetrics. Full paper available at: https://dapp.orvium.io/deposits/644235015db3c5af25159230/vie
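The null result for ISLS rests on Kendall's rank correlation between bot mentions and altmetric scores. As a minimal illustration (not the authors' code; the per-paper counts below are invented placeholders), the tau-a statistic can be computed directly from concordant and discordant pairs:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    concordant = discordant = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            concordant += 1   # pair ranked the same way in both variables
        elif s < 0:
            discordant += 1   # pair ranked in opposite directions
    n_pairs = len(x) * (len(x) - 1) // 2
    return (concordant - discordant) / n_pairs

bot_mentions = [0, 2, 1, 0, 5, 3]          # bot tweets per paper (hypothetical)
altmetric_scores = [9, 4, 2, 15, 7, 3]     # attention scores (hypothetical)
print(kendall_tau(bot_mentions, altmetric_scores))
```

A tau near zero (with a non-significant p-value in the full analysis) is what underpins the "no impact" conclusion; in practice a library routine such as `scipy.stats.kendalltau` also handles ties and reports significance.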

    ‘Conspiracy Machines’ - The Role of Social Bots during the COVID-19 ‘Infodemic’

    The omnipresent COVID-19 pandemic gave rise to a parallel spread of misinformation, also referred to as an ‘Infodemic’. Consequently, social media have become targets for the application of social bots, that is, algorithms that mimic human behaviour. Their ability to exert influence on social media can be exploited to amplify misinformation, rumours, or conspiracy theories that may harm society and the management of the pandemic. By applying social bot detection and content analysis techniques, this study aims to determine the extent to which social bots interfere with COVID-19 discussions on Twitter. A total of 78 presumptive bots were detected within a sample of 542,345 users. The analysis revealed that bot-like users who disseminate misinformation also intersperse news from renowned sources. The findings provide implications for improved bot detection and for managing potential threats posed by social bots during ongoing and future crises.

    A Comparative Analysis of Facebook and Twitter Bots

    The increasing sophistication of machine learning and artificial intelligence has engendered the creation of automated programs called 'bots'. Bots are created for varied reasons, and it has become imperative to evaluate their impact on the social media ecosystem and, consequently, on our daily lives. However, despite the ubiquity of bots, very little research has compared their trends and impacts across platforms. To address this gap, we perform a comparative analysis of Facebook and Twitter bots in terms of their popularity and impact on society. This paper sets the foundation for subsequent, more detailed studies of this subject area. Analyzing trends of these emerging technologies can provide insight into their importance and roles in everyday life. We provide a brief background on the subject, covering types of social bots and their utility, bot detection techniques, and the impact bots have had on society. We then use the IBM Watson cognitive search and content analytics engine to examine the public perception of these bots, and Google query volumes to investigate trends in search terms related to Facebook and Twitter bots. Our findings suggest a slightly higher public acceptance of Facebook bots compared to Twitter bots. Furthermore, the use of bots on Online Social Networks (OSNs) is on the rise. Originally, bots were developed as tools for driving user engagement on social media platforms; today, however, they are increasingly used to convey mis/disinformation and political propaganda.

    Promotional Campaigns in the Era of Social Platforms

    The rise of social media has made it easier for information to reach millions of users. While some users connect with friends and organically share information and opinions on social media, others have exploited these platforms to gain influence and profit through promotional campaigns and advertising. Such campaigns contribute to the spread of misleading information, spam, and fake news; they thus undermine the trustworthiness and reliability of social media, rendering it a crowd-advertising platform. This dissertation studies the existence of promotional campaigns in social media and explores the different ways users and bots (i.e., automated accounts) engage in such campaigns. We design a suite of detection, ranking, and mining techniques. We study user-generated reviews in online e-commerce sites, such as Google Play, to extract campaigns. We identify cooperating sets of bots, classify their interactions in social networks such as Twitter, and rank the bots by the degree of their malevolence. Our study shows that modern online social interactions are largely modulated by promotional campaigns, including political campaigns, advertisement campaigns, and incentive-driven campaigns. We measure how these campaigns can potentially impact the information consumption of millions of social media users.

    Detecting Bots Using a Hybrid Approach

    Artificial intelligence (AI) remains crucial to improving modern life, but it also raises several social and ethical issues. One issue of major concern, investigated in this research, is the amount of content users consume that is generated by a form of AI known as bots (automated software programs). With the rise of social bots and the spread of fake news, more research is required to understand how much bot-generated content is being consumed. While research continues to uncover the extent to which social media platforms are used as terrain for spreading information and misinformation, it remains difficult to distinguish between social bots and humans that spread misinformation. Since online platforms have become a centre for spreading fake information, often accelerated by bots, this research examines the amount of bot-generated COVID-19 content on Twitter. A hybrid approach is presented to detect bots using a COVID-19 dataset of 71,908 tweets collected between January 22nd, 2020 and April 2020, when the total reported cases of COVID-19 were below 600 globally. Three experiments were conducted using user account features, topic analysis, and sentiment features to detect bots and misinformation relating to the COVID-19 pandemic. Using the Weka machine learning tool, Experiment I investigates the optimal algorithms for detecting bots on Twitter. We used 10-fold cross-validation to test prediction accuracy on two labelled datasets, each containing a different set (category 1 and category 2) of four features.
Results from Experiment I show that category 1 features (favorite count, listed count, name length, and number of tweets) combined with the random forest algorithm produced the best prediction accuracy, outperforming the category 2 features (follower count, following count, length of screen name, and description length). The best feature was listed count, followed by favorite count. It was also observed that using category 2 features on the two labelled datasets produced the same prediction accuracy (100%) when tree-based classifiers were used. To further investigate the validity of these features, in Experiment II each labelled dataset from Experiment I was used as a training sample to classify two different labelled datasets. Results show that category 1 features generated 94% prediction accuracy, compared to 60% for category 2 features, using the random forest algorithm. Experiment III applies the results from Experiments I and II to classify 39,091 accounts that posted coronavirus-related content. Using the random forest algorithm and the features identified in Experiments I and II, our classification framework detected 5,867 of the 39,091 accounts (15%) as bots and 33,224 (85%) as humans. Further analysis revealed that bot accounts generated 30% (1,949/6,446) of coronavirus misinformation, compared with 70% created by human accounts. Closer examination showed that about 30% of the misinformation created by humans consisted of retweets of bot content. In addition, the results suggest that bot accounts posted on fewer topics than humans, and that bots generated more negative sentiment than humans on COVID-19-related issues. Consequently, topic distribution and sentiment may further improve the ability to distinguish between bot and human accounts.
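As a hedged sketch of the feature stage described above (the study itself used Weka; the field names follow the Twitter v1.1 user schema and the sample account is invented), extracting the winning category 1 feature vector for one account might look like:

```python
def category1_features(user: dict) -> list:
    """Build the category 1 feature vector for one account:
    favorite count, listed count, name length, number of tweets."""
    return [
        user["favourites_count"],   # favorite count
        user["listed_count"],       # listed count (the strongest feature)
        len(user["name"]),          # name length
        user["statuses_count"],     # number of tweets
    ]

# Invented example account, shaped like a Twitter v1.1 user object.
sample_user = {
    "name": "News Aggregator",
    "favourites_count": 2,
    "listed_count": 340,
    "statuses_count": 98000,
}
print(category1_features(sample_user))  # [2, 340, 15, 98000]
```

Vectors of this form would then be fed to a random forest classifier under 10-fold cross-validation, as in Experiment I.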

    Even good bots fight: the case of Wikipedia

    In recent years, there has been a huge increase in the number of bots online, varying from Web crawlers for search engines, to chatbots for online customer service, spambots on social media, and content-editing bots in online collaboration communities. The online world has turned into an ecosystem of bots. However, our knowledge of how these automated agents interact with each other is rather poor. Bots are predictable automatons that lack the capacity for emotions, meaning-making, creativity, and sociality, so it is natural to expect interactions between bots to be relatively predictable and uneventful. In this article, we analyze the interactions between bots that edit articles on Wikipedia. We track the extent to which bots undid each other’s edits over the period 2001–2010, model how pairs of bots interact over time, and identify different types of interaction trajectories. We find that, although Wikipedia bots are intended to support the encyclopedia, they often undo each other’s edits, and these sterile “fights” may sometimes continue for years. Unlike humans on Wikipedia, bots’ interactions tend to occur over longer periods of time and to be more reciprocated. Yet, just like humans, bots in different cultural environments may behave differently. Our research suggests that even relatively “dumb” bots may give rise to complex interactions, and this carries important implications for Artificial Intelligence research. Understanding what affects bot-bot interactions is crucial for managing social media well, providing adequate cyber-security, and designing well-functioning autonomous vehicles.
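The revert-tracking step can be illustrated with a toy example (not the authors' code; the bot names and revert records below are invented): counting how often each pair of bots undid one another, which is the raw material for identifying reciprocated "fights":

```python
from collections import Counter

# Each record: (reverting bot, bot whose edit was undone). Invented data.
reverts = [
    ("XqBot", "DarknessBot"),
    ("DarknessBot", "XqBot"),
    ("XqBot", "DarknessBot"),
    ("TaxonBot", "LinkBot"),
]

# Treat each revert as an undirected pair so mutual undoing accumulates
# on the same key, regardless of which bot acted.
pair_counts = Counter(frozenset(pair) for pair in reverts)

for pair, n in pair_counts.items():
    a, b = sorted(pair)
    print(f"{a} <-> {b}: {n} reverts")
```

Pairs with high counts spread over long time spans are the candidate long-running bot-bot conflicts the article analyzes.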

    On the influence of social bots in online protests. Preliminary findings of a Mexican case study

    Social bots can affect online communication among humans. We study this phenomenon by focusing on #YaMeCanse, the most active protest hashtag in the history of Twitter in Mexico. Accounts using the hashtag were classified using the BotOrNot bot detection tool. Our preliminary analysis suggests that bots played a critical role in disrupting online communication about the protest movement.
