
    Online Human-Bot Interactions: Detection, Estimation, and Characterization

    Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and meta-data about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that includes both humans and bots of varying sophistication. Our models yield high accuracy, agree with each other, and can detect bots of different natures. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self-promoters, and accounts that post content from connected applications.
    Comment: Accepted paper for ICWSM'17, 10 pages, 8 figures, 1 table
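A minimal sketch of the feature-based detection idea described in the abstract above; the three signals, their thresholds, and the scoring rule are illustrative assumptions, not the paper's actual framework, which trains classifiers on more than a thousand features.

```python
def bot_score(account):
    """Return the fraction of simple bot signals an account triggers (0.0-1.0).

    The real framework feeds 1,000+ user, content, network, sentiment,
    and timing features into trained classifiers; these three hand-picked
    signals only illustrate the shape of the approach.
    """
    signals = [
        account["tweets_per_day"] > 50,                   # unusually high activity
        account["followers"] < 0.1 * account["friends"],  # few followers per friend
        account["default_profile_image"],                 # no custom avatar
    ]
    return sum(signals) / len(signals)

suspect = {"tweets_per_day": 120, "followers": 10,
           "friends": 500, "default_profile_image": True}
print(bot_score(suspect))  # 1.0: triggers all three signals
```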

    Social Bots As an Instrument of Influence in Social Networks: Typologization Problems

    Nowadays, in the field of social bot research, a new trend can be observed: a shift from technology-centered to sociology-centered interpretations. This shift creates new perspectives for sociology: the phenomenon of social bots is no longer considered merely one of the efficient manipulative technologies, but has a wider meaning, as new communicative technologies exert an informational impact on the space of social networks. The objective of this research is to assess new approaches to the established typologies of social bots (based on their fields of use, objectives, and degree of human-behavior imitation), and to examine the ambiguity and controversy of such typologies using the example of botnets operating in the VKontakte social network. Botnet identification is based on a comprehensive methodology developed by the authors, which includes frequency analysis of published messages, botnet profiling, statistical analysis of content, analysis of botnet structural organization, division of content into semantic units, formation of content clusters, content analysis within the clusters, and identification of extremes: the maximum number of unique texts published by botnets in a particular cluster over a certain period. The methodology was applied to the botnet space of the Russian online social network VKontakte in February and October 2018. The survey found that among the 10 most active botnets, three demonstrate the ambiguity and controversy of their typologization according to the following criteria: the botnet "Defrauded shareholders of LenSpetsStroy" by its field of use, the botnet "Political news in Russian and Ukrainian languages" by its objectives, and the botnet "Ksenia Sobchak" by its level of human-behavior imitation.
    The authors identify the prospects for sociological analysis of different types of bots as bot technologies used in social networks become increasingly accessible and routinized. Keywords: social bots, botnets, classification, VKontakte social network
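One step of the authors' methodology, frequency analysis of published messages, can be sketched as follows; the grouping rule and the `min_accounts` cutoff are illustrative assumptions, not the authors' exact procedure.

```python
from collections import defaultdict

def duplicate_text_groups(posts, min_accounts=3):
    """Group accounts that published identical texts.

    Clusters of many accounts posting the same message are candidate
    botnets; `posts` is a list of (account, text) pairs.
    """
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text].add(account)
    return {text: accounts
            for text, accounts in by_text.items()
            if len(accounts) >= min_accounts}

posts = [("a1", "Buy now!"), ("a2", "Buy now!"), ("a3", "Buy now!"),
         ("a4", "Hello"), ("a5", "Hello")]
groups = duplicate_text_groups(posts)  # only "Buy now!" reaches the cutoff
```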

    Arming the public with artificial intelligence to counter social bots

    The increased relevance of social media in our daily life has been accompanied by efforts to manipulate online conversations and opinions. Deceptive social bots -- automated or semi-automated accounts designed to impersonate humans -- have been successfully exploited for these kinds of abuse. Researchers have responded by developing AI tools to arm the public in the fight against social bots. Here we review the literature on different types of bots, their impact, and detection methods. We use the case study of Botometer, a popular bot detection tool developed at Indiana University, to illustrate how people interact with AI countermeasures. A user experience survey suggests that bot detection has become an integral part of the social media experience for many users. However, barriers in interpreting the output of AI tools can lead to fundamental misunderstandings. The arms race between machine learning methods used to develop sophisticated bots and effective countermeasures makes it necessary to update the training data and features of detection tools. We again use the Botometer case to illustrate both algorithmic and interpretability improvements of bot scores, designed to meet user expectations. We conclude by discussing how future AI developments may affect the fight between malicious bots and the public.
    Comment: Published in Human Behavior and Emerging Technologies
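The interpretability problem raised in the abstract above amounts to turning a continuous bot score into a label users can act on. A minimal sketch of that idea; the threshold and margin values are illustrative assumptions, not Botometer's actual score calibration.

```python
def interpret_score(score, threshold=0.5, margin=0.15):
    """Map a bot score in [0, 1] to a cautious, user-facing label.

    Scores near the threshold are reported as uncertain rather than
    forced into a binary call, one way to reduce the misinterpretation
    of raw numeric scores.
    """
    if score >= threshold + margin:
        return "likely automated"
    if score <= threshold - margin:
        return "likely human"
    return "uncertain"

print(interpret_score(0.9))   # likely automated
print(interpret_score(0.55))  # uncertain
```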