
    Reverse Engineering Socialbot Infiltration Strategies in Twitter

    Data extracted from social networks like Twitter are increasingly being used to build applications and services that mine and summarize public reactions to events, such as traffic-monitoring platforms, identification of epidemic outbreaks, and tracking of public perception about people and brands. However, such services are vulnerable to attacks from socialbots (automated accounts that mimic real users) seeking to tamper with statistics by posting automatically generated messages and interacting with legitimate users. Potentially, if created at large scale, socialbots could be used to bias or even invalidate many existing services by infiltrating the social networks and acquiring the trust of other users over time. This study aims at understanding the infiltration strategies of socialbots on the Twitter microblogging platform. To this end, we create 120 socialbot accounts with different characteristics and strategies (e.g., the gender specified in the profile, how active they are, the method used to generate their tweets, and the group of users they interact with), and investigate the extent to which these bots are able to infiltrate the Twitter social network. Our results show that even socialbots employing simple automated mechanisms are able to successfully infiltrate the network. Additionally, using a 2^k factorial design, we quantify the infiltration effectiveness of different bot strategies. Our analysis unveils findings that are key for the design of detection and countermeasure approaches.
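
    The 2^k factorial design mentioned in the abstract can be illustrated with a minimal sketch. The two factor names and all response values below are invented for illustration; the actual study used more factors and real infiltration measurements.

```python
# Minimal sketch of a 2^k factorial analysis (here k = 2).
# Factors are coded -1/+1; responses are hypothetical infiltration
# scores (e.g., mean followers acquired per bot configuration).
factors = ["gender_specified", "high_activity"]
runs = {
    (-1, -1): 10, (+1, -1): 14,
    (-1, +1): 22, (+1, +1): 30,
}

def main_effect(idx):
    """Average response at the +1 level minus average at the -1 level."""
    hi = [y for levels, y in runs.items() if levels[idx] == +1]
    lo = [y for levels, y in runs.items() if levels[idx] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(i) for i, name in enumerate(factors)}
# With these made-up numbers, activity level has the larger main effect.
```

    With 2^k runs, each factor's main effect (and any interaction) can be estimated from the same set of measurements, which is why the design is economical for comparing bot strategies.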

    Do Social Bots Dream of Electric Sheep? A Categorisation of Social Media Bot Accounts

    So-called 'social bots' have garnered a lot of attention lately. Previous research showed that they attempted to influence political events such as the Brexit referendum and the US presidential elections. It remains, however, somewhat unclear what exactly can be understood by the term 'social bot'. This paper addresses the need to better understand the intentions of bots on social media and to develop a shared understanding of how 'social' bots differ from other types of bots. We thus describe a systematic review of publications that researched bot accounts on social media. Based on the results of this literature review, we propose a scheme for categorising bot accounts on social media sites. Our scheme groups bot accounts by two dimensions: imitation of human behaviour and intent.
    Comment: Accepted for publication in the Proceedings of the Australasian Conference on Information Systems, 201
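
    A two-dimensional scheme like the one proposed can be sketched as a small data structure. The level names below ("low"/"high", "benign"/"neutral"/"malicious") are assumptions for illustration, not the authors' exact taxonomy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BotCategory:
    imitation_of_human_behaviour: str  # assumed levels: "low" or "high"
    intent: str                        # assumed levels: "benign", "neutral", "malicious"

def categorise(imitation: str, intent: str) -> BotCategory:
    """Place a bot account in the two-dimensional scheme, rejecting unknown levels."""
    if imitation not in {"low", "high"}:
        raise ValueError(f"unknown imitation level: {imitation}")
    if intent not in {"benign", "neutral", "malicious"}:
        raise ValueError(f"unknown intent: {intent}")
    return BotCategory(imitation, intent)

# e.g., a crude spambot: little human imitation, malicious intent.
spambot = categorise("low", "malicious")
```
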

    Online Human-Bot Interactions: Detection, Estimation, and Characterization

    Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. In this work we present a framework to detect such entities on Twitter. We leverage more than a thousand features extracted from public data and metadata about users: friends, tweet content and sentiment, network patterns, and activity time series. We benchmark the classification framework using a publicly available dataset of Twitter bots. This training data is enriched by a manually annotated collection of active Twitter users that includes both humans and bots of varying sophistication. Our models yield high accuracy, agree with each other, and can detect bots of different natures. Our estimates suggest that between 9% and 15% of active Twitter accounts are bots. Characterizing ties among accounts, we observe that simple bots tend to interact with bots that exhibit more human-like behaviors. Analysis of content flows reveals retweet and mention strategies adopted by bots to interact with different target groups. Using clustering analysis, we characterize several subclasses of accounts, including spammers, self-promoters, and accounts that post content from connected applications.
    Comment: Accepted paper for ICWSM'17, 10 pages, 8 figures, 1 table
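
    The general shape of feature-based bot scoring can be sketched in a few lines. The real framework uses over a thousand features and a trained classifier; the three signals, thresholds, and field names below are invented stand-ins.

```python
def bot_score(user: dict) -> float:
    """Return a score in [0, 1]; higher suggests more bot-like behavior.

    A toy stand-in for a trained classifier: each hypothetical signal
    contributes equally, with hand-picked (illustrative) thresholds.
    """
    signals = [
        user["tweets_per_day"] > 50,               # unusually high posting rate
        user["followers"] < user["friends"] / 10,  # lopsided follow ratio
        user["default_profile_image"],             # no profile customisation
    ]
    return sum(signals) / len(signals)

# A hypothetical account tripping all three signals scores 1.0.
likely_bot = bot_score({"tweets_per_day": 120, "followers": 5,
                        "friends": 2000, "default_profile_image": True})
```

    In practice each feature family named in the abstract (content, sentiment, network, temporal) would feed many such signals into a supervised model rather than an equal-weight average.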