Reverse Engineering Socialbot Infiltration Strategies in Twitter
Data extracted from social networks like Twitter are increasingly being used
to build applications and services that mine and summarize public reactions to
events, such as traffic monitoring platforms, identification of epidemic
outbreaks, and public perception about people and brands. However, such
services are vulnerable to attacks from socialbots: automated accounts that
mimic real users and seek to tamper with these statistics by posting
automatically generated messages and interacting with legitimate users. If
created at large scale, socialbots could bias or even invalidate many existing
services by infiltrating social networks and acquiring the trust of other
users over time. This study aims at understanding the infiltration strategies of
socialbots in the Twitter microblogging platform. To this end, we create 120
socialbot accounts with different characteristics and strategies (e.g., gender
specified in the profile, how active they are, the method used to generate
their tweets, and the group of users they interact with), and investigate the
extent to which these bots are able to infiltrate the Twitter social network.
Our results show that even socialbots employing simple automated mechanisms are
able to successfully infiltrate the network. Additionally, using a
factorial design, we quantify the infiltration effectiveness of different bot
strategies. Our analysis unveils findings that are key to the design of
detection and countermeasure approaches.
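The factorial analysis described above can be sketched in a few lines. This is an illustrative toy only: the four factor names loosely mirror the bot characteristics listed in the abstract (profile gender, activity level, tweet-generation method, target group), and the outcome scores are synthetic, not the paper's data. The main effect of a factor is the average infiltration score at its high level minus the average at its low level.

```python
import itertools

# Hypothetical binary factors mirroring the bot characteristics in the abstract.
FACTORS = ["female_profile", "high_activity", "reposting_tweets", "target_active_users"]

def main_effect(results, factor_index):
    """Average score when the factor is on, minus the average when it is off."""
    high = [score for levels, score in results if levels[factor_index] == 1]
    low = [score for levels, score in results if levels[factor_index] == 0]
    return sum(high) / len(high) - sum(low) / len(low)

# Full 2^4 design: one synthetic infiltration score (e.g., followers acquired)
# per bot configuration; the linear response below is entirely made up.
results = []
for levels in itertools.product([0, 1], repeat=len(FACTORS)):
    score = 10 + 1 * levels[0] + 5 * levels[1] + 3 * levels[2]
    results.append((levels, score))

effects = {name: main_effect(results, i) for i, name in enumerate(FACTORS)}
```

In a full factorial design every combination of levels is run, so each main effect is estimated from a balanced split of all 16 configurations, which is what makes the subtraction above a clean estimate.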
Hate is not Binary: Studying Abusive Behavior of #GamerGate on Twitter
Over the past few years, online bullying and aggression have become
increasingly prominent, and manifested in many different forms on social media.
However, there is little work analyzing the characteristics of abusive users
and what distinguishes them from typical social media users. In this paper, we
start addressing this gap by analyzing tweets containing a large amount of
abusive content. We focus on a Twitter dataset revolving around the Gamergate
controversy, which led to many incidents of cyberbullying and cyberaggression
on various gaming and social media platforms. We study the properties of the
users tweeting about Gamergate, the content they post, and the differences in
their behavior compared to typical Twitter users.
We find that while their tweets are often seemingly about aggressive and
hateful subjects, "Gamergaters" do not exhibit common expressions of online
anger, and in fact primarily differ from typical users in that their tweets are
less joyful. They are also more engaged than typical Twitter users, which is an
indication as to how and why this controversy is still ongoing. Surprisingly,
we find that Gamergaters are less likely to be suspended by Twitter; we
therefore analyze the properties of suspended accounts to identify how they
differ from typical users and what may have led to their suspension. We
perform an unsupervised machine learning
analysis to detect clusters of users who, though currently active, could be
considered for suspension since they exhibit behaviors similar to those of
suspended users. Finally, we confirm the usefulness of our analyzed features by emulating
the Twitter suspension mechanism with a supervised learning method, achieving
very good precision and recall.
Comment: In 28th ACM Conference on Hypertext and Social Media (ACM HyperText 2017).
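The evaluation step of the abstract above, emulating the suspension mechanism with a supervised model and reporting precision and recall, can be illustrated with a minimal sketch. The threshold rule and the synthetic users below are stand-ins for the paper's actual classifier and features, which the abstract does not specify.

```python
# Illustrative only: a toy threshold rule stands in for the supervised model.
def precision_recall(predicted, actual):
    """Precision and recall of boolean predictions against boolean labels."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Synthetic users: (abusiveness_score, was_suspended); values are made up.
users = [(0.9, True), (0.8, True), (0.7, False), (0.2, False), (0.1, False), (0.6, True)]
predicted = [score >= 0.5 for score, _ in users]       # toy "classifier"
actual = [suspended for _, suspended in users]
precision, recall = precision_recall(predicted, actual)
```

Precision answers "of the accounts we would suspend, how many were actually suspended?", while recall answers "of the actually suspended accounts, how many did we catch?"; reporting both guards against a trivial classifier that flags everyone.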
Unsupervised detection of coordinated fake-follower campaigns on social media
Automated social media accounts, known as bots, are increasingly recognized
as key tools for manipulative online activities. These activities often stem
from coordination among several accounts, and such automated campaigns can
manipulate social network structure by following other accounts, amplifying
their content, and posting messages that spam online discourse. In this study, we
present a novel unsupervised detection method that targets a specific
category of malicious accounts created to manipulate user metrics such as
online popularity. Our framework identifies anomalous following patterns among
all the followers of a social media account. Through the analysis of a large
number of accounts on the Twitter platform (rebranded as X after its
acquisition by Elon Musk), we demonstrate that irregular following patterns are
prevalent and are indicative of automated fake accounts. Notably, we find that
these detected groups of anomalous followers exhibit consistent behavior across
multiple accounts. This observation, combined with the computational efficiency
of our proposed approach, makes it a valuable tool for investigating
large-scale coordinated manipulation campaigns on social media platforms.
Comment: 17 pages, 5 figures, 1 table, and supplementary information.
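The core intuition behind detecting anomalous following patterns, that coordinated fake followers tend to follow nearly identical sets of accounts, can be sketched as follows. This is not the paper's exact algorithm: the greedy grouping, the Jaccard threshold of 0.8, and the sample accounts are all assumptions made for illustration.

```python
# Minimal sketch (not the paper's algorithm): flag followers whose sets of
# followed accounts overlap almost completely, a signature of coordination.
def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def anomalous_groups(followers, threshold=0.8):
    """Greedily cluster followers whose following sets overlap above threshold."""
    groups = []
    for name, follows in followers.items():
        for group in groups:
            rep = followers[group[0]]  # compare against the group's first member
            if jaccard(follows, rep) >= threshold:
                group.append(name)
                break
        else:
            groups.append([name])
    return [g for g in groups if len(g) > 1]  # singletons are unremarkable

# Hypothetical accounts mapped to the sets of accounts they follow.
followers = {
    "bot_a": {"x", "y", "z", "w"},
    "bot_b": {"x", "y", "z", "w"},
    "bot_c": {"x", "y", "z", "v"},
    "human": {"x", "q", "r"},
}
suspicious = anomalous_groups(followers)  # groups of near-identical followers
```

The single pass over followers keeps the sketch linear in the number of groups per account, echoing the computational efficiency the abstract highlights, though a production system would need a scalable similarity search rather than pairwise comparison against group representatives.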