The paradigm-shift of social spambots: Evidence, theories, and tools for the arms race
Recent studies in social media spam and automation provide anecdotal
argumentation of the rise of a new generation of spambots, so-called social
spambots. Here, for the first time, we extensively study this novel phenomenon
on Twitter and we provide quantitative evidence that a paradigm-shift exists in
spambot design. First, we measure Twitter's current capabilities of detecting
the new social spambots. Next, we assess human performance in
discriminating between genuine accounts, social spambots, and traditional
spambots. Then, we benchmark several state-of-the-art techniques proposed by
the academic literature. Results show that neither Twitter, nor humans, nor
cutting-edge applications are currently capable of accurately detecting the new
social spambots. Our results call for new approaches capable of turning the
tide in the fight against this rising phenomenon. We conclude by reviewing the
latest literature on spambot detection and highlight an emerging common
research trend based on the analysis of collective behaviors. Insights derived
from both our extensive experimental campaign and survey shed light on the most
promising directions of research and lay the foundations for the arms race
against the novel social spambots. Finally, to foster research on this novel
phenomenon, we make publicly available to the scientific community all the
datasets used in this study.
Comment: To appear in Proc. 26th WWW, 2017, Companion Volume (Web Science
Track, Perth, Australia, 3-7 April 2017).
Online Misinformation: Challenges and Future Directions
Misinformation has become a common part of our digital media environments, and it is compromising the ability of our societies to form informed opinions. It generates misperceptions, which have affected decision-making processes in many domains, including the economy, health, the environment, and elections, among others. Misinformation and its generation, propagation, impact, and management are being studied through a variety of lenses (computer science, social science, journalism, psychology, etc.), since it widely affects multiple aspects of society. In this paper we analyse the phenomenon of misinformation from a technological point of view. We study the current socio-technical advancements towards addressing the problem, identify some of the key limitations of current technologies, and propose some ideas to target such limitations. The goal of this position paper is to reflect on the current state of the art and to stimulate discussion on the future design and development of algorithms, methodologies, and applications.
Analyzing Activity and Suspension Patterns of Twitter Bots Attacking Turkish Twitter Trends by a Longitudinal Dataset
Twitter bots amplify target content in a coordinated manner to make them
appear popular, which is an astroturfing attack. Such attacks promote certain
keywords to push them to Twitter trends to make them visible to a broader
audience. Past work on such fake trends revealed a new astroturfing attack
named ephemeral astroturfing that employs a distinctive bot behavior in which
bots post and delete generated tweets in a coordinated manner. As such, it is
easy to mass-annotate such bots reliably, making them a convenient source of
ground truth for bot research. In this paper, we detect and disclose over
212,000 such bots targeting Turkish trends, which we name astrobots. We also
analyze their activity and suspension patterns. We found that Twitter purged
those bots en masse six times since June 2018. However, the adversaries reacted
quickly and deployed new bots that were created years ago. We also found that
many such bots do not post tweets apart from promoting fake trends, which makes
it challenging for bot detection methods to detect them. Our work provides
insights into platforms' content moderation practices and bot detection
research. The dataset is publicly available at
https://github.com/tugrulz/EphemeralAstroturfing.
Comment: Accepted to Cyber Social Threats (CySoc) 2023, co-located with
WebConf2
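The post-and-delete pattern described above lends itself to a simple heuristic. Below is a minimal sketch of that idea; the field names, thresholds, and coordination cutoff are illustrative assumptions, not the paper's actual detection method:

```python
def flag_astrobots(events, keyword, delete_window=60, min_coordinated=5):
    """Flag accounts that post tweets containing `keyword` and delete
    them within `delete_window` seconds, reporting them only when at
    least `min_coordinated` accounts show the same post-and-delete
    pattern (i.e., the behavior looks coordinated).

    `events`: list of dicts with keys "account", "text",
    "posted_at", "deleted_at" (epoch seconds; None if never deleted).
    """
    suspects = set()
    for e in events:
        if keyword in e["text"] and e["deleted_at"] is not None:
            if e["deleted_at"] - e["posted_at"] <= delete_window:
                suspects.add(e["account"])
    # a lone account deleting a tweet quickly is not evidence of an attack
    return suspects if len(suspects) >= min_coordinated else set()
```

The coordination threshold is what separates this from ordinary tweet deletion: only a batch of accounts exhibiting the same rapid post-and-delete behavior around one keyword is flagged.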
Fame for sale: efficient detection of fake Twitter followers
Fake followers are those Twitter accounts specifically created to
inflate the number of followers of a target account. Fake followers are
dangerous for the social platform and beyond, since they may alter concepts
like popularity and influence in the Twittersphere, hence impacting the
economy, politics, and society. In this paper, we contribute along different
dimensions. First, we review some of the most relevant existing features and
rules (proposed by Academia and Media) for anomalous Twitter accounts
detection. Second, we create a baseline dataset of verified human and fake
follower accounts. This baseline dataset is publicly available to the
scientific community. Then, we exploit the baseline dataset to train a set of
machine-learning classifiers built over the reviewed rules and features. Our
results show that most of the rules proposed by Media provide unsatisfactory
performance in revealing fake followers, while features proposed in the past by
Academia for spam detection provide good results. Building on the most
promising features, we revise the classifiers both in terms of reduction of
overfitting and cost for gathering the data needed to compute the features. The
final result is a novel classifier, general enough to thwart
overfitting, lightweight thanks to the usage of the less costly features, and
still able to correctly classify more than 95% of the accounts of the original
training set. We ultimately perform an information fusion-based sensitivity
analysis, to assess the global sensitivity of each of the features employed by
the classifier. The findings reported in this paper, other than being supported
by a thorough experimental methodology and interesting on their own, also pave
the way for further investigation of the novel issue of fake Twitter followers.
A Survey on Computational Propaganda Detection
Propaganda campaigns aim at influencing people's mindset with the purpose of
advancing a specific agenda. They exploit the anonymity of the Internet, the
micro-profiling ability of social networks, and the ease of automatically
creating and managing coordinated networks of accounts, to reach millions of
social network users with persuasive messages, specifically targeted to topics
each individual user is sensitive to, and ultimately influencing the outcome on
a targeted issue. In this survey, we review the state of the art on
computational propaganda detection from the perspective of Natural Language
Processing and Network Analysis, arguing about the need for combined efforts
between these communities. We further discuss current challenges and future
research directions.
Comment: propaganda detection, disinformation, misinformation, fake news,
media bias
RTbust: Exploiting Temporal Patterns for Botnet Detection on Twitter
Within online social networks (OSNs), many of our supposed online friends may instead be fake
accounts called social bots, part of large groups that purposely re-share
targeted content. Here, we study retweeting behaviors on Twitter, with the
ultimate goal of detecting retweeting social bots. We collect a dataset of 10M
retweets. We design a novel visualization that we leverage to highlight benign
and malicious patterns of retweeting activity. In this way, we uncover a
'normal' retweeting pattern that is peculiar to human-operated accounts, and 3
suspicious patterns related to bot activities. Then, we propose a bot detection
technique that stems from the previous exploration of retweeting behaviors. Our
technique, called Retweet-Buster (RTbust), leverages unsupervised feature
extraction and clustering. An LSTM autoencoder converts the retweet time series
into compact and informative latent feature vectors, which are then clustered
with a hierarchical density-based algorithm. Accounts belonging to large
clusters characterized by malicious retweeting patterns are labeled as bots.
RTbust obtains excellent detection results, with F1 = 0.87, whereas competitors
achieve F1 < 0.76. Finally, we apply RTbust to a large dataset of retweets,
uncovering 2 previously unknown active botnets with hundreds of accounts.
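The RTbust pipeline (compress each retweet time series into a latent vector, then density-cluster the vectors and label large clusters as bots) can be sketched with deliberately simplified stand-ins: hand-crafted gap statistics in place of the LSTM-autoencoder latents, and a greedy grouping in place of hierarchical density-based clustering. Everything below is an illustrative approximation, not the paper's implementation:

```python
from statistics import mean, pstdev

def retweet_features(timestamps):
    """Compress a retweet time series into a tiny feature vector
    (stand-in for RTbust's LSTM-autoencoder latent vectors):
    mean and std of the inter-retweet gaps."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return (mean(gaps), pstdev(gaps))

def cluster_accounts(features, tol=1.0, min_cluster=3):
    """Greedy grouping (stand-in for hierarchical density-based
    clustering): vectors within `tol` per dimension of a cluster
    centroid join it; clusters of >= min_cluster accounts are
    reported as suspected bot groups."""
    clusters = []
    for name, vec in features.items():
        for c in clusters:
            if all(abs(a - b) <= tol for a, b in zip(vec, c["centroid"])):
                c["members"].append(name)
                break
        else:
            clusters.append({"centroid": vec, "members": [name]})
    return [c["members"] for c in clusters if len(c["members"]) >= min_cluster]
```

The intuition carries over from the paper: automated retweeters produce nearly identical, low-variance timing signatures that collapse into one dense cluster, while human timing is irregular enough to leave each account isolated.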
Discovery of the Twitter Bursty Botnet
Many Twitter users are bots. They can be used for spamming, opinion manipulation and online fraud. Recently, we discovered the Star Wars botnet, consisting of more than 350,000 bots tweeting random quotations exclusively from Star Wars novels. The bots were exposed because they tweeted uniformly from any location within two rectangle-shaped geographic zones covering Europe and the USA, including sea and desert areas in the zones. In this chapter, we report another unusual behaviour of the Star Wars bots: the bots were created in bursts or batches, and they only tweeted in their first few minutes after creation. Inspired by this observation, we discovered an even larger Twitter botnet, the Bursty botnet, with more than 500,000 bots. Our preliminary study showed that the Bursty botnet was directly responsible for a large-scale online spamming attack in 2012. Most bot detection algorithms have been based on assumptions of "common" features that were supposedly shared by all bots. Our discovered botnets, however, do not show many of those features; instead, they were detected by their distinct, unusual tweeting behaviours that were unknown until now.
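The two signals the chapter describes, batch creation and tweeting only in the first few minutes of an account's life, combine naturally into a heuristic. A minimal sketch; all field names and parameter values are illustrative assumptions:

```python
from collections import Counter

def burst_created_bots(accounts, active_minutes=5, min_batch=100,
                       batch_window=3600):
    """Flag accounts that only tweeted within their first few minutes
    of life AND were created in large bursts, in the spirit of the
    Bursty-botnet observation.

    `accounts`: list of dicts with "created_at" and "last_tweet_at"
    (epoch seconds).
    """
    # keep accounts whose entire tweeting activity fits in the first minutes
    dormant = [a for a in accounts
               if a["last_tweet_at"] - a["created_at"] <= active_minutes * 60]
    # bucket the dormant accounts by creation time to spot batch creation
    buckets = Counter(a["created_at"] // batch_window for a in dormant)
    return [a for a in dormant
            if buckets[a["created_at"] // batch_window] >= min_batch]
```

Note that neither signal alone suffices: abandoned human accounts also go quiet early, and legitimate sign-up spikes happen, so only their intersection is flagged.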