Coordinated Behavior on Social Media in 2019 UK General Election
Coordinated online behaviors are an essential part of information and influence operations, as they enable a more effective spread of disinformation.
Most studies of coordinated behavior have relied on manual investigation, and the few existing computational approaches make bold assumptions or oversimplify the problem to make it tractable. Here, we propose a new network-based framework
for uncovering and studying coordinated behaviors on social media. Our research
extends existing systems and goes beyond the limiting binary classification of coordinated versus uncoordinated behavior. It makes it possible to expose different coordination patterns and to estimate the degree of coordination that
characterizes diverse communities. We apply our framework to a dataset
collected during the 2019 UK General Election, detecting and characterizing
coordinated communities that participated in the electoral debate. Our work
conveys both theoretical and practical implications and provides more nuanced
and fine-grained results for studying online information manipulation.
Comment: Version accepted in Proc. AAAI Intl. Conference on Web and Social Media (ICWSM) 2021. Added dataset DOI.
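The abstract does not spell out the framework's details, but the core network-based idea can be sketched as follows: link accounts whose shared content overlaps suspiciously, then score each account by how strongly it is embedded in that network. All data, thresholds, and function names below are hypothetical illustrations, not the paper's actual pipeline.

```python
# Illustrative sketch of network-based coordination analysis.
# Accounts sharing near-identical sets of items (e.g. URLs) get linked,
# and each account receives a continuous coordination score rather than
# a binary coordinated/uncoordinated label.
from itertools import combinations

def jaccard(a, b):
    """Similarity between two users' sets of shared items."""
    return len(a & b) / len(a | b) if a | b else 0.0

def coordination_network(shares, threshold=0.5):
    """Build a weighted user-user network from co-shared content.

    shares: dict mapping user -> set of shared item ids (e.g. URLs).
    Returns edges {(u, v): weight} for pairs above the threshold.
    """
    edges = {}
    for u, v in combinations(sorted(shares), 2):
        w = jaccard(shares[u], shares[v])
        if w >= threshold:
            edges[(u, v)] = w
    return edges

def coordination_degree(edges, user):
    """Average weight of edges incident to a user: higher = more coordinated."""
    weights = [w for pair, w in edges.items() if user in pair]
    return sum(weights) / len(weights) if weights else 0.0

shares = {
    "a": {"u1", "u2", "u3"},
    "b": {"u1", "u2", "u3"},  # identical sharing pattern to "a": suspicious
    "c": {"u7"},              # independent account
}
edges = coordination_network(shares)
```

Here accounts "a" and "b" end up linked with maximal weight while "c" stays isolated; the continuous degree score is what allows communities to be ranked by how coordinated they are, instead of being flagged as simply coordinated or not.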
Coordinated amplification, coordinated inauthentic behaviour, orchestrated campaigns: A systematic literature review of coordinated inauthentic content on online social networks
The internet and online social networks have resulted in dramatic changes in the information landscape. Pessimistic views fear that networks and algorithms can limit exposure to diverse content by reinforcing users' pre-existing beliefs. In this respect, coordinated campaigns can amplify select voices above the crowd, hijacking conversations, influencing other users and manipulating content dissemination. Through a systematic literature review, this chapter locates and synthesises related research on coordinated activities to (i) describe the state of this field by identifying the patterns and trends in the conceptual and methodological approaches, topics and practices; and (ii) shed light on potentially essential gaps in the field and suggest recommendations for future research. Findings show an evolution of the approaches used to detect coordinated activities. While bot detection was the focus in the early years, more recent research has turned to advanced computational methods based on training datasets, or identifies coordinated campaigns through the timing and similarity of content. Owing to data availability, Twitter is the most studied online social network, although studies have shown that coordinated activities can be found on other platforms. We conclude by discussing the implications of current approaches and outlining an agenda for future research.
A General Language for Modeling Social Media Account Behavior
Malicious actors exploit social media to inflate stock prices, sway
elections, spread misinformation, and sow discord. To these ends, they employ
tactics that include the use of inauthentic accounts and campaigns. Methods to
detect these abuses currently rely on features specifically designed to target
suspicious behaviors. However, the effectiveness of these methods decays as
malicious behaviors evolve. To address this challenge, we propose a general
language for modeling social media account behavior. Words in this language,
called BLOC, consist of symbols drawn from distinct alphabets representing user
actions and content. The language is highly flexible and can be applied to
model a broad spectrum of legitimate and suspicious online behaviors without
extensive fine-tuning. Using BLOC to represent the behaviors of Twitter
accounts, we achieve performance comparable to or better than state-of-the-art
methods in the detection of social bots and coordinated inauthentic behavior.
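The key idea above, encoding an account's activity as strings over action and content alphabets, can be sketched minimally: map each event to a symbol, then compare accounts via character n-gram vectors. The symbols, event names, and bigram/cosine comparison below are illustrative assumptions, not BLOC's exact alphabets or the paper's classifiers.

```python
# Illustrative BLOC-style behavioral encoding (hypothetical symbols; the
# real BLOC alphabets also encode content features, pauses, etc.).
from collections import Counter
from math import sqrt

# Assumed mapping from raw event types to one-character action symbols.
ACTIONS = {"tweet": "T", "retweet": "r", "reply": "p"}

def encode(events):
    """Turn an account's event sequence into a BLOC-like action string."""
    return "".join(ACTIONS[e] for e in events)

def bigrams(s):
    """Character-bigram counts of a behavioral string."""
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def cosine(a, b):
    """Cosine similarity between two bigram count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

bot_like = encode(["retweet"] * 6)                           # "rrrrrr"
human_like = encode(["tweet", "reply", "retweet", "tweet"])  # "TprT"
sim = cosine(bigrams(bot_like), bigrams(human_like))
```

Because the representation is just a string, the same vectors feed either a supervised bot classifier or a pairwise-similarity search for coordinated accounts, which is what makes the approach general rather than tailored to one suspicious behavior.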
A Decade of Social Bot Detection
On the morning of November 9th 2016, the world woke up to the shocking
outcome of the US Presidential elections: Donald Trump was the 45th President of the United States of America, an unexpected event that still has tremendous consequences all over the world. Today, we know that a minority of social bots,
automated social media accounts mimicking humans, played a central role in
spreading divisive messages and disinformation, possibly contributing to
Trump's victory. In the aftermath of the 2016 US elections, the world started
to realize the gravity of widespread deception in social media. Following Trump's victory, we witnessed the emergence of a strident dissonance between
the multitude of efforts for detecting and removing bots, and the increasing
effects that these malicious actors seem to have on our societies. This paradox
opens a burning question: what strategies should we adopt to stop this social bot pandemic? In the run-up to the 2020 US elections, the question appears more crucial than ever. What struck social, political and economic analysts after 2016, deception and automation, has, however, been a matter of study for computer scientists since at least 2010. In this
work, we briefly survey the first decade of research in social bot detection.
Via a longitudinal analysis, we discuss the main trends of research in the
fight against bots, the major results that were achieved, and the factors that
make this never-ending battle so challenging. Capitalizing on lessons learned
from our extensive analysis, we suggest possible innovations that could give us
the upper hand against deception and manipulation. Studying a decade of
endeavours at social bot detection can also inform strategies for detecting and
mitigating the effects of other, more recent, forms of online deception, such
as strategic information operations and political trolls.
Comment: Forthcoming in Communications of the ACM.