258 research outputs found
Who let the trolls out? Towards understanding state-sponsored trolls
Recent evidence links state-sponsored actors to coordinated campaigns aimed at manipulating public opinion on the Web. Campaigns revolving around major political events are enacted via mission-focused "trolls." While trolls are involved in spreading disinformation on social media, there is little understanding of how they operate, what type of content they disseminate, how their strategies evolve over time, and how they influence the Web's information ecosystem. In this paper, we begin to address this gap by analyzing 10M posts by 5.5K Twitter and Reddit users identified as Russian and Iranian state-sponsored trolls. We compare the behavior of each group of state-sponsored trolls with a focus on how their strategies change over time, the different campaigns they embark on, and differences between the trolls operated by Russia and Iran. Among other things, we find: 1) that Russian trolls were pro-Trump while Iranian trolls were anti-Trump; 2) evidence that campaigns undertaken by such actors are influenced by real-world events; and 3) that the behavior of such actors is not consistent over time, hence detection is not straightforward. Using Hawkes Processes, we quantify the influence these accounts have on pushing URLs on four platforms: Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab. In general, Russian trolls were more influential and efficient in pushing URLs to all the other platforms, with the exception of /pol/, where Iranians were more influential. Finally, we release our source code to ensure the reproducibility of our results and to encourage other researchers to work on understanding other emerging kinds of state-sponsored troll accounts on Twitter.
https://arxiv.org/pdf/1811.03130.pdf
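The Hawkes-process analysis mentioned above models each platform's event stream as self-exciting: every event (e.g., a troll account pushing a URL) temporarily raises the rate of future events, and fitted cross-platform excitation weights quantify influence. As a minimal, hypothetical sketch (not the paper's multivariate implementation), a univariate Hawkes process with an exponential kernel can be simulated via Ogata's thinning algorithm; all parameter names and values here are illustrative:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)),
    using Ogata's thinning algorithm. Returns the list of event times."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < t_max:
        # With an exponential kernel, the intensity only decays between
        # events, so the intensity at the current time is a valid upper bound.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)  # candidate inter-event wait
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        # Thinning step: accept the candidate with probability lam_t / lam_bar.
        if rng.random() <= lam_t / lam_bar:
            events.append(t)
    return events
```

The process is stationary only when the branching ratio alpha/beta is below 1; in the multivariate version used for the influence analysis, each platform has its own baseline rate and events on one platform can excite events on the others.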
Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter
State-sponsored organizations are increasingly linked to efforts to exploit
social media for information warfare and to manipulate public opinion.
Typically, their activities rely on a number of social network accounts they
control, aka trolls, that post and interact with other users disguised as
"regular" users. These accounts often use images and memes, along with textual
content, in order to increase the engagement and the credibility of their
posts.
In this paper, we present the first study of images shared by state-sponsored
accounts by analyzing a ground truth dataset of 1.8M images posted to Twitter
by accounts controlled by the Russian Internet Research Agency. First, we
analyze the content of the images as well as their posting activity. Then,
using Hawkes Processes, we quantify their influence on popular Web communities
like Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab,
with respect to the dissemination of images. We find that the extensive image
posting activity of Russian trolls coincides with real-world events (e.g., the
Unite the Right rally in Charlottesville), and shed light on their targets as
well as the content disseminated via images. Finally, we show that the trolls
were more effective in disseminating politics-related imagery than other
images.
Comment: To appear at the 14th International AAAI Conference on Web and Social
Media (ICWSM 2020). Please cite accordingly.
Characterizing and Detecting State-Sponsored Troll Activity on Social Media
The detection of state-sponsored trolls acting in information operations is
an unsolved and critical challenge for the research community, with
repercussions that go beyond the online realm. In this paper, we propose a
novel AI-based solution for the detection of state-sponsored troll accounts,
which consists of two steps. The first step aims at classifying trajectories of
accounts' online activities as belonging to either a state-sponsored troll or
to an organic user account. In the second step, we exploit the classified
trajectories to compute a metric, namely "troll score", which allows us to
quantify the extent to which an account behaves like a state-sponsored troll.
As a case study, we consider the troll accounts involved in the Russian
interference campaign during the 2016 US Presidential election, identified as
Russian trolls by the US Congress. Experimental results show that our approach
identifies accounts' trajectories with an AUC close to 99% and, accordingly,
classifies Russian trolls and organic users with an AUC of 97%. Finally, we
evaluate whether the proposed solution generalizes to different contexts
(e.g., discussions about Covid-19) and to generic misbehaving users, showing
promising results that we will expand on in future work.
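The two-step pipeline above lends itself to a small sketch. The abstract does not spell out how the "troll score" is computed, so the aggregation below (a simple mean of per-trajectory probabilities, with a 0.5 decision threshold) is a hypothetical reading; the first-step classifier is assumed to emit, for each activity trajectory of an account, a probability that it belongs to a state-sponsored troll:

```python
def troll_score(trajectory_probs):
    """Aggregate per-trajectory classifier outputs (each the probability
    that one activity trajectory belongs to a state-sponsored troll)
    into a single account-level score in [0, 1]."""
    if not trajectory_probs:
        raise ValueError("need at least one classified trajectory")
    return sum(trajectory_probs) / len(trajectory_probs)

def label_account(trajectory_probs, threshold=0.5):
    """Flag an account as troll-like if its troll score reaches a
    decision threshold (the 0.5 default is an illustrative choice)."""
    return troll_score(trajectory_probs) >= threshold
```

Averaging over trajectories, rather than classifying the whole account at once, lets the score capture accounts that behave like trolls only part of the time, which matches the paper's observation that troll behavior is not consistent over time.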
The Impact of Culture on Online Toxic Disinhibition: Trolling in India and the USA
The pervasiveness of online trolling has been attributed to the effect of online toxic disinhibition, suggesting that perpetrators behave in less socially desirable ways online than they do offline. It is possible that this disinhibition effect allows everyone to start on a level playing field online, regardless of race, gender, or nationality, but it is likewise possible that the disinhibition effect is context-dependent and sensitive to socio-cultural variations. We aim to explore whether toxic online disinhibition effects depend on national culture and gender by examining the extent of trolling towards tweets by Americans and Indians, from both genders. Content analysis of 3,000 Twitter posts reveals that significantly more trolling comments were posted on tweets by Americans than by Indians, and on tweets by women than by men. We conclude that the online disinhibition effect may exacerbate, replicate, or mediate existing socio-cultural differences, but it does not eliminate them.
The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans
A new era of Information Warfare has arrived. Various actors, including
state-sponsored ones, are weaponizing information on Online Social Networks to
run false information campaigns with targeted manipulation of public opinion on
specific topics. These false information campaigns can have dire consequences
for the public, shifting their opinions and actions, especially with respect to
critical world events like major elections. Evidently, the problem of false
information on the Web is a crucial one, and needs increased public awareness,
as well as immediate attention from law enforcement agencies, public
institutions, and in particular, the research community. In this paper, we make
a step in this direction by providing a typology of the Web's false information
ecosystem, comprising various types of false information, actors, and their
motives. We report a comprehensive overview of existing research on the false
information ecosystem by identifying several lines of work: 1) how the public
perceives false information; 2) understanding the propagation of false
information; 3) detecting and containing false information on the Web; and 4)
false information on the political stage. In this work, we pay particular
attention to political false information as: 1) it can have dire consequences
for the community (e.g., when election results are swayed) and 2) previous work
shows that this type of false information propagates faster and further when
compared to other types of false information. Finally, for each of these lines
of work, we report several future research directions that can help us better
understand and mitigate the emerging problem of false information dissemination
on the Web.
Fight Fire with Fire: Hacktivists' Take on Social Media Misinformation
In this study, we interviewed 22 prominent hacktivists to learn their take on
the increased proliferation of misinformation on social media. We found that
none of them welcomes the nefarious appropriation of trolling and memes for the
purpose of political (counter)argumentation and dissemination of propaganda.
True to the original hacker ethos, misinformation is seen as a threat to the
democratic vision of the Internet, and as such, it must be confronted head-on
with tried-and-tested hacktivist methods like deplatforming the "misinformers" and
doxing or leaking data about their funding and recruitment. The majority of the
hacktivists also recommended interventions for raising misinformation literacy
in addition to targeted hacking campaigns. We discuss the implications of these
findings relative to the emergent recasting of hacktivism in defense of a
constructive and factual social media discourse.
- …