Controllability of Social Networks and the Strategic Use of Random Information
This work studies realistic social control strategies for social
networks based on introducing random information into the state of
selected driver agents. Deliberately exposing selected agents to random
information is a technique already employed in recommender systems and
search engines, and it represents one of the few options for influencing the
behavior of a social context that could be accepted as ethical, could be fully
disclosed to members, and involves neither force nor deception.
Our research is based on a model of knowledge diffusion applied to a
time-varying adaptive network, and considers two well-known strategies for
influencing social contexts. One selects a few influencers and manipulates
their actions in order to drive the whole network toward a certain behavior;
the other drives the network behavior by acting on the state of a large
subset of ordinary, scarcely influential users. The two approaches
have been studied in terms of network and diffusion effects. The network
effect is analyzed through the changes induced in the network's average
degree and clustering coefficient, while the diffusion effect is assessed
with two ad-hoc metrics defined to measure the degree of knowledge diffusion
and skill level, as well as the polarization of agent interests. The results,
obtained through simulations on synthetic networks, show rich dynamics and
strong effects on the communication structure and on the distribution of
knowledge and skills, supporting our hypothesis that the strategic use of
random information could represent a realistic approach to social network
controllability and that, with both strategies, the control effect could in
principle be remarkable.
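The two network-effect metrics named in the abstract (average degree and clustering coefficient) can be computed as in the short sketch below. The static random graph and its size and edge probability are illustrative stand-ins, not the time-varying adaptive model the paper actually uses.

```python
import itertools
import random

# Illustrative synthetic network as an adjacency dict (node -> set of
# neighbours). A static Erdos-Renyi-style random graph stands in for
# the paper's time-varying adaptive network.
random.seed(42)
n, p = 60, 0.1
adj = {v: set() for v in range(n)}
for u, v in itertools.combinations(range(n), 2):
    if random.random() < p:
        adj[u].add(v)
        adj[v].add(u)

def average_degree(adj):
    """Mean number of neighbours per node."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def clustering_coefficient(adj):
    """Average local clustering: for each node, the fraction of its
    neighbour pairs that are themselves connected, averaged over all
    nodes (nodes with fewer than 2 neighbours contribute 0)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a, b in itertools.combinations(nbrs, 2)
                    if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

avg_deg = average_degree(adj)
clust = clustering_coefficient(adj)
```

Tracking how these two numbers change before and after the intervention is what the abstract calls the "network effect" of a control strategy.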
Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign
Until recently, social media were seen as promoting democratic discourse on
social and political issues. However, this powerful communication platform has
come under scrutiny for allowing hostile actors to exploit online discussions
in an attempt to manipulate public opinion. A case in point is the ongoing
U.S. Congress investigation of Russian interference in the 2016 U.S. election
campaign, in which Russia is accused of using trolls (malicious accounts
created to manipulate opinion) and bots to spread misinformation and
politically biased information. In this study, we explore the effects of this
manipulation campaign, taking a closer look at users who re-shared the posts
produced on Twitter by the Russian troll accounts publicly disclosed by the
U.S. Congress investigation.
investigation. We collected a dataset with over 43 million election-related
posts shared on Twitter between September 16 and October 21, 2016, by about 5.7
million distinct users. This dataset included accounts associated with the
identified Russian trolls. We use label propagation to infer the ideology of
all users based on the news sources they shared. This method enables us to
classify a large number of users as liberal or conservative with precision and
recall above 90%. Conservatives retweeted Russian trolls about 31 times more
often than liberals and produced 36 times more tweets. Additionally, most retweets
of troll content originated from two Southern states: Tennessee and Texas.
Using state-of-the-art bot detection techniques, we estimated that about 4.9%
and 6.2% of liberal and conservative users respectively were bots. Text
analysis on the content shared by trolls reveals that they had a mostly
conservative, pro-Trump agenda. Although an ideologically broad swath of
Twitter users was exposed to Russian trolls in the period leading up to the
2016 U.S. Presidential election, it was mainly conservatives who helped
amplify their message.
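The label-propagation step described in the abstract (inferring each user's ideology from the labels of connected users) can be sketched as below. The toy graph, the seed labels, and the iteration cap are all hypothetical illustrations; the study's actual graph construction from shared news sources, and its precision/recall evaluation, are not reproduced here.

```python
from collections import Counter

# Hypothetical user-user links (e.g. users connected through shared
# news sources); the study's real graph is built differently.
edges = [
    ("u1", "u2"), ("u2", "u3"), ("u3", "u4"),
    ("u4", "u5"), ("u5", "u6"), ("u2", "u4"),
]
# Seed users whose ideology is known from the sources they share.
seeds = {"u1": "liberal", "u6": "conservative"}

# Build an undirected adjacency structure.
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

# Iteratively assign each unlabeled user the majority label among
# its labeled neighbours, keeping seed labels fixed.
labels = dict(seeds)
for _ in range(10):  # fixed cap; stops earlier once labels stabilise
    changed = False
    for node in adj:
        if node in seeds:
            continue
        votes = Counter(labels[n] for n in adj[node] if n in labels)
        if votes:
            best = votes.most_common(1)[0][0]
            if labels.get(node) != best:
                labels[node] = best
                changed = True
    if not changed:
        break
```

After convergence, every user reachable from a seed carries a liberal or conservative label, which is the kind of large-scale classification the abstract reports.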