This Just In: Fake News Packs a Lot in Title, Uses Simpler, Repetitive Content in Text Body, More Similar to Satire than Real News
The problem of fake news has gained considerable attention, as it is claimed
to have had a significant impact on the 2016 US Presidential Election. Fake
news is not a new problem, and its spread in social networks is well studied.
An underlying assumption in discussions of fake news is often that it is
written to look like real news, fooling readers who do not check the
reliability of its sources or the arguments in its content. Through a unique
study of three data sets and features that capture the style and the language
of articles, we show
that this assumption is not true. Fake news in most cases is more similar to
satire than to real news, leading us to conclude that persuasion in fake news
is achieved through heuristics rather than the strength of arguments. We show
that overall title structure and the use of proper nouns in titles are highly
significant in differentiating fake from real news. This leads us to conclude
that fake news is targeted at audiences who are unlikely to read beyond titles
and aims to create mental associations between entities and claims.
Comment: Published at The 2nd International Workshop on News and Public
Opinion at ICWSM.
Debunking in a World of Tribes
Recently, a simple military exercise on the Internet was perceived as the
beginning of a new civil war in the US. Social media aggregate people around
common interests eliciting a collective framing of narratives and worldviews.
However, the wide availability of user-provided content and the direct path
between producers and consumers of information often foster confusion about
causations, encouraging mistrust, rumors, and even conspiracy thinking. In
order to counter this trend, attempts to debunk such content are often
undertaken. Here, we examine the effectiveness of debunking through a
quantitative analysis of 54 million users over a time span of five years (Jan
2010 to Dec 2014). In particular, we compare how users interact with proven
(scientific) and unsubstantiated (conspiracy-like) information on Facebook in
the US. Our findings confirm the existence of echo chambers where users
interact primarily with either conspiracy-like or scientific pages. Both groups
interact similarly with the information within their echo chamber. We examine
47,780 debunking posts and find that attempts at debunking are largely
ineffective. For one, only a small fraction of usual consumers of
unsubstantiated information interact with the posts. Furthermore, we show that
those few are often the most committed conspiracy users and, rather than
internalizing debunking information, they often react to it negatively. Indeed,
after interacting with debunking posts, users retain, or even increase, their
engagement within the conspiracy echo chamber.
Impact of memory and bias in kinetic exchange opinion models on random networks
In this work we consider the effects of memory and bias in kinetic exchange
opinion models. We propose a model in which agents remember the sign of their
last interaction with each of their peers. This introduces memory effects
into the model, since past interactions can affect future ones. We also
consider the impact of a bias parameter that regulates how often an agent
changes its interaction to match its opinion. For high values of this
parameter, an agent is more likely to start having a negative interaction with
an agent of the opposing opinion and a positive interaction with an agent of
the same opinion. The model is defined on top of random networks with fixed
mean connectivity. We analyze the impact of both memory and bias on the
emergence of ordered and
disordered states in the population. Our results suggest a rich phenomenology
regarding critical phenomena, with the presence of metastable states and a
non-monotonic behavior of the order parameter. We show that the fraction of
neutral agents in the disordered state decreases as the bias increases.
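The dynamics described above can be sketched in a minimal simulation. Note that the abstract does not specify the exact update rule, so the three-state opinions, the per-pair memory of the interaction sign, and the bias probability `p` below are all illustrative assumptions, not the paper's model:

```python
import random
from collections import defaultdict

def clip(x):
    """Confine opinions to the three states {-1, 0, +1}."""
    return max(-1, min(1, x))

def simulate(n=1000, k=8, p=0.5, steps=20000, seed=0):
    """Toy kinetic exchange opinion model with per-pair interaction
    memory and a bias parameter p (all names are assumptions)."""
    rng = random.Random(seed)
    # Random graph with mean connectivity roughly k (edges drawn at random).
    neighbors = defaultdict(list)
    for _ in range(n * k // 2):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            neighbors[i].append(j)
            neighbors[j].append(i)
    opinion = [rng.choice((-1, 0, 1)) for _ in range(n)]
    sign = {}  # memory: sign of the last interaction for each pair
    for _ in range(steps):
        i = rng.randrange(n)
        if not neighbors[i]:
            continue
        j = rng.choice(neighbors[i])
        key = (min(i, j), max(i, j))
        mu = sign.get(key, rng.choice((-1, 1)))
        # Bias: with probability p, realign the interaction sign so it is
        # positive between agreeing agents and negative between opposing ones.
        if rng.random() < p and opinion[i] != 0 and opinion[j] != 0:
            mu = 1 if opinion[i] == opinion[j] else -1
        opinion[i] = clip(opinion[i] + mu * opinion[j])
        sign[key] = mu  # remember the sign for the next encounter
    m_abs = abs(sum(opinion)) / n  # order parameter (magnetization)
    f0 = opinion.count(0) / n      # fraction of neutral agents
    return m_abs, f0
```

Sweeping `p` and the mean connectivity in such a sketch is one way to probe the ordered/disordered transition the abstract refers to.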
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
What are the limits of automated Twitter sentiment classification? We analyze
a large set of manually labeled tweets in different languages, use them as
training data, and construct automated classification models. It turns out that
the quality of classification models depends much more on the quality and size
of training data than on the type of the model trained. Experimental results
indicate that there is no statistically significant difference between the
performance of the top classification models. We quantify the quality of
training data by applying various annotator agreement measures, and identify
the weakest points of different datasets. We show that the model performance
approaches the inter-annotator agreement when the size of the training set is
sufficiently large. However, it is crucial to regularly monitor the self- and
inter-annotator agreements since this improves the training datasets and
consequently the model performance. Finally, we show that there is strong
evidence that humans perceive the sentiment classes (negative, neutral, and
positive) as ordered.
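One standard annotator agreement measure of the kind mentioned above is Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch on made-up toy labels (not data from the paper's datasets):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(a) == len(b)
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    ca, cb = Counter(a), Counter(b)
    # Chance agreement from each annotator's label distribution.
    pe = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Toy sentiment labels from two hypothetical annotators.
ann1 = ["neg", "neu", "pos", "pos", "neu", "neg", "pos", "neu"]
ann2 = ["neg", "neu", "pos", "neu", "neu", "neg", "pos", "pos"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.619
```

Tracking this statistic over time, both between annotators and for one annotator against their own earlier labels, is the kind of self- and inter-annotator monitoring the abstract recommends.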
The role of bot squads in the political propaganda on Twitter
Social media are nowadays the privileged channel for information spreading
and news checking. Unexpectedly for most users, automated accounts, also
known as social bots, contribute more and more to this process of news
spreading. Using Twitter as a benchmark, we consider the traffic exchanged,
over one month of observation, on a specific topic, namely the migration flow
from Northern Africa to Italy. We measure only the significant tweet traffic
by implementing an entropy-based null model that discounts the activity
of users and the virality of tweets. Results show that social bots play a
central role in the exchange of significant content. Indeed, not only do the
strongest hubs have a higher number of bots among their followers than
expected, but a group of them, which can be assigned to the same political
tendency, also shares a common set of bots as followers. The retweeting
activity of such automated accounts amplifies the presence of the hubs'
messages on the platform.
Comment: Under submission.
Contextual Components of the Information Flow in Social Networks: Lessons of the COVID-19 Information Epidemic
Acknowledgments: to the scientific supervisor O. Katkov, Candidate of Historical Sciences, Associate Professor.
This article describes three types of information context of social networks, namely weak epistemological, strong normative, and strong emotional, and shows how they are related to the COVID-19 infodemic, how they manifest themselves, and what measures can be taken to prevent their use for manipulative influence.
Countering misinformation: Strategies, challenges, and uncertainties
Exemplification research has consistently shown effects of vox pop exemplars on audience judgments, whereby people tend to follow the opinion of a few fellow citizens. In this study, we gain some insight into why, and especially for whom, ordinary citizens are such influential “opinion-givers.” Importantly, we look at populist attitudes as a potential moderator of exemplification effects by comparing news reports containing vox pops with purely journalistic news reports providing the same arguments. In a web-based experiment, we show that both perceptual and persuasive effects are moderated by participants’ populist attitudes and, thus, their resonance with the “voice of the people.”