A Dataset of Fact-Checked Images Shared on WhatsApp During the Brazilian and Indian Elections
Recently, messaging applications, such as WhatsApp, have been reportedly
abused by misinformation campaigns, especially in Brazil and India. A notable
form of abuse on WhatsApp involves the sharing of manipulated images and memes
containing all kinds of fake stories. In this work, we performed an extensive
data collection from a large set of WhatsApp publicly accessible groups and
fact-checking agency websites. This paper provides a novel dataset to the research
community containing fact-checked fake images shared through WhatsApp for two
distinct scenarios known for the spread of fake news on the platform: the 2018
Brazilian elections and the 2019 Indian elections.

Comment: 7 pages. This is a preprint version of a paper accepted at ICWSM'20.
Please consider citing the conference version instead of this one.
Characterizing Attention Cascades in WhatsApp Groups
An important political and social phenomenon discussed in several countries,
like India and Brazil, is the use of WhatsApp to spread false or misleading
content. However, little is known about the information dissemination process
in WhatsApp groups. Attention affects the dissemination of information in
WhatsApp groups, determining what topics or subjects are more attractive to
participants of a group. In this paper, we characterize and analyze how
attention propagates among the participants of a WhatsApp group. An attention
cascade begins when a user asserts a topic in a message to the group, which
could include written text, photos, or links to articles online. Others then
propagate the information by responding to it. We analyzed attention cascades
in more than 1.7 million messages posted in 120 groups over one year. Our
analysis focused on the structural and temporal evolution of attention cascades
as well as on the behavior of users that participate in them. We found specific
characteristics in cascades associated with groups that discuss political
subjects and false information. For instance, we observe that cascades with
false information tend to be deeper, reach more users, and last longer in
political groups than in non-political groups.

Comment: Accepted as a full paper at the 11th International ACM Web Science
Conference (WebSci 2019). Please cite the WebSci version.
The Impact of Social Media on Panic During the COVID-19 Pandemic in Iraqi Kurdistan: Online Questionnaire Study
Background: In the first few months of 2020, information and news reports about the coronavirus disease (COVID-19) were rapidly published and shared on social media and social networking sites. While the field of infodemiology has studied information patterns on the Web and in social media for at least 18 years, the COVID-19 pandemic has been referred to as the first social media infodemic. However, there is limited evidence about whether and how the social media infodemic has spread panic and affected the mental health of social media users.
Objective: The aim of this study is to determine how social media affects self-reported mental health and the spread of panic about COVID-19 in the Kurdistan Region of Iraq.
Methods: To carry out this study, an online questionnaire was prepared and administered in Iraqi Kurdistan, and a total of 516 social media users were sampled. The study used content analysis, and the data were analyzed with SPSS.
Results: Participants reported that social media has a significant impact on spreading fear and panic related to the COVID-19 outbreak in Iraqi Kurdistan, with a potential negative influence on people’s mental health and psychological well-being. Facebook was the most used social media network for spreading panic about the COVID-19 outbreak in Iraq. We found a significant positive correlation between self-reported social media use and the spread of panic related to COVID-19 (R=.8701). Our results also showed that the majority of young participants (aged 18-35 years) reported psychological anxiety.
Conclusions: During lockdown, people are using social media platforms to gain information about COVID-19. The impact of social media panic varies with an individual's gender, age, and level of education. Social media has played a key role in spreading anxiety about the COVID-19 outbreak in Iraqi Kurdistan.
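The reported R=.8701 is presumably a Pearson correlation coefficient between questionnaire scores for social media use and panic. As a reminder of what that statistic measures, here is a minimal sketch on hypothetical Likert-scale responses (the data below are illustrative, not the study's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical Likert responses (1-5): daily social media use vs. panic level.
use   = [1, 2, 2, 3, 4, 4, 5, 5]
panic = [1, 1, 2, 3, 3, 4, 4, 5]
print(round(pearson_r(use, panic), 3))
```

A value near +1, as in the study, indicates that respondents reporting heavier social media use also reported higher panic levels; it does not by itself establish a causal direction.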
Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign
Until recently, social media was seen to promote democratic discourse on
social and political issues. However, this powerful communication platform has
come under scrutiny for allowing hostile actors to exploit online discussions
in an attempt to manipulate public opinion. A case in point is the ongoing U.S.
Congress' investigation of Russian interference in the 2016 U.S. election
campaign, with Russia accused of using trolls (malicious accounts created to
manipulate) and bots to spread misinformation and politically biased
information. In this study, we explore the effects of this manipulation
campaign, taking a closer look at users who re-shared the posts produced on
Twitter by the Russian troll accounts publicly disclosed by U.S. Congress
investigation. We collected a dataset with over 43 million election-related
posts shared on Twitter between September 16 and October 21, 2016, by about 5.7
million distinct users. This dataset included accounts associated with the
identified Russian trolls. We use label propagation to infer the ideology of
all users based on the news sources they shared. This method enables us to
classify a large number of users as liberal or conservative with precision and
recall above 90%. Conservatives retweeted Russian trolls about 31 times more
often than liberals and produced 36 times more tweets. Additionally, most retweets
of troll content originated from two Southern states: Tennessee and Texas.
Using state-of-the-art bot detection techniques, we estimated that about 4.9%
and 6.2% of liberal and conservative users respectively were bots. Text
analysis on the content shared by trolls reveals that they had a mostly
conservative, pro-Trump agenda. Although an ideologically broad swath of
Twitter users was exposed to Russian trolls in the period leading up to the
2016 U.S. Presidential election, it was mainly conservatives who helped amplify
their message.
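The abstract describes inferring user ideology via label propagation over shared news sources. The paper's exact algorithm is not given here, so the following is a minimal sketch: seed a few source domains with ideology scores, then alternately average scores across the user-source sharing graph (all domain names and update rules below are illustrative assumptions):

```python
def propagate_ideology(user_sources, seed_scores, iters=20):
    """Simple label propagation on a bipartite user-source graph.
    user_sources: dict user -> set of news domains that user shared
    seed_scores:  dict domain -> score in [-1 (liberal), +1 (conservative)]
    Returns per-user ideology scores. A sketch, not the paper's exact method.
    """
    source_scores = dict(seed_scores)
    user_scores = {}
    for _ in range(iters):
        # users inherit the mean score of the sources they share
        for u, srcs in user_sources.items():
            known = [source_scores[s] for s in srcs if s in source_scores]
            if known:
                user_scores[u] = sum(known) / len(known)
        # unlabeled sources inherit the mean score of the users sharing them
        new_sources = {}
        for u, srcs in user_sources.items():
            for s in srcs:
                new_sources.setdefault(s, []).append(user_scores.get(u, 0.0))
        for s, vals in new_sources.items():
            if s not in seed_scores:          # keep seed labels fixed
                source_scores[s] = sum(vals) / len(vals)
    return user_scores

shares = {
    "u1": {"lib-news.example", "indie.example"},
    "u2": {"con-news.example", "indie.example"},
    "u3": {"indie.example"},
}
seeds = {"lib-news.example": -1.0, "con-news.example": +1.0}
scores = propagate_ideology(shares, seeds)
print(scores)
```

Thresholding the resulting scores (e.g. negative vs. positive) yields the binary liberal/conservative classification the abstract evaluates with precision and recall.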
Social determinants of content selection in the age of (mis)information
Despite the enthusiastic rhetoric about the so-called "collective
intelligence", conspiracy theories -- e.g. global warming induced by chemtrails
or the link between vaccines and autism -- find on the Web a natural medium for
their dissemination. Users preferentially consume information according to
their system of beliefs, and the strife between users holding opposite narratives
may result in heated debates. In this work we provide a concrete example of
information consumption from a sample of 1.2 million Italian Facebook users.
We show by means of a thorough quantitative analysis that information
supporting different worldviews -- i.e. scientific and conspiracist news -- is
consumed in a comparable way by the respective users. Moreover, we measure
the effect, on the most polarized consumers of conspiracy claims, of exposure
to 4,709 evidently false posts (satirical versions of conspiracy theses) and to
4,502 debunking memes (information aimed at countering unsubstantiated rumors).
We find that either contradicting or teasing consumers of conspiracy narratives
increases their probability of interacting again with unsubstantiated rumors.

Comment: misinformation, collective narratives, crowd dynamics, information
spreading
An Exploratory Study of COVID-19 Misinformation on Twitter
During the COVID-19 pandemic, social media has become a home ground for
misinformation. To tackle this infodemic, scientific oversight, as well as a
better understanding by practitioners in crisis management, is needed. We have
conducted an exploratory study into the propagation, authors and content of
misinformation on Twitter around the topic of COVID-19 in order to gain early
insights. We have collected all tweets mentioned in the verdicts of
fact-checked claims related to COVID-19 by over 92 professional fact-checking
organisations between January and mid-July 2020 and share this corpus with the
community. This resulted in 1,500 tweets relating to 1,274 false and 276
partially false claims. Exploratory analysis of author accounts revealed that
verified Twitter handles (including organisations and celebrities) are also
involved in either creating (new tweets) or spreading (retweets) the
misinformation. Additionally, we found that false claims propagate faster than
partially false claims. Compared to a background corpus of COVID-19 tweets,
tweets with misinformation are more often concerned with discrediting other
information on social media. Authors use less tentative language and appear to
be more driven by concerns of potential harm to others. Our results enable us
to suggest gaps in the current scientific coverage of the topic as well as
propose actions for authorities and social media users to counter
misinformation.

Comment: 20 pages, nine figures, four tables. Submitted for peer review,
revision
Network segregation in a model of misinformation and fact checking
Misinformation under the form of rumor, hoaxes, and conspiracy theories
spreads on social media at alarming rates. One hypothesis is that, since social
media are shaped by homophily, belief in misinformation may be more likely to
thrive on those social circles that are segregated from the rest of the
network. One possible antidote is fact checking which, in some cases, is known
to stop rumors from spreading further. However, fact checking may also backfire
and reinforce the belief in a hoax. Here we take into account the combination
of network segregation, finite memory and attention, and fact-checking efforts.
We consider a compartmental model of two interacting epidemic processes over a
network that is segregated between gullible and skeptic users. Extensive
simulation and mean-field analysis show that a more segregated network
facilitates the spread of a hoax only at low forgetting rates, but has no
effect when agents forget at faster rates. This finding may inform the
development of mitigation techniques and, more broadly, our understanding of
the risks of uncontrolled misinformation online.
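The abstract describes two interacting epidemic processes (hoax believing and fact checking) with finite memory. The paper's exact equations are not reproduced here; the following is a minimal mean-field sketch with assumed compartments S (susceptible), B (believers), F (fact-checkers) and illustrative parameter names for spreading, verifying, and forgetting rates:

```python
def simulate(beta=0.5, verify=0.1, forget=0.1, steps=200):
    """Minimal mean-field sketch of a hoax/fact-check compartmental model.
    S: susceptible, B: hoax believers, F: fact-checkers.
    Parameter names and update rules are illustrative assumptions,
    not the paper's exact formulation.
    """
    S, B, F = 0.99, 0.01, 0.0
    for _ in range(steps):
        infect = beta * S * (B + F)              # susceptibles meet spreaders
        to_B = infect * B / (B + F) if B + F else 0.0
        to_F = infect - to_B                     # split by spreader proportions
        check = verify * B                       # believers verify the hoax
        back = forget * (B + F)                  # finite memory: spreaders forget
        S, B, F = (S - infect + back,
                   B + to_B - check - forget * B,
                   F + to_F + check - forget * F)
    return S, B, F

S, B, F = simulate()
print(round(S, 3), round(B, 3), round(F, 3))
```

Sweeping `forget` against a segregation parameter (here absent, since this sketch has no network structure) is the kind of experiment the abstract reports: at low forgetting rates segregation matters, at high rates it does not.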