An Exploratory Study of COVID-19 Misinformation on Twitter
During the COVID-19 pandemic, social media has become a breeding ground for
misinformation. To tackle this infodemic, scientific oversight, as well as a
better understanding by practitioners in crisis management, is needed. We have
conducted an exploratory study into the propagation, authors and content of
misinformation on Twitter around the topic of COVID-19 in order to gain early
insights. We have collected all tweets mentioned in the verdicts of
fact-checked claims related to COVID-19 by over 92 professional fact-checking
organisations between January and mid-July 2020 and share this corpus with the
community. This resulted in 1,500 tweets relating to 1,274 false and 276
partially false claims. Exploratory analysis of author accounts revealed that
verified Twitter accounts (including organisations and celebrities) are also
involved in either creating (new tweets) or spreading (retweets)
misinformation. Additionally, we found that false claims propagate faster than
partially false claims. Compared to a background corpus of COVID-19 tweets,
tweets with misinformation are more often concerned with discrediting other
information on social media. Authors use less tentative language and appear to
be more driven by concerns of potential harm to others. Our results enable us
to suggest gaps in the current scientific coverage of the topic as well as
propose actions for authorities and social media users to counter
misinformation.
Comment: 20 pages, nine figures, four tables. Submitted for peer review; under revision.
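The propagation finding above (false claims spread faster than partially false ones) can be illustrated with a minimal sketch: given retweet delays grouped by claim label, compare the median time from original tweet to retweet. The field names and data below are invented for illustration; they are not the authors' corpus or their actual analysis.

```python
# Hypothetical sketch of a propagation-speed comparison, assuming each
# retweet record carries the claim's fact-check label and the delay (in
# seconds) between the original tweet and the retweet.
from statistics import median

# (claim_label, seconds between original tweet and retweet) -- toy data
retweets = [
    ("false", 120), ("false", 300), ("false", 450),
    ("partially_false", 600), ("partially_false", 1800), ("partially_false", 900),
]

def median_delay(records, label):
    """Median retweet delay for all records with the given label."""
    return median(d for lab, d in records if lab == label)

fast = median_delay(retweets, "false")            # 300
slow = median_delay(retweets, "partially_false")  # 900
print(f"false: {fast}s, partially false: {slow}s")
```

A real analysis would use timestamps from the tweet objects and a robust statistic over thousands of cascades, but the comparison has this basic shape.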
Social media mining under the COVID-19 context: Progress, challenges, and opportunities
Social media platforms allow users worldwide to create and share information, forging vast sensing networks that
allow information on certain topics to be collected, stored, mined, and analyzed in a rapid manner. During the
COVID-19 pandemic, extensive social media mining efforts have been undertaken to tackle COVID-19 challenges
from various perspectives. This review summarizes the progress of social media data mining studies in the
COVID-19 contexts and categorizes them into six major domains, including early warning and detection, human
mobility monitoring, communication and information conveying, public attitudes and emotions, infodemic and
misinformation, and hatred and violence. We further document essential features of publicly available COVID-19-related
social media data archives that will benefit research communities in conducting replicable and reproducible studies. In addition, we discuss seven challenges in social media analytics associated with their potential
impacts on derived COVID-19 findings, followed by our visions for the possible paths forward in regard to social
media-based COVID-19 investigations. This review serves as a valuable reference that recaps social media mining
efforts in COVID-19 related studies and provides future directions along which the information harnessed from
social media can be used to address public health emergencies.
Identifying Dis/Misinformation on Social Media: A Policy Report for the Diplomacy Lab Strategies for Identifying Mis/Disinformation Project
Dis/misinformation was a major concern in the 2016 U.S. presidential election and has only worsened in recent years. Even though domestic actors often spread dis/misinformation, actors abroad can use it to spread confusion and push their agendas to the detriment of American citizens. Although this report focuses on actors outside the United States, the methods they use are universal and can be adapted to work against domestic agents. A solid understanding of these methods is the first step in combating foreign dis/misinformation campaigns and creating a new information literacy paradigm.
This report highlights the primary mechanisms of dis/misinformation: multimedia manipulation, bots, astroturfing, and trolling. These forms were selected after thorough research into the common pathways by which dis/misinformation spreads online. Multimedia manipulation covers image, video, and audio dis/misinformation in the form of deepfakes, memes, and out-of-context images. Bots are automated social media accounts that are not managed by humans and often contribute to dis/misinformation campaigns. Astroturfers and trolls use deception to sway media users into joining false grassroots campaigns and post emotionally charged content to provoke a response from users.
This policy report also presents case studies of disinformation in China, Russia, and Iran, outlining common patterns of dis/misinformation specific to these countries. These patterns will allow State Department Watch Officers to identify dis/misinformation from the outlined countries more quickly and accurately. Recommendations are also provided for each type of disinformation, including what individuals should look for and how to verify that the information they receive is accurate and comes from a reputable source. The addendum at the end of the paper collects all of the recommendations in one place so that individuals do not have to search the paper for the recommendation they need.
This report is intended to aid State Department Watch Officers as they work to identify foreign developments accurately. Still, researchers may find this information useful in anticipating future developments in foreign dis/misinformation campaigns.
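The report characterises bots by traits such as automated, high-volume posting. A toy heuristic can make that concrete: score an account as more bot-like the more such traits it exhibits. This is purely illustrative; the report does not specify a detection algorithm, and the thresholds below are assumptions, not established cut-offs.

```python
# Toy bot-likeness heuristic, illustrative only. The trait thresholds
# (posting rate, account age, default profile image) are invented
# assumptions, not the report's method or validated values.
def bot_score(tweets_per_day, account_age_days, has_default_profile_image):
    """Return a crude 0-3 score; higher means more bot-like."""
    score = 0
    if tweets_per_day > 50:           # unusually high posting rate
        score += 1
    if account_age_days < 30:         # very recently created account
        score += 1
    if has_default_profile_image:     # no customised profile
        score += 1
    return score

print(bot_score(120, 10, True))    # 3: likely automated
print(bot_score(5, 800, False))    # 0: looks organic
```

Production systems (e.g. supervised classifiers over many account features) are far more sophisticated, but they generalise this same idea of combining behavioural signals.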
Conspiracy in the Time of Corona: Automatic detection of Emerging Covid-19 Conspiracy Theories in Social Media and the News
Rumors and conspiracy theories thrive in environments of low confidence and low trust. Consequently, it is not surprising that ones related to the Covid-19 pandemic are proliferating given the lack of scientific consensus on the virus's spread and containment, or on the long-term social and economic ramifications of the pandemic. Among the stories currently circulating are ones suggesting that the 5G telecommunication network activates the virus, that the pandemic is a hoax perpetrated by a global cabal, that the virus is a bio-weapon released deliberately by the Chinese, or that Bill Gates is using it as cover to launch a broad vaccination program to facilitate a global surveillance regime. While some may be quick to dismiss these stories as having little impact on real-world behavior, recent events, including the destruction of cell phone towers, racially fueled attacks against Asian Americans, demonstrations espousing resistance to public health orders, and wide-scale defiance of scientifically sound public mandates such as those to wear masks and practice social distancing, countermand such conclusions. Inspired by narrative theory, we crawl social media sites and news reports and, through the application of automated machine-learning methods, discover the underlying narrative frameworks supporting the generation of rumors and conspiracy theories. We show how the various narrative frameworks fueling these stories rely on the alignment of otherwise disparate domains of knowledge, and consider how they attach to the broader reporting on the pandemic. These alignments and attachments, which can be monitored in near real-time, may be useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Understanding the dynamics of storytelling on social media and the narrative frameworks that provide the generative basis for these stories may also be helpful for devising methods to disrupt their spread.
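One ingredient of the narrative-framework idea above (otherwise disparate domains of knowledge becoming aligned) can be sketched by counting which entities co-occur in the same posts: pairs that recur together hint at a shared narrative frame. The posts and entities below are invented, and the paper's actual pipeline is far richer than simple pair counting.

```python
# Minimal co-occurrence sketch, illustrative only: count entity pairs
# appearing in the same post. Entity sets are toy data modeled loosely
# on the conspiracy theories named in the abstract.
from itertools import combinations
from collections import Counter

posts = [
    {"5G", "virus", "towers"},
    {"Bill Gates", "vaccine", "surveillance"},
    {"5G", "virus", "hoax"},
    {"Bill Gates", "vaccine", "virus"},
]

pair_counts = Counter()
for entities in posts:
    # Sort so each unordered pair is counted under one canonical key.
    for pair in combinations(sorted(entities), 2):
        pair_counts[pair] += 1

# Frequently recurring pairs suggest entities bound into one narrative frame.
for pair, n in pair_counts.most_common(3):
    print(pair, n)
```

In graph terms, these counts are edge weights of a co-occurrence network; community detection on that network would then group entities into candidate narrative frameworks.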