
    An Exploratory Study of COVID-19 Misinformation on Twitter

    During the COVID-19 pandemic, social media has become a breeding ground for misinformation. To tackle this infodemic, scientific oversight, as well as a better understanding by practitioners in crisis management, is needed. We have conducted an exploratory study into the propagation, authors and content of misinformation on Twitter around the topic of COVID-19 in order to gain early insights. We have collected all tweets mentioned in the verdicts of fact-checked claims related to COVID-19 by over 92 professional fact-checking organisations between January and mid-July 2020 and share this corpus with the community. This resulted in 1 500 tweets relating to 1 274 false and 276 partially false claims. Exploratory analysis of author accounts revealed that verified Twitter handles (including organisations and celebrities) are also involved in either creating (new tweets) or spreading (retweets) misinformation. Additionally, we found that false claims propagate faster than partially false claims. Compared to a background corpus of COVID-19 tweets, tweets containing misinformation are more often concerned with discrediting other information on social media. Their authors use less tentative language and appear to be more driven by concerns about potential harm to others. Our results enable us to suggest gaps in the current scientific coverage of the topic as well as to propose actions for authorities and social media users to counter misinformation. Comment: 20 pages, nine figures, four tables. Submitted for peer review, revision.
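    The abstract describes the shared corpus and the reported finding that false claims propagate faster than partially false ones, but not the analysis itself. The sketch below shows one way such a comparison could be approximated, assuming a hypothetical CSV export with columns tweet_id, claim_label, created_at, and retweet_count; the file name and column names are illustrative assumptions, not part of the published corpus.

        # Minimal sketch (assumed schema, not the authors' code): compare a crude
        # propagation proxy between false and partially false fact-checked claims.
        import pandas as pd

        # Load the hypothetical corpus export with parsed timestamps.
        tweets = pd.read_csv("covid19_factchecked_tweets.csv", parse_dates=["created_at"])

        # Keep only the two verdict classes discussed in the abstract.
        tweets = tweets[tweets["claim_label"].isin(["false", "partially false"])]

        # Propagation proxy: retweets accumulated per hour since posting.
        latest = tweets["created_at"].max()
        tweets["age_hours"] = (latest - tweets["created_at"]).dt.total_seconds() / 3600
        tweets["retweets_per_hour"] = tweets["retweet_count"] / tweets["age_hours"].clip(lower=1)

        # Summarise the proxy per verdict class; the paper reports false claims
        # spreading faster than partially false ones.
        print(tweets.groupby("claim_label")["retweets_per_hour"].describe())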

    Social media mining under the COVID-19 context: Progress, challenges, and opportunities

    Social media platforms allow users worldwide to create and share information, forging vast sensing networks that allow information on certain topics to be collected, stored, mined, and analyzed rapidly. During the COVID-19 pandemic, extensive social media mining efforts have been undertaken to tackle COVID-19 challenges from various perspectives. This review summarizes the progress of social media data mining studies in the COVID-19 context and categorizes them into six major domains: early warning and detection, human mobility monitoring, communication and information conveying, public attitudes and emotions, infodemic and misinformation, and hatred and violence. We further document essential features of publicly available COVID-19 related social media data archives that will benefit research communities in conducting replicable and reproducible studies. In addition, we discuss seven challenges in social media analytics and their potential impacts on derived COVID-19 findings, followed by our vision of possible paths forward for social media-based COVID-19 investigations. This review serves as a valuable reference that recaps social media mining efforts in COVID-19 related studies and provides future directions along which the information harnessed from social media can be used to address public health emergencies.

    Identifying Dis/Misinformation on Social Media: A Policy Report for the Diplomacy Lab Strategies for Identifying Mis/Disinformation Project

    Dis/misinformation was a major concern in the 2016 U.S. presidential election and has only worsened in recent years. Even though domestic actors often spread dis/misinformation, actors abroad can use it to spread confusion and push their agenda to the detriment of American citizens. Although this report focuses on actors outside the United States, the methods they use are universal and can be adapted to work against domestic agents. A solid understanding of these methods is the first step in combating foreign dis/misinformation campaigns and creating a new information literacy paradigm. This report highlights the primary mechanisms of dis/misinformation: multimedia manipulation, bots, astroturfing, and trolling. These mechanisms were selected after thorough research into the common pathways by which dis/misinformation spreads online. Multimedia manipulation covers image, video, and audio dis/misinformation in the form of deepfakes, memes, and out-of-context images. Bots are automated social media accounts that are not managed by humans and often contribute to dis/misinformation campaigns. Astroturfers use deception to sway social media users into joining false grassroots campaigns, while trolls use emotionally charged posts to provoke a response from users. This policy report also presents case studies of disinformation in China, Russia, and Iran, outlining common patterns of dis/misinformation specific to these countries. These patterns will allow State Department Watch Officers to identify dis/misinformation from the outlined countries more quickly and accurately. Recommendations are provided for each type of disinformation, including what individuals should look for and how to verify that the information they receive is accurate and comes from a reputable source. The addendum at the end of the paper lists all of the recommendations in one place so that readers do not have to search the paper for a particular recommendation. This report is intended to aid State Department Watch Officers as they work to accurately identify foreign developments; researchers may also find it useful in anticipating future developments in foreign dis/misinformation campaigns.