
    Stability analysis of a rumor-spreading model considering debunking behavior in emergencies

    This study analyzes the stability of a rumor-spreading model that accounts for debunking behavior in emergencies. The model is taken from the paper by Yong Tian and Xuejun Ding, "Rumor spreading model with considering debunking behavior in emergencies". The population is divided into five classes: ignorants (I), latents (L), rumor spreaders (R), debunkers (D), and stiflers (S), where (I) denotes individuals who are unaware of the rumor, (L) individuals who have been exposed to it, (R) individuals who spread it, (D) individuals who refute it, and (S) individuals who no longer spread it. The aim of this study is to describe the mechanism of rumor spreading so that control measures can be devised, and to provide insight into how rumors propagate in emergencies. The dynamic analysis of the model consists of determining the rumor equilibrium points and analyzing their stability; numerical simulations are carried out to support the analytical results. The stability analysis is carried out through a literature study. The analysis shows that the model has two equilibrium points: the rumor-free equilibrium $E^0 = (\varepsilon/\rho, 0, 0, 0, 0)$ and the rumor-spreading equilibrium $E^* = \left( \frac{(\alpha+\beta+\gamma+\rho)(\delta+\xi+\rho)(\theta+\rho)}{\mu\alpha(\theta+\rho)+k\beta(\delta+\xi+\rho)+k\xi\alpha},\ \frac{\varepsilon-\rho I}{\alpha+\beta+\gamma+\rho},\ \frac{\alpha L}{\delta+\xi+\rho},\ \frac{\beta(\delta+\xi+\rho)+\xi\alpha}{(\delta+\xi+\rho)(\theta+\rho)}\,L,\ \frac{\delta R+\theta D+\gamma L}{\rho} \right)$. The equilibrium points are used to derive the basic reproduction number $R_0$. The rumor-free equilibrium is locally asymptotically stable when $R_0 < 1$ and unstable when $R_0 > 1$, in which case the rumor spreads through the population. Finally, to support the dynamic analysis, numerical simulations of the model are performed in Matlab R2011a.
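    The abstract does not give the governing equations, but the reported equilibria are consistent with a mass-action system in which ignorants become latent through contact with spreaders (at a rate μ) and with debunkers (at a rate k). The sketch below simulates that assumed system numerically; the contact structure and every parameter value are illustrative guesses, not values from Tian and Ding.

```python
# Minimal sketch of an ILRDS rumor model consistent with the equilibria
# reported above. The contact terms (mu*I*R, k*I*D) and all parameter
# values are assumptions for illustration, not taken from the paper.
from scipy.integrate import solve_ivp

eps, rho = 0.1, 0.1                 # population inflow and removal rates
mu, k = 0.8, 0.3                    # contact rates with spreaders / debunkers
alpha, beta, gamma = 0.4, 0.2, 0.1  # latent -> spreader / debunker / stifler
delta, xi = 0.2, 0.1                # spreader -> stifler / debunker
theta = 0.3                         # debunker -> stifler

def ilrds(t, y):
    I, L, R, D, S = y
    dI = eps - mu * I * R - k * I * D - rho * I
    dL = mu * I * R + k * I * D - (alpha + beta + gamma + rho) * L
    dR = alpha * L - (delta + xi + rho) * R
    dD = beta * L + xi * R - (theta + rho) * D
    dS = delta * R + theta * D + gamma * L - rho * S
    return [dI, dL, dR, dD, dS]

# Start near the rumor-free equilibrium E^0 = (eps/rho, 0, 0, 0, 0)
# with a small seed of spreaders and watch where the system settles.
sol = solve_ivp(ilrds, (0, 400), [eps / rho - 0.01, 0.0, 0.01, 0.0, 0.0])
print(sol.y[:, -1])
```

    Setting L = R = D = S = 0 in these equations recovers the rumor-free equilibrium $E^0 = (\varepsilon/\rho, 0, 0, 0, 0)$, and substituting the reported $E^*$ makes every derivative vanish, which is a quick consistency check on the assumed contact terms.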

    Official long-term and short-term strategies for preventing the spread of rumors

    Recently, public security incidents caused by rumor spreading have frequently occurred, leading to public panic, social chaos and even casualties. How governments establish strategies to restrain rumor spreading is therefore an important measure of their governance capacity. Herein, we consider one long-term strategy (education) and two short-term strategies (isolation and debunking) through which officials can intervene in rumor spreading. To investigate the effects of these strategies, an improved rumor-spreading model and a series of mean-field equations are proposed. Through theoretical analysis, the effective threshold of each of the three rumor-prevention strategies is obtained. Finally, through simulation analysis, the effectiveness of these strategies in preventing rumor spreading is investigated. The results indicate that both long-term and short-term strategies are effective in suppressing rumor spreading: the greater the government's effort to suppress rumors, the smaller the final rumor size. The study also shows that the three strategies work best when applied simultaneously. Governments can thus adopt corresponding measures to suppress rumor spreading effectively.
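    The mean-field equations themselves are not reproduced in the abstract, so the following is only a generic ignorant-spreader-stifler sketch with assumed intervention terms: education (e) immunizes ignorants directly, while isolation (q) and debunking (d) remove spreaders. It illustrates the qualitative claim that stronger intervention shrinks the final rumor size; it is not the authors' model.

```python
# Generic ignorant-spreader-stifler mean-field sketch with assumed
# intervention terms; all dynamics and parameters are illustrative only.
from scipy.integrate import solve_ivp

def rumor(t, y, lam, sigma, e, q, d):
    ign, spr, sti, cum = y
    new = lam * ign * spr               # ignorants recruited as spreaders
    stop = sigma * spr * (spr + sti)    # spreaders stifled on contact
    d_ign = -new - e * ign              # education immunizes ignorants
    d_spr = new - stop - (q + d) * spr  # isolation/debunking remove spreaders
    d_sti = stop + e * ign + (q + d) * spr
    return [d_ign, d_spr, d_sti, new]   # cum = cumulative (final) rumor size

for e, q, d in [(0, 0, 0), (0.02, 0, 0), (0, 0.1, 0), (0, 0, 0.1), (0.02, 0.1, 0.1)]:
    sol = solve_ivp(rumor, (0, 500), [0.99, 0.01, 0.0, 0.01],
                    args=(0.5, 0.2, e, q, d))
    print(f"e={e} q={q} d={d}: final rumor size {sol.y[3, -1]:.3f}")
```

    Under these assumptions the no-intervention run yields the largest final rumor size and the combined run the smallest, mirroring the paper's finding that the strategies are most effective when applied together.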

    The Web of False Information: Rumors, Fake News, Hoaxes, Clickbait, and Various Other Shenanigans

    A new era of Information Warfare has arrived. Various actors, including state-sponsored ones, are weaponizing information on Online Social Networks to run false information campaigns with targeted manipulation of public opinion on specific topics. These false information campaigns can have dire consequences for the public: mutating their opinions and actions, especially with respect to critical world events like major elections. Evidently, the problem of false information on the Web is a crucial one, and needs increased public awareness, as well as immediate attention from law enforcement agencies, public institutions, and in particular, the research community. In this paper, we take a step in this direction by providing a typology of the Web's false information ecosystem, comprising various types of false information, actors, and their motives. We report a comprehensive overview of existing research on the false information ecosystem by identifying several lines of work: 1) how the public perceives false information; 2) understanding the propagation of false information; 3) detecting and containing false information on the Web; and 4) false information on the political stage. In this work, we pay particular attention to political false information as: 1) it can have dire consequences for the community (e.g., when election results are mutated) and 2) previous work shows that this type of false information propagates faster and further than other types of false information. Finally, for each of these lines of work, we report several future research directions that can help us better understand and mitigate the emerging problem of false information dissemination on the Web.

    What Motivates People to Share Online Rumors? Deconstructing the Ambiguity of Rumors from a Perspective of Digital Storytelling

    With the proliferation of social networks and the development of digital technology, the content structure and propagation mode of rumors have become more complicated and ambiguous, which has greatly influenced people's behaviors when facing digitalized rumors. Based on digital storytelling theory, this study takes an early initiative by deconstructing and identifying the basic components of online rumors and revealing the conditions under which people share them in a social environment. A data set of health-related rumors about Covid-19 was used to test the research hypotheses. The results indicate that causality explicitness, element integrality and source explicitness have different influences on rumor-sharing behavior, and that rumor vividness has a negative moderating effect on the sharing process. This research offers insight to viewers and website authorities on ways to monitor and debunk online rumors.

    Health Misinformation in Search and Social Media

    People increasingly rely on the Internet in order to search for and share health-related information. Indeed, searching for and sharing information about medical treatments are among the most frequent uses of online data. While this is a convenient and fast method to collect information, online sources may contain incorrect information that has the potential to cause harm, especially if people believe what they read without further research or professional medical advice. The goal of this thesis is to address the misinformation problem in two of the most commonly used online services: search engines and social media platforms. We examined how people use these platforms to search for and share health information. To achieve this, we designed controlled laboratory user studies and employed large-scale social media data analysis tools. The solutions proposed in this thesis can be used to build systems that better support people's health-related decisions.
    First, with respect to search engines, we aimed to determine the extent to which people can be influenced by search engine results when trying to learn about the efficacy of various medical treatments. We conducted a controlled laboratory study wherein we biased the search results towards either correct or incorrect information. We then asked participants to determine the efficacy of different medical treatments. Results showed that people were significantly influenced, both positively and negatively, by search results bias. More importantly, when the subjects were exposed to incorrect information, they made more incorrect decisions than when they had no interaction with the search results. Following from this work, we extended the study to gain insights into the strategies people use during this decision-making process, via the think-aloud method. We found that, even with verbalization, people were strongly influenced by the search results bias. We also noted that people paid attention to what the majority states, authoritativeness, and content quality when evaluating online content. Understanding the effects of cognitive biases that can arise during online search is a complex undertaking because of the presence of unconscious biases (such as the search results ranking) that the think-aloud method fails to reveal.
    Moving to social media, we first proposed a solution to detect and track misinformation. Using Zika as a case study, we developed a tool for tracking misinformation on Twitter. We collected 13 million tweets regarding the Zika outbreak and tracked rumors outlined by the World Health Organization and the Snopes fact-checking website. We incorporated health professionals, crowdsourcing, and machine learning to capture health-related rumors as well as clarification communications. In this way, we illustrated the insights that the proposed tools provide into potentially harmful information on social media, allowing public health researchers and practitioners to respond with targeted and timely action. Moving on from identifying rumor-bearing tweets, we examined individuals on social media who post questionable health-related information, in particular those promoting cancer treatments that have been shown to be ineffective. Specifically, we studied 4,212 Twitter users who had posted about one of 139 ineffective "treatments" and compared them to a baseline of users generally interested in cancer. Considering features that capture user attributes, writing style, and sentiment, we built a classifier that is able to identify users prone to propagating such misinformation. This classifier achieved an accuracy of over 90%, providing a potential tool for public health officials to identify such individuals for preventive intervention.
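    As a rough illustration of the user-level classifier described above (user attributes, writing style, sentiment), here is a minimal sketch; the feature set, toy data, and model choice are hypothetical and are not the thesis implementation.

```python
# Hedged sketch of a user-level misinformation-propensity classifier.
# Features, data, and labels are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Each row: [followers, account_age_days, avg_words_per_post,
#            exclamation_rate, mean_sentiment]  (assumed features)
X = np.array([
    [120, 400, 14, 0.10, -0.2],
    [5000, 2000, 22, 0.01, 0.1],
    [80, 150, 9, 0.30, -0.5],
    [900, 1200, 18, 0.05, 0.0],
])
y = np.array([1, 0, 1, 0])  # 1 = promoted an ineffective "treatment"

clf = make_pipeline(StandardScaler(), LogisticRegression())
print("accuracy:", cross_val_score(clf, X, y, cv=2).mean())
```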

    Contagious (mis)communication: the role of risk communication and misinformation in infectious disease outbreaks

    Background: The largest outbreak of Ebola virus disease in history occurred between 2014 and 2016 in West Africa. In Sierra Leone, one of the three most affected countries, more than 14,000 people were infected and almost 4,000 died. Risk communication and social mobilization efforts aimed to engage with the public and to educate people to prevent further transmission of the virus. Little is known about how exposure to this type of information influenced people's knowledge, behaviors and risk perception around Ebola. Misinformation in the Ebola outbreak was widespread, but effective methods to counter real-life infectious disease misinformation have not been studied in a low-income setting.
    Aims: To determine the role of risk communication in the Ebola outbreak in Sierra Leone and to test the effectiveness of methods to debunk misinformation about infectious diseases.
    Methods: Four nationwide cross-sectional surveys among the general population of Sierra Leone were carried out at different timepoints of the Ebola outbreak (August 2014, n=1,413; October 2014, n=2,087; December 2014, n=3,540; July 2015, n=3,564). The four surveys were pooled (n=10,604) and the associations between exposure to various information sources and Ebola-specific knowledge, misconceptions, protective behavior and risk behavior were assessed using multilevel modeling. The associations between exposure to information sources and the perceived susceptibility to Ebola (i.e. risk perception), as well as the associations between Ebola-specific knowledge, misconceptions, behaviors and risk perception, were measured in the pooled sample of the first three surveys (n=7,039). Qualitative, semi-structured interviews were conducted with 13 Sierra Leonean journalists who reported during the outbreak; using thematic analysis, their perceived roles were mapped. After the epidemic, a three-arm, prospective, randomized controlled trial (RCT) (n=736) was carried out among adults in Freetown who owned a smartphone with WhatsApp, to test whether four-episode audio drama interventions could reduce belief in typhoid-related misinformation.
    Results: Exposure to information sources was associated with increased Ebola-specific knowledge and protective behavior, but also, to a smaller extent, with misconceptions and risk behavior. Exposure to new media (e.g. mobile phones, internet) and community sources (e.g. religious/traditional leaders), as well as having Ebola-specific knowledge and engaging in frequent hand washing, was associated with increased risk perception. Having Ebola-specific misconceptions and avoiding burials, on the other hand, were associated with lower risk perception (Adjusted Odds Ratio (AOR) 0.7, 95% Confidence Interval (CI) 0.6-0.8 and AOR 0.8, 95% CI 0.6-1.0, respectively). Sierra Leonean journalists adopted various roles over the course of the outbreak, from being skeptical about the existence of an outbreak to being eye-witnesses themselves. Through training about the virus, they later turned into public mobilizers and instructors, stepping away from their journalistic independence. Results from the RCT showed that the audio drama interventions significantly reduced belief in typhoid-related misinformation compared to the control group. In Intervention Group A (in which the audio dramas actively engaged with the misinformation) and in Intervention Group B (in which only the correct information was given), the belief that typhoid always co-occurs with malaria was significantly reduced (Intervention Group A: AOR 0.3, 95% CI 0.2-0.5; Intervention Group B: AOR 0.6, 95% CI 0.4-0.8). Actively engaging with the misinformation, instead of only focusing on the correct information, resulted in the largest reductions in belief in misinformation.
    Conclusions: The associations between information sources and knowledge, misconceptions and behaviors show the need for clear, transparent and contextualized information throughout the entire course of an epidemic. The mixed findings regarding risk perception and various protective behaviors likely point to the complex interplay between behavior and risk perception, whereby adopting a behavior affects how the risk of disease is perceived. As trusted, community-based sources, local journalists can be vital partners in an outbreak response. Making use of trusted sources is also one of the elements likely to increase the chances of successfully countering real-life misinformation; other elements include ensuring that the corrective information is in line with worldviews and providing repeated exposure to the information. A strategy that actively engages with the misinformation is likely to be more successful in debunking than one that merely focuses on the correct information. Taken together, the studies show that risk communication and misinformation management should be key pillars of health emergency response and preparedness and should be rooted in communities.
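    The adjusted odds ratios quoted above come from regression modelling; as a hedged illustration, the snippet below computes AORs with 95% CIs from a single-level logistic regression (the thesis used multilevel models, and all data and variable names here are invented).

```python
# Hedged sketch: adjusted odds ratios (AOR) with 95% CIs via logistic
# regression. Data and variable names are invented for illustration;
# the thesis itself used multilevel models on pooled survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "high_risk_perception": rng.integers(0, 2, n),
    "misconceptions": rng.integers(0, 2, n),
    "avoids_burials": rng.integers(0, 2, n),
    "age": rng.integers(18, 70, n),
})

model = smf.logit("high_risk_perception ~ misconceptions + avoids_burials + age",
                  data=df).fit(disp=0)
aor = np.exp(model.params)      # exponentiated coefficients = AORs
ci = np.exp(model.conf_int())   # 95% confidence intervals on the OR scale
print(pd.concat([aor.rename("AOR"), ci], axis=1))
```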

    Internet users beware, you follow online health rumors (more than counter-rumors) irrespective of risk propensity and prior endorsement

    Purpose: The Internet is a breeding ground for rumors. One way to tackle the problem involves the use of counter-rumors, messages that refute rumors. This paper analyzes users' intention to follow rumors and counter-rumors as a function of two factors: individuals' risk propensity and messages' prior endorsement.
    Design/methodology/approach: The paper conducted an online experiment. Complete responses from 134 participants were analyzed statistically.
    Findings: Risk-seeking users were keener to follow counter-rumors than risk-averse ones; no difference was detected in their intention to follow rumors. Users' intention to follow rumors always exceeded their intention to follow counter-rumors, regardless of whether prior endorsement was low or high.
    Research limitations/implications: This paper contributes to the scholarly understanding of people's behavioral responses when unknowingly exposed to rumors and counter-rumors on the Internet. Moreover, it dovetails with the literature by examining how risk-averse and risk-seeking individuals differ in their intention to follow rumors and counter-rumors. It also shows how prior endorsement of such messages drives their likelihood of being followed.
    Originality/value: The paper explores a hitherto elusive question: when users are unknowingly exposed to both a rumor and its counter-rumor, which message is likely to be followed more? It also takes into consideration the roles played by individuals' risk propensity and messages' prior endorsement.

    Disinformation and Fact-Checking in Contemporary Society

    Funded by the European Media and Information Fund and research project PID2022-142755OB-I00

    Combating Misinformation on Social Media by Exploiting Post and User-level Information

    Misinformation on social media has a far-reaching negative impact on the public and society. Given the large number of real-time posts on social media, traditional manual methods of misinformation detection are not viable; computational (i.e., data-driven) approaches have therefore been proposed to combat online misinformation. Previous work on computational misinformation analysis has mainly focused on employing natural language processing (NLP) techniques to develop misinformation detection systems at the post level (e.g., using text and propagation networks). However, it is also important to exploit information at the user level, as users play a significant role (e.g., they post, diffuse, or refute) in spreading misinformation. The main aim of this thesis is to: (i) develop novel methods for analysing the behaviour of users who are likely to share or refute misinformation on social media; and (ii) predict and characterise unreliable stories that attain high popularity on social media. To this end, we first highlight the limitations of the evaluation protocol used in popular post-level rumour detection benchmarks and propose to evaluate such systems using chronological splits (i.e., taking temporal concept drift into account). On the user level, we introduce two novel tasks: (i) early detection of Twitter users who are likely to share misinformation, before they actually do so; and (ii) identifying and characterising active citizens who refute misinformation on social media. Finally, we develop a new dataset to enable the study of predicting the future popularity (e.g., number of likes, replies, retweets) of false rumours on Weibo.
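    As a small illustration of the chronological-split point above, the sketch below contrasts a random split with a temporal one; the dataframe and its fields are hypothetical, not the thesis dataset.

```python
# Hedged sketch: chronological vs. random train/test splits for
# rumour detection, illustrating temporal concept drift. The data
# and field names are invented for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split

posts = pd.DataFrame({
    "text": ["claim A", "claim B", "claim C", "claim D", "claim E", "claim F"],
    "label": [1, 0, 1, 0, 0, 1],
    "timestamp": pd.to_datetime([
        "2020-01-05", "2020-02-10", "2020-03-15",
        "2020-04-20", "2020-05-25", "2020-06-30",
    ]),
})

# Random split: train and test mix time periods, leaking future topics
# and vocabulary into training.
rand_train, rand_test = train_test_split(posts, test_size=0.33, random_state=0)

# Chronological split: training data strictly precedes test data,
# matching how a deployed detector would actually be used.
posts = posts.sort_values("timestamp")
cut = int(len(posts) * 0.67)
chrono_train, chrono_test = posts.iloc[:cut], posts.iloc[cut:]
print(chrono_train["timestamp"].max() < chrono_test["timestamp"].min())  # True
```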