
    What Do Fact Checkers Fact-check When?

    Recent research suggests that not all fact checking efforts are equal: when and what is fact checked plays a pivotal role in effectively correcting misconceptions. In this paper, we propose a framework to study fact checking efforts using Google Trends, a signal that captures search interest over topics on the world's largest search engine. Our framework consists of extracting claims from fact checking efforts, linking such claims with knowledge graph entities, and estimating the online attention they receive. We use this framework to study a dataset of 879 COVID-19-related fact checks done in 2020 by 81 international organizations. Our findings suggest that there is often a disconnect between online attention and fact checking efforts. For example, in around 40% of countries where 10 or more claims were fact checked, half or more of the top 10 most popular claims were not fact checked. Our analysis also shows that claims are first fact checked after receiving, on average, 35% of the total online attention they would eventually receive in 2020. Yet, there is a big variation among claims: some were fact checked before receiving a surge of misinformation-induced online attention, while others were fact checked much later. Overall, our work suggests that the incorporation of online attention signals may help organizations better assess and prioritize their fact checking efforts. Also, in the context of international collaboration, where claims are fact checked multiple times across different countries, online attention could help organizations keep track of which claims are "migrating" between different countries.
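    The attention statistic in this abstract (claims first fact checked after receiving, on average, 35% of their eventual attention) can be sketched as a simple computation over a claim's interest time series. The function name and the weekly interest values below are hypothetical, standing in for Google Trends scores of the knowledge-graph entity linked to a claim.

    ```python
    from datetime import date

    def attention_share_before_check(series, check_date):
        """Fraction of a claim's total yearly attention accrued before it
        was first fact checked. `series` maps dates to search-interest
        values (e.g. weekly Google Trends scores for the linked entity)."""
        total = sum(series.values())
        if total == 0:
            return 0.0
        before = sum(v for d, v in series.items() if d < check_date)
        return before / total

    # Hypothetical weekly interest for one claim (illustrative values only).
    interest = {
        date(2020, 3, 1): 10,
        date(2020, 3, 8): 40,
        date(2020, 3, 15): 30,
        date(2020, 3, 22): 20,
    }
    share = attention_share_before_check(interest, date(2020, 3, 15))
    print(round(share, 2))  # 0.5 — half the attention came before the check
    ```

    Averaging this share over all claims in a dataset would yield the kind of aggregate figure the paper reports.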

    Making sense of nonsense: Integrated gradient-based input reduction to improve recall for check-worthy claim detection

    Analysing long text documents of political discourse to identify check-worthy claims (claim detection) is known to be an important task in automated fact-checking systems, as it saves the precious time of fact-checkers, allowing for more fact-checks. However, existing methods use black-box deep neural NLP models to detect check-worthy claims, which limits the understanding of the models and the mistakes they make. The aim of this study is therefore to leverage an explainable neural NLP method to improve the claim detection task. Specifically, we exploit well-known integrated gradient-based input reduction on textCNN and BiLSTM to create two different reduced claim data sets from ClaimBuster. We observe that a higher recall in check-worthy claim detection is achieved on the data reduced by BiLSTM compared to models trained on the original claims. This is an important remark, since the cost of overlooking check-worthy claims is high in claim detection for fact-checking. This is also the case when a pre-trained BERT sequence classification model is fine-tuned on the reduced data set. We argue that removing superfluous tokens using explainable NLP could unlock the true potential of neural language models for claim detection, even though the reduced claims might make no sense to humans. Our findings provide insights on task formulation, design of annotation schema, and data set preparation for check-worthy claim detection.
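    The input-reduction step described above can be sketched as follows, assuming per-token attribution scores have already been computed (in the paper, via integrated gradients over a textCNN or BiLSTM). Tokens with low attribution are discarded while the original word order is preserved; the function name, the `keep_ratio` parameter, and the scores below are illustrative assumptions, not the paper's exact procedure.

    ```python
    def reduce_claim(tokens, attributions, keep_ratio=0.5):
        """Keep only the tokens with the highest attribution scores.
        `attributions[i]` is the importance score of `tokens[i]`,
        e.g. from integrated gradients; higher means more important."""
        k = max(1, int(len(tokens) * keep_ratio))
        # Indices of the k highest-scoring tokens.
        top = sorted(range(len(tokens)),
                     key=lambda i: attributions[i], reverse=True)[:k]
        # Re-sort indices so the reduced claim preserves word order.
        return [tokens[i] for i in sorted(top)]

    tokens = ["the", "budget", "will", "cut", "taxes", "by", "10%"]
    scores = [0.01, 0.6, 0.05, 0.7, 0.8, 0.02, 0.9]  # made-up attributions
    print(reduce_claim(tokens, scores))  # ['cut', 'taxes', '10%']
    ```

    The resulting token sequence may read as nonsense to a human, which is exactly the phenomenon the title refers to, yet a classifier trained on such reduced claims can achieve higher recall.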

    Misinformation, Believability, and Vaccine Acceptance Over 40 Countries: Takeaways From the Initial Phase of The COVID-19 Infodemic

    The COVID-19 pandemic has been damaging to the lives of people all around the world. Accompanied by the pandemic is an infodemic, an abundant and uncontrolled spreading of potentially harmful misinformation. The infodemic may severely change the pandemic's course by interfering with public health interventions such as wearing masks, social distancing, and vaccination. In particular, the impact of the infodemic on vaccination is critical because it holds the key to reverting to pre-pandemic normalcy. This paper presents findings from a global survey on the extent of worldwide exposure to the COVID-19 infodemic, assesses different populations' susceptibility to false claims, and analyzes their association with vaccine acceptance. Based on responses gathered from over 18,400 individuals from 40 countries, we find a strong association between perceived believability of misinformation and vaccination hesitancy. Additionally, our study shows that only half of the online users exposed to rumors might have seen the fact-checked information. Moreover, depending on the country, between 6% and 37% of individuals considered these rumors believable. Our survey also shows that poorer regions are more susceptible to encountering and believing COVID-19 misinformation. We discuss implications of our findings on public campaigns that proactively spread accurate information to countries that are more susceptible to the infodemic. We also highlight fact-checking platforms' role in better identifying and prioritizing claims that are perceived to be believable and have wide exposure. Our findings give insights into better handling of risk communication during the initial phase of a future pandemic.
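    The believability–hesitancy association this survey reports can be illustrated with an odds ratio over a 2×2 table of responses. This is a generic association measure, not necessarily the paper's analysis method, and the counts below are purely illustrative, not the survey's actual numbers.

    ```python
    def odds_ratio(a, b, c, d):
        """Odds ratio for a 2x2 table of survey responses:
            a = finds misinformation believable & vaccine-hesitant
            b = finds it believable & not hesitant
            c = does not find it believable & hesitant
            d = does not find it believable & not hesitant
        Values > 1 indicate a positive association."""
        return (a * d) / (b * c)

    # Illustrative counts only (not the survey's actual numbers).
    print(odds_ratio(120, 80, 60, 240))  # 6.0
    ```

    An odds ratio well above 1, as in this toy table, is the kind of signal behind the paper's "strong association" finding.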

    Covid-19 Media Discourse and Public Perception: The Local Construction of a Social Threat in Cameroon

    Media discourse has been at the core of Mass Communication, a form of human communication practice concerned with how messages are transmitted through a medium to reach a large number of people (Devito, 2011). Though the World Health Organization no longer considers the COVID-19 pandemic a global health emergency, this does not mean it is no longer a global health threat, which is why the media should prioritize it in their discourses by finding creative ways to communicate about it.

    Combining Text Classification and Fact Checking to Detect Fake News

    Due to the widespread presence of fake news in social and news media, fake news detection is an emerging research topic gaining attention in today's world. In news media and social media, information spreads at high speed but without accuracy checks, and therefore detection mechanisms should be able to assess news quickly enough to combat the spread of fake news, which has the potential for a negative impact on individuals and society. Detecting fake news is therefore an important and technically challenging problem. The challenge is to use text classification to combat fake news. This includes determining appropriate text classification methods and evaluating how well these methods distinguish between fake and non-fake news. Machine learning is helpful for building artificial intelligence systems based on tacit knowledge because it can help us solve complex problems based on real-world data. For this reason, I proposed that integrating text classification and fact checking of check-worthy statements can be helpful in detecting fake news. I used text processing and three classifiers, Passive Aggressive, Naïve Bayes, and Support Vector Machine, to classify the news data. Text classification mainly focuses on extracting various features from texts and then incorporating these features into the classification. The big challenge in this area is the lack of an efficient method to distinguish between fake and non-fake news, due to the lack of corpora. I applied the three machine learning classifiers to two publicly available datasets. Experimental analysis on these datasets shows very encouraging and improved performance. Simple classification alone is not accurate enough for detecting fake news because the classification methods are not specialized for it, so I added a system that checks the news in depth, sentence by sentence. Fact checking is a multi-step process that begins with the extraction of check-worthy statements.
Identification of check-worthy statements is a subtask in the fact checking process, the automation of which would reduce the time and effort required to fact check a statement. In this thesis I have proposed an approach that classifies statements into check-worthy and not check-worthy while also taking into account the context around a statement. This work shows that including context makes a significant contribution to classification, while at the same time using more general features to capture information from sentences. The aim of this challenge is to propose an approach that automatically identifies check-worthy statements for fact checking, including the context around a statement. The results are analyzed by examining which features contribute most to classification, but also how well the approach performs. For this work, a dataset was created by consulting different fact checking organizations; it contains debates and speeches in the domain of politics, and the capability of the approach is evaluated in this domain. The approach starts by extracting sentence and context features from the sentences, and then classifies the sentences based on these features. The feature set and context features were selected after several experiments, based on how well they differentiate check-worthy statements. Fact checking has received increasing attention since the 2016 United States Presidential election, so much so that many efforts have been made to develop a viable automated fact checking system. I introduced a web-based approach for fact checking that compares the full news text and headline with known facts such as names, locations, and places. The challenge is to develop an automated application that takes claims directly from mainstream news media websites and fact checks the news after applying classification and fact checking components.
For fact checking, a dataset was constructed that contains 2146 news articles labelled as fake, non-fake, or unverified. I include forty mainstream news media sources to compare the results, plus Wikipedia for double verification. This work shows that a combination of text classification and fact checking makes a considerable contribution to the detection of fake news, while also using more general features to capture information from sentences.
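As one illustration of the text-classification component, here is a minimal multinomial Naïve Bayes classifier, one of the three classifiers the thesis names. This is a from-scratch sketch: the thesis's actual experiments would use richer feature extraction (e.g. TF-IDF in a library such as scikit-learn), and the training sentences below are invented examples, not drawn from the thesis datasets.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing,
    trained on bag-of-words counts of whitespace-split tokens."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        n_docs = sum(self.label_counts.values())
        for label, n in self.label_counts.items():
            lp = math.log(n / n_docs)  # log prior
            total = sum(self.word_counts[label].values())
            for w in text.lower().split():
                # Laplace smoothing over the shared vocabulary.
                lp += math.log((self.word_counts[label][w] + 1)
                               / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

# Invented toy examples, purely for illustration.
clf = NaiveBayes().fit(
    ["miracle cure found overnight", "officials confirm new policy",
     "shocking secret they hide", "report released by ministry"],
    ["fake", "real", "fake", "real"])
print(clf.predict("shocking miracle cure"))  # fake
```

In the thesis's pipeline, the label from a classifier like this would then be cross-checked by the sentence-by-sentence fact checking component rather than taken as the final verdict.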

    Using the double diamond model to propose the design of a digital service supporting access to health information on post-discharge care

    This undergraduate thesis proposes the design of a digital information service, delivered through a web-based tool, as a way to provide content that supports the patient's caregiver after discharge from the hospital. This starts from the fact that users are open to seeking information outside the hospital, with or without the advice of a medical professional; faced with this diversity of choices, the problem arises of which online information sources the caregiver and the patient have access to. The result is a chain of misinformation driven by fake health news, mediated by sources whose content lacks authenticity and is produced without the participation of the professionals responsible for the relevant specialty. The thesis therefore proposes a user-oriented digital service, delivered through a web-based tool, that provides first-hand information, supporting preventive and corrective care to be taken into account at the time of the patient's discharge.

    Identification Methods for Fake News

    Fake News spreads very quickly through social networks and can cause various negative consequences. This thesis analyzes some of the techniques available in the literature for the automatic identification of Fake News and reports a metric, called the Normalized Winning Number, to compare the methods studied.