46,373 research outputs found

    Fake news ed educazione per una cittadinanza illuminata

    The topic of fake news is interesting because it is new and old at the same time. If we try to define “fake news”, we cannot give it a single meaning: some frame fake news only within social media, others extend the term to newspapers, and still others to any kind of mass media (TV, books, radio, etc.). What is certain is that our students do not know how to detect fake news or how to fight it. The perceived realism of fake news also depends on the extent of hard-news exposure: individuals exposed to both hard and fake news find fake-news messages less realistic. We therefore also need to teach our students how to become hard-news readers in order to recognize and detect fake news. In this article we survey the status of the fake-news problem, review some national and international laws now in progress, and propose exercises we can do with our students in order to become “more democratic”.

    Believing Journalists, AI, or Fake News: The Role of Trust in Media

    An increasing amount of news is generated automatically by artificial intelligence (AI). While the technology has advantages for content production, e.g., efficiency in aggregating information, it is also viewed critically because of the limited transparency of how results are obtained and possible biases. As news media depend on trust and credibility, introducing AI to facilitate mass communication with consumers seems a risky endeavor. We expand research on consumer perception of AI-based news by comparing machine-written and human-written texts to fake news and by examining the role of the trust that consumers exhibit when evaluating news. Through an experiment with 263 participants, we find that consumers judge AI-based texts as similar to true journalistic content in terms of credibility, but as similar to fake news in terms of readability. Furthermore, our results indicate that consumers with low trust in media are less averse to AI-based texts than consumers with high trust in media.

    Machine Learning Explanations to Prevent Overtrust in Fake News Detection

    Combating fake news and misinformation propagation is a challenging task in the post-truth era. News-feed and search algorithms could unintentionally lead to large-scale propagation of false and fabricated information, with users being exposed to algorithmically selected false content. Our research investigates the effects of an explainable AI assistant embedded in news-review platforms on combating the propagation of fake news. We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake-news detection algorithms to study the effects of algorithmic transparency on end users. We present evaluation results and analysis from multiple controlled crowdsourced studies. For a deeper understanding of explainable AI systems, we discuss the interactions between user engagement, mental models, trust, and performance measures in the process of explaining. The study results indicate that explanations helped participants build appropriate mental models of the intelligent assistants under different conditions and adjust their trust to account for model limitations.
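    The abstract does not specify the four interpretable detectors, but the core idea of pairing a prediction with a feature-level explanation can be sketched with a tiny bag-of-words classifier whose per-word log-odds double as the explanation. All training examples and names below are illustrative, not the paper's actual data or models:

```python
import math
from collections import Counter

def train(docs):
    """Learn per-word log-odds from (text, label) pairs; label True = fake."""
    counts = {True: Counter(), False: Counter()}
    for text, label in docs:
        counts[label].update(text.lower().split())
    vocab = set(counts[True]) | set(counts[False])
    # Laplace-smoothed log-odds of each word appearing in fake vs. real news
    n_fake = sum(counts[True].values()) + len(vocab)
    n_real = sum(counts[False].values()) + len(vocab)
    return {w: math.log((counts[True][w] + 1) / n_fake)
             - math.log((counts[False][w] + 1) / n_real) for w in vocab}

def predict_with_explanation(weights, text, top_k=3):
    """Return (is_fake, explanation): the words that most drove the score."""
    words = [w for w in text.lower().split() if w in weights]
    score = sum(weights[w] for w in words)
    explanation = sorted(words, key=lambda w: -abs(weights[w]))[:top_k]
    return score > 0, explanation

# Illustrative toy corpus
docs = [("miracle cure doctors hate this trick", True),
        ("shocking secret they hide from you", True),
        ("city council approves new budget", False),
        ("study published in peer reviewed journal", False)]
weights = train(docs)
print(predict_with_explanation(weights, "shocking miracle trick they hide"))
```

    Surfacing the top-weighted words is the simplest form of the algorithmic transparency the study examines: the user sees *why* the assistant flagged a story, which supports calibrating trust rather than overtrusting the verdict.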

    Implementing BERT and fine-tuned RoBERTa to detect AI-generated news by ChatGPT

    The abundance of information on social media has increased the need for accurate real-time rumour detection. Manual techniques for identifying and verifying fake news generated by AI tools are impracticable and time-consuming given the enormous volume of information generated every day. This has sparked growing interest in creating automated systems to find fake news on the Internet. The studies in this research demonstrate that fine-tuned BERT and RoBERTa models had the best success in detecting AI-generated news. Fine-tuned RoBERTa in particular showed excellent precision, with a score of 98%. In conclusion, this study has shown that neural networks can be used to identify AI-generated news created by ChatGPT. The excellent performance of the RoBERTa and BERT models indicates that they can play a critical role in the fight against misinformation.
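    The 98% figure reported above is a precision score. Fine-tuning BERT or RoBERTa itself requires the full transformers stack, but the metric being reported is easy to pin down; a minimal sketch, with illustrative labels ("ai" = ChatGPT-generated, "human" = journalist-written) that are assumptions, not the paper's data:

```python
def precision(y_true, y_pred, positive="ai"):
    """Precision: of the items predicted as AI-generated, how many truly are."""
    predicted_pos = [t for t, p in zip(y_true, y_pred) if p == positive]
    if not predicted_pos:
        return 0.0
    return sum(t == positive for t in predicted_pos) / len(predicted_pos)

# Illustrative evaluation set
y_true = ["ai", "ai", "human", "ai", "human"]
y_pred = ["ai", "ai", "ai", "ai", "human"]
print(precision(y_true, y_pred))  # 3 of 4 "ai" predictions correct -> 0.75
```

    High precision matters here because a false positive means wrongly accusing a human journalist's article of being machine-generated.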

    Impact of Deepfake Technology on Digital World Authenticity: A Review

    Deepfake technology is an emerging technology that uses artificial intelligence (AI) to create fake videos with realistic facial expressions and lip-sync effects. It is used in different scenarios with different objectives: it can produce a highly realistic fake video of any celebrity or political leader, which can then be used to spread false information or fake news that the person never actually produced. Because of the high impact of social media, such fake videos can reach millions of views within an hour and have a negative effect on society, and criminals can use them to threaten it. The results suggest that deepfakes are a threat to celebrities, political systems, religious beliefs, and business, and that they can be controlled by rules and regulations, strict corporate policy, and awareness, education, and training for ordinary internet users. We need to develop technology that can examine such videos and differentiate between real and fake footage, and government agencies also need to create policies to regulate this AI technology so that its use can be monitored and controlled.

    Detecting fake news and disinformation using artificial intelligence and machine learning to avoid supply chain disruptions

    Fake news and disinformation (FNaD) are increasingly being circulated through various online and social networking platforms, causing widespread disruptions and influencing decision-making perceptions. Despite the growing importance of detecting fake news in politics, relatively limited research effort has been made to develop artificial intelligence (AI) and machine learning (ML) oriented FNaD detection models suited to minimizing supply chain disruptions (SCDs). Using a combination of AI and ML, and case studies based on data collected from Indonesia, Malaysia, and Pakistan, we developed an FNaD detection model aimed at preventing SCDs. This model, based on multiple data sources, has shown evidence of its effectiveness in supporting managerial decision-making. Our study further contributes to the supply chain and AI-ML literature, provides practical insights, and points to future research directions.

    ALGORITHMIC SOLUTIONS TO COMBAT ONLINE FAKE NEWS

    The unprecedented growth of new information being produced, distributed, and consumed every moment on the Web has fostered the rise of “fake news.” Because of its detrimental effect on democracy, global economies, and public health, effectively combating online fake news has become an essential and urgent task. This dissertation starts by making typological, theoretical, and empirical efforts to promote the public's comprehension of fake news and to lay the foundation for algorithmically combating it. As there is no universal definition of fake news, the dissertation discusses the definition along three dimensions: veracity, intention, and news, comparing it with related terms such as misinformation and disinformation. It probes and collects extensive theories from the social sciences, presenting or interpreting the psychology, behavior, and motivations of human beings as fake-news producers, distributors, and consumers. It creates real-world multimodal, multilingual, and cross-site datasets, with which it empirically characterizes the language of fake news and its propagation on social networks as distinct from that of the truth. Beyond understanding fake news, the dissertation presents novel machine (deep) learning algorithms for accurate, explainable, early, and robust prediction of fake news. It introduces social theories and empirical patterns of fake news into feature extraction. It designs neural networks that explicitly and adaptively capture the linguistic style of news articles (i.e., the usage of words and the linguistically meaningful way they are structured into documents), and it leverages multimodal news content and cross-modal consistency to predict fake news.
The proposed algorithms comprehensively investigate news language across the lexical, syntactic, semantic, and discourse levels, the visual information within news content, and news diffusion on social networks across the node, ego, triad, community, and network levels. Their effectiveness in predicting fake news is demonstrated with publicly available real-world datasets. Furthermore, the dissertation strives for proactive fake-news mitigation, considering that predicting fake news can be effective but reactive in countering it online. It formulates a new task of assessing the intent of fake-news spreaders, to keep social media users from unintentionally circulating future fake news without realizing its fakeness, and proposes a social-theory-informed, AI-powered solution. Specifically, social theories interpret why a human unintentionally spreads fake news (i.e., preexisting beliefs and social influence), and advanced AI (artificial intelligence) techniques are employed to compute one's beliefs and received social influence. The intent of fake-news spreaders is annotated as ground truth, with which the proposed solution's effectiveness is demonstrated.
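    The dissertation's style features span the lexical through discourse levels; its actual feature set is not reproduced here, but the lexical level can be illustrated with a few style cues commonly used in fake-news detection. The specific features below are a hypothetical subset chosen for illustration:

```python
import string

def lexical_style_features(text):
    """A few lexical-level style cues often used in fake-news detection
    (illustrative subset; the dissertation's full feature set is richer)."""
    words = text.split()
    n = max(len(words), 1)
    return {
        # Average word length, with surrounding punctuation stripped
        "avg_word_len": sum(len(w.strip(string.punctuation)) for w in words) / n,
        # Share of characters that are exclamation marks
        "exclamation_rate": text.count("!") / max(len(text), 1),
        # Share of words written in ALL CAPS (a sensationalism cue)
        "all_caps_ratio": sum(w.isupper() and len(w) > 1 for w in words) / n,
        # Vocabulary richness: unique words over total words
        "type_token_ratio": len({w.lower() for w in words}) / n,
    }

feats = lexical_style_features("SHOCKING!! You WON'T believe this miracle cure!")
print(feats)
```

    Feature vectors like this one would then feed a classifier alongside the syntactic, semantic, discourse, visual, and network-level signals the dissertation describes.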

    On the feasibility of predicting volumes of fake news- the Spanish case

    The growing amount of news shared on the Internet makes it hard to verify in real time. Malicious actors take advantage of this situation by spreading fake news to influence society through misinformation. An estimate of future fake news would help to focus detection and verification efforts; unfortunately, no previous work has addressed this issue yet. This work therefore measures the feasibility of predicting the volume of future fake news in a particular context: Spanish content related to Spain. The approach applies different artificial intelligence (AI) mechanisms to a dataset of 298k real news items and 8.9k fake news items from the period 2019–2022. Results show that very accurate predictions can be reached. In general terms, long short-term memory (LSTM) networks with attention mechanisms offer the best performance, with headlines being useful when a small number of days is taken as input. In the best cases, when predictions are made for periods, the error is 10.3% relative to the mean volume of fake news; this error rises to 28.7% when predicting a single day in the future. This work was supported in part by the Universidad Carlos III de Madrid (UC3M) and the Government of Madrid [Community of Madrid (CAM)] under Grant DEPROFAKE-CM-UC3M; in part by the CAM through the Project CYNAMON, co-funded by the European Regional Development Fund (ERDF), under Grant P2018/TCS-4566-CM; and in part by the Spanish Ministry of Science and Innovation (MICINN) of Spain under Grant PID2019-111429RB-C2.
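    The abstract reports the 10.3% and 28.7% errors relative to the mean of fake news. Assuming that means a mean absolute error normalized by the mean daily fake-news volume (a plausible reading, not confirmed by the abstract), the metric can be sketched as follows; the daily counts and "LSTM predictions" are made up for illustration:

```python
def error_vs_mean(actual, predicted):
    """Mean absolute error expressed as a percentage of the mean actual
    fake-news volume (assumed reading of the paper's normalized error)."""
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    mean_actual = sum(actual) / len(actual)
    return 100 * mae / mean_actual

# Illustrative daily fake-news counts vs. a forecaster's predictions
actual = [10, 12, 8, 11, 9]
predicted = [11, 11, 9, 10, 10]
print(round(error_vs_mean(actual, predicted), 1))  # -> 10.0
```

    Normalizing by the mean lets errors be compared across periods with very different fake-news volumes, which is why single-day forecasts (28.7%) look so much worse than period-level ones (10.3%).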

    Fake news: neutral or active librarian?

    The paper deals with the role of libraries and librarians in the fight against fake news, mainly focusing on four issues: 1) the relationship between content selection and content evaluation, which is relevant even in dealing with free online content and its discovery tools; 2) the relationship between the librarian's neutrality and her/his social responsibility in the wake of the growth and danger of fake news; 3) the two different paradigms represented by the ‘neutral’ and the ‘active’ or ‘expert’ librarian; 4) the relevance, for this discussion, of the distinction between different kinds of fake news.
    • 

    corecore