
    Visual and Textual Analysis for Image Trustworthiness Assessment within Online News

    The majority of news published online presents one or more images or videos, which make the news easier to consume and therefore more attractive to large audiences. As a consequence, news with catchy multimedia content can spread and go viral extremely quickly. Unfortunately, the availability and sophistication of photo-editing software are erasing the line between pristine and manipulated content. Given that images have the power to bias and influence the opinions and behavior of readers, the need for automatic techniques to assess the authenticity of images is evident. This paper aims at detecting images published within online news that have either been maliciously modified or that do not accurately represent the event the news is reporting. The proposed approach combines image forensic algorithms for detecting image tampering with textual analysis that flags images misaligned with the accompanying text. Furthermore, textual analysis can serve as a complementary source of information supporting image forensics techniques when they falsely detect or miss image tampering due to heavy image post-processing. The devised method is tested on three datasets. The performance on the first two shows promising results, with F1-scores generally above 75%. The third dataset has an exploratory intent: although it shows that the methodology is not yet ready for fully unsupervised scenarios, it makes it possible to investigate problems and controversial cases that might arise in real-world settings.

    FACTS-ON : Fighting Against Counterfeit Truths in Online social Networks : fake news, misinformation and disinformation

    The rapid evolution of online social networks (OSNs) presents a significant challenge in identifying and mitigating false information, which includes fake news, disinformation, and misinformation. This complexity is amplified in digital environments where information is quickly disseminated, requiring sophisticated strategies to differentiate between genuine and false content. One of the primary challenges in automatically detecting false information is its realistic presentation, often closely resembling verifiable facts. This poses considerable challenges for artificial intelligence (AI) systems, necessitating additional data from external sources, such as third-party verifications, to effectively discern the truth. Consequently, there is a continuous technological evolution to counter the growing sophistication of false information, challenging and advancing the capabilities of AI. In response to these challenges, my dissertation introduces the FACTS-ON framework (Fighting Against Counterfeit Truths in Online Social Networks), a comprehensive and systematic approach to combating false information in OSNs. FACTS-ON integrates a series of advanced systems, each building upon the capabilities of its predecessor to enhance the overall strategy for detecting and mitigating false information. I begin by introducing the FACTS-ON framework, which sets the foundation for my solution, and then detail each system within the framework: EXMULF (Explainable Multimodal Content-based Fake News Detection) focuses on analyzing both text and images in online content using advanced multimodal techniques, coupled with explainable AI to provide transparent and understandable assessments of false information. Building upon EXMULF’s foundation, MythXpose (Multimodal Content and Social Context-based System for Explainable False Information Detection with Personality Prediction) adds a layer of social context analysis by predicting the personality traits of OSN users, enhancing detection and early-intervention strategies against false information. ExFake (Explainable False Information Detection Based on Content, Context, and External Evidence) further expands the framework, combining content analysis with insights from social context and external evidence. It leverages data from reputable fact-checking organizations and official social accounts, ensuring a more comprehensive and reliable approach to the detection of false information. ExFake's sophisticated methodology not only evaluates the content of online posts but also considers the broader context and corroborates information with external, credible sources, thereby offering a well-rounded and robust solution for combating false information in online social networks. Completing the framework, AFCC (Automated Fact-checkers Consensus and Credibility) addresses the heterogeneity of ratings from various fact-checking organizations. It standardizes these ratings and assesses the credibility of the sources, providing a unified and trustworthy assessment of information. Each system within the FACTS-ON framework is rigorously evaluated to demonstrate its effectiveness in combating false information on OSNs. This dissertation details the development, implementation, and comprehensive evaluation of these systems, highlighting their collective contribution to the field of false information detection. The research not only showcases current capabilities in addressing false information but also sets the stage for future advancements in this critical area of study.

    Critical Digital Literacy: EFL Students’ Ability to Evaluate Online Sources

    The advancement of Internet-based technologies and the new media ecology have contributed to the increased reliance on online sources in both academic and non-academic contexts. This study investigated how students evaluated the credibility of online information and the bias that might have influenced its content. 152 EFL students responded to an online critical literacy assessment consisting of six tasks: evaluating the credibility of visual information, evaluating a WhatsApp message, comparing and evaluating websites, distinguishing between news and sponsored content, evaluating the credibility of a claim in a YouTube video, and evaluating an Instagram post. The results showed that the students were easily deceived by the online information they read across various online media. They particularly struggled to detect unsubstantiated claims in the YouTube video. Despite being members of Generation Z who frequently used social media and various online sources in their daily lives, the students could not critically evaluate the claims posted on these platforms. Implications of this study include the need to incorporate critical digital literacy into language skills courses and to provide deliberate exposure to strategies for evaluating online sources.

    Deception Detection and Rumor Debunking for Social Media

    The main premise of this chapter is that the time is ripe for more extensive research and development of social media tools that filter out intentionally deceptive information such as deceptive memes, rumors and hoaxes, fake news, other fake posts and tweets, and fraudulent profiles. Social media users’ awareness of intentional manipulation of online content appears to be relatively low, while reliance on unverified information (often obtained from strangers) is at an all-time high. I argue there is a need for content verification, systematic fact-checking, and filtering of social media streams. This literature survey provides a background for understanding current automated deception detection research, rumor debunking, and broader content verification methodologies, suggests a path towards hybrid technologies, and explains why the development and adoption of such tools might still be a significant challenge.

    Media forensics on social media platforms: a survey

    The dependability of visual information on the web and the authenticity of digital media spreading virally on social media platforms have raised unprecedented concerns. As a result, in recent years the multimedia forensics research community has pursued the ambition of scaling forensic analysis to real-world, web-based open systems. This survey describes the work done so far on the analysis of shared data, covering three main aspects: forensic techniques performing source identification and integrity verification on media uploaded to social networks; platform provenance analysis, which identifies the platforms through which media have been shared; and multimedia verification algorithms assessing the credibility of media objects in relation to their associated textual information. The achieved results are highlighted together with current open issues and research challenges to be addressed in order to advance the field in the near future.

    Plain Tobacco Packaging: A Systematic Review

    (From the Executive Summary): This systematic review outlines findings from 37 studies that provide evidence of the impacts of plain tobacco packaging. The review was conducted following the publication of the March 2011 White Paper Healthy Lives: Healthy People, which set out a renewed Tobacco Control Plan for England. One of the key actions identified in the plan was to consult on possible options to reduce the promotional impact of tobacco packaging, including plain packaging. This systematic review was commissioned to provide a comprehensive overview of evidence on the impact of plain packaging in order to inform a public consultation on the issue.

    Individual Differences in the Evaluation of Online Images

    The internet has facilitated the proliferation of misleading and conspiratorial content that has led to increasing distrust in government and other major institutions. Although such content is often textual (e.g. misleading accounts of major events), it is also often accompanied by out-of-context or doctored images that support particular views. Despite many studies into conspiracy theory (CT) beliefs, relatively little psychological research has examined whether certain people are more, or less, susceptible to visual manipulations in online environments. This study examined individual differences in the perception of image credibility and how these relate to pre-existing CT beliefs. Participants assigned credibility ratings to images in a 2 (fake/real) × 2 (CT/non-CT) design. A total of 329 online participants were presented with original or highly edited images of real-world scenes: half were CT-related and the other half were not. Performance was measured by the difference between credibility ratings assigned to real vs manipulated images. Consistent with study predictions, individuals with high conspiracy beliefs performed significantly worse at discriminating between fake and real images. This effect was stronger when images depicted CT-related content. This research contributes to the limited literature on online visual deception by showing that people with stronger CT beliefs find it harder to discriminate real from manipulated content. Thesis (B.PsychSc(Hons)) -- University of Adelaide, School of Psychology, 202

    Face and trust: A semiotic inquiry into influencers, money, and amygdala

    After the cultural explosion of Web 2.0, digital culture reveals an apparent semiotic paradox: images of faces are used incredibly widely, while at the same time the reason to trust in the authenticity of these faces is constantly declining. This is because graphic technology has made the sophisticated manipulation of images both possible and easy. After a review of the existing semiotic models and considerations of trust, I propose a new approach which emphasizes the value-generating properties of trust by analogy with the money sign, seen as “trust inscribed”. Research from the neurosciences supports the hypothesis that the trustworthiness of a face is judged pre-reflexively and primordially. This means that a trustworthy face is a premise for more successful communication than an untrustworthy one, regardless of the object of discussion and the cultural context. An example concerning social media influencers serves to show that in the internet-dominated, globalizing culture, trustworthy faces are a multipurpose communicative asset that makes a difference.