
    Extracting attributed verification and debunking reports from social media: MediaEval-2015 trust and credibility analysis of image and video

    Journalists are increasingly turning to technology to pre-filter and automate the simpler parts of the verification process. We present results from our semi-automated approach to trust and credibility analysis of tweets referencing suspicious images and videos. We use natural language processing to extract evidence from tweets in the form of fake and genuine claims attributed to trusted and untrusted sources. Results for team UoS-ITI in the MediaEval 2015 Verifying Multimedia Use task are reported. Our 'fake' tweet classifier precision scores range from 0.94 to 1.0 (recall 0.43 to 0.72), and our 'real' tweet classifier precision scores range from 0.74 to 0.78 (recall 0.51 to 0.74). Image classification precision scores range from 0.62 to 1.0 (recall 0.04 to 0.23). Our approach can automatically alert journalists in real time to trustworthy claims verifying or debunking viral images or videos.
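    The UoS-ITI pipeline itself is not reproduced here, but the attributed-claim idea can be illustrated with a minimal, hypothetical sketch: lexical cues flag fake/genuine claims, and a source list supplies a naive attribution check. The cue patterns, source names, and labels below are placeholders, not the vocabularies used in the paper.

```python
import re

# Hypothetical stand-ins for the paper's trusted-source and cue vocabularies.
TRUSTED_SOURCES = {"bbc", "reuters", "ap"}
FAKE_CUES = [r"\bfake\b", r"\bphotoshopped\b", r"\bhoax\b", r"\bdebunked\b"]
REAL_CUES = [r"\bconfirmed\b", r"\bverified\b", r"\bgenuine\b"]

def classify_tweet(text: str) -> str:
    """Label a tweet from lexical cues plus a naive attribution check
    (is a trusted source mentioned alongside the claim?)."""
    lowered = text.lower()
    attributed = any(src in lowered for src in TRUSTED_SOURCES)
    if any(re.search(p, lowered) for p in FAKE_CUES):
        return "fake" if attributed else "unattributed-fake-claim"
    if any(re.search(p, lowered) for p in REAL_CUES):
        return "real" if attributed else "unattributed-real-claim"
    return "unknown"

print(classify_tweet("BBC reports the shark photo is a hoax"))  # -> fake
```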

    Deep Multimodal Image-Repurposing Detection

    Nefarious actors on social media and other platforms often spread rumors and falsehoods through images whose metadata (e.g., captions) have been modified to provide visual substantiation of the rumor or falsehood. This type of modification is referred to as image repurposing: an unmanipulated image is often published along with incorrect or manipulated metadata to serve the actor's ulterior motives. We present the Multimodal Entity Image Repurposing (MEIR) dataset, a dataset substantially more challenging than those previously available to support research into image-repurposing detection. The new dataset includes location, person, and organization manipulations on real-world data sourced from Flickr. We also present a novel, end-to-end, deep multimodal learning model for assessing the integrity of an image by combining information extracted from the image with related information from a knowledge base. The proposed method is compared against state-of-the-art techniques on existing datasets as well as MEIR, where it outperforms existing methods across the board, with AUC improvements of up to 0.23. Comment: To be published at ACM Multimedia 2018 (orals).
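    As a rough illustration of the fusion idea (not the MEIR architecture itself), a model can concatenate a precomputed image embedding with a text embedding built from the metadata and related knowledge-base entries, then score integrity. All dimensions and layer sizes in this PyTorch sketch are assumptions.

```python
import torch
import torch.nn as nn

class RepurposingDetector(nn.Module):
    """Minimal fusion sketch: concatenate a precomputed image embedding
    with a metadata/knowledge-base text embedding and emit an integrity
    logit. Dimensions are illustrative, not those of the MEIR paper."""
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: repurposed vs. consistent metadata
        )

    def forward(self, img_emb, txt_emb):
        return self.fuse(torch.cat([img_emb, txt_emb], dim=-1))

model = RepurposingDetector()
logit = model(torch.randn(4, 2048), torch.randn(4, 768))
prob = torch.sigmoid(logit)  # probability that the metadata was repurposed
```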

    Web Video Verification using Contextual Cues

    As news agencies and the public increasingly rely on User-Generated Content, content verification is vital for news producers and consumers alike. We present a novel approach for verifying Web videos by analyzing their online context. It is based on supervised learning over two sets of contextual features: one set adapts an existing approach for tweet verification to video comments; the other is based on video metadata, such as the video description, likes/dislikes, and uploader information. We evaluate both on a dataset of real and fake videos from YouTube and demonstrate their effectiveness (F-scores: 0.82 and 0.79). We then explore their complementarity and show that under an optimal fusion scheme, the classifier would reach an F-score of 0.9. We finally study the performance of the classifier through time, as more comments accumulate, emulating a real-time verification setting.
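    A hedged sketch of the two-feature-set design with simple late fusion, using scikit-learn and random toy features in place of the paper's YouTube corpus. The paper's optimal fusion scheme is an upper bound; plain probability averaging is shown here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: X_comments stands in for comment-based (tweet-style) features,
# X_meta for metadata features (description stats, likes/dislikes, uploader).
rng = np.random.default_rng(0)
X_comments = rng.normal(size=(200, 10))
X_meta = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)  # 1 = fake video

clf_comments = RandomForestClassifier(random_state=0).fit(X_comments, y)
clf_meta = RandomForestClassifier(random_state=0).fit(X_meta, y)

# Late fusion: average the two classifiers' fake-probabilities.
p_fake = (clf_comments.predict_proba(X_comments)[:, 1]
          + clf_meta.predict_proba(X_meta)[:, 1]) / 2
fused_label = (p_fake >= 0.5).astype(int)
```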

    Improving Generalization for Multimodal Fake News Detection

    The increasing proliferation of misinformation and its alarming impact have motivated both industry and academia to develop approaches for fake news detection. However, state-of-the-art approaches are usually trained on datasets of smaller size or with a limited set of specific topics. As a consequence, these models lack generalization capabilities and are not applicable to real-world data. In this paper, we propose three models that adopt and fine-tune state-of-the-art multimodal transformers for multimodal fake news detection. We conduct an in-depth analysis in which we manipulate the input data to explore model performance in realistic use cases on social media. Our study across multiple models demonstrates that these systems suffer significant performance drops on manipulated data. To reduce this bias and improve model generalization, we suggest augmenting the training data, enabling more meaningful experiments for fake news detection on social media. The proposed data augmentation techniques enable models to generalize better and yield improved state-of-the-art results. Comment: This paper has been accepted for ICMR 2023.
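    One augmentation in this spirit, sketched below under the assumption that samples are (image, caption, label) triples, is to re-pair images with captions drawn from other samples and label the result as mismatched; the paper's actual augmentation techniques may differ.

```python
import random

def augment_pairs(samples, swap_prob=0.5, seed=0):
    """Hypothetical augmentation: randomly re-pair an image with a caption
    from another sample and label the new pair as fake/mismatched (1)."""
    rng = random.Random(seed)
    augmented = list(samples)  # each sample: (image_path, caption, label)
    for image, _, _ in samples:
        if rng.random() < swap_prob:
            _, other_caption, _ = rng.choice(samples)
            augmented.append((image, other_caption, 1))
    return augmented

data = [("a.jpg", "Flood in city X", 0), ("b.jpg", "Protest in city Y", 0)]
print(augment_pairs(data))
```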

    Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia Content

    With the increasing popularity of smart devices, rumors with multimedia content are becoming more and more common on social networks. Multimedia content usually makes rumors look more convincing. Therefore, finding an automatic approach to verify rumors with multimedia content is a pressing task. Previous rumor verification research only utilizes multimedia as input features. We propose not to use the multimedia content itself but to find external information on other news platforms by pivoting on it. We introduce a new feature set, cross-lingual cross-platform features, which leverages the semantic similarity between the rumors and the external information. Machine learning methods utilizing such features achieve state-of-the-art rumor verification results.
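    A minimal sketch of one such cross-lingual similarity feature, assuming a multilingual sentence encoder from the sentence-transformers library; the model name and example texts below are illustrative, not necessarily those used in the paper.

```python
from sentence_transformers import SentenceTransformer, util

# A multilingual encoder lets us compare a rumor post with external news
# text written in another language.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

rumor = "Shark swimming on a flooded highway after the hurricane"
external = ["Tiburón nadando en una autopista inundada: la foto es un montaje"]

sim = util.cos_sim(model.encode([rumor]), model.encode(external))
print(float(sim[0][0]))  # one cross-lingual feature for the verifier
```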

    Visual and Textual Analysis for Image Trustworthiness Assessment within Online News

    The majority of news published online presents one or more images or videos, which makes the news easier to consume and therefore more attractive to large audiences. As a consequence, news with catchy multimedia content can spread and go viral extremely quickly. Unfortunately, the availability and sophistication of photo-editing software are erasing the line between pristine and manipulated content. Given that images have the power to bias and influence the opinions and behavior of readers, the need for automatic techniques to assess the authenticity of images is evident. This paper aims at detecting images published within online news that either have been maliciously modified or do not accurately represent the event the news is reporting. The proposed approach combines image forensic algorithms for detecting image tampering with textual analysis that flags images misaligned with the textual content. Furthermore, textual analysis can serve as a complementary source of information, supporting image forensics techniques when they falsely detect or falsely ignore image tampering due to heavy image post-processing. The devised method is tested on three datasets. The performance on the first two shows promising results, with F1-scores generally higher than 75%. The third dataset has an exploratory intent: although it shows that the methodology is not ready for completely unsupervised scenarios, it makes it possible to investigate problems and controversial cases that might arise in real-world scenarios.
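    As a toy illustration of the fusion idea, assuming the forensic detector and the text-image alignment module each emit a score in [0, 1]; the weighting below is a placeholder, not the paper's combination rule.

```python
def trust_score(tamper_prob: float, text_image_sim: float, w: float = 0.5) -> float:
    """Toy fusion of the two signals the paper combines: a forensic
    tampering probability (0 = pristine) and a text-image semantic
    similarity (1 = well aligned). The weight w is illustrative."""
    authenticity = 1.0 - tamper_prob
    return w * authenticity + (1.0 - w) * text_image_sim

# Heavily post-processed but on-topic image: forensics is unsure,
# textual alignment keeps the trust score from collapsing.
print(trust_score(tamper_prob=0.6, text_image_sim=0.9))  # 0.65
```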

    Automatically estimating emotion in music with deep long-short term memory recurrent neural networks

    In this paper we describe our approach to MediaEval's "Emotion in Music" task. Our method consists of deep Long Short-Term Memory Recurrent Neural Networks (LSTM-RNNs) for dynamic Arousal and Valence regression, using acoustic and psychoacoustic features extracted from the songs that have previously been shown to be effective for emotion prediction in music. Results on the challenge test set demonstrate excellent performance for Arousal estimation (r = 0.613 ± 0.278), but not for Valence (r = 0.026 ± 0.500). Issues regarding the reliability and distribution of the test set annotations are indicated as plausible explanations for these results. By using a subset of the development set that was held out for performance estimation, we determined that the performance of our approach may be underestimated for Valence (Arousal: r = 0.596 ± 0.386; Valence: r = 0.458 ± 0.551).
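    A minimal PyTorch sketch of such a stacked ("deep") LSTM regressor mapping per-frame acoustic features to dynamic Arousal/Valence values; the feature count, layer sizes, and sequence length are placeholders rather than the configuration tuned for MediaEval.

```python
import torch
import torch.nn as nn

class EmotionLSTM(nn.Module):
    """Stacked LSTM regressor: per-frame acoustic features in,
    [arousal, valence] per time step out. Sizes are placeholders."""
    def __init__(self, n_feats=65, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, num_layers=layers, batch_first=True)
        self.head = nn.Linear(hidden, 2)

    def forward(self, x):          # x: (batch, time, n_feats)
        out, _ = self.lstm(x)
        return self.head(out)      # (batch, time, 2)

model = EmotionLSTM()
pred = model(torch.randn(8, 60, 65))  # 8 clips, 60 frames each
```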