
    TIB's Visual Analytics Group at MediaEval '20: Detecting Fake News on Corona Virus and 5G Conspiracy

    Fake news on social media has become a hot topic of research as it negatively impacts public discourse around real news. Specifically, the ongoing COVID-19 pandemic has seen a rise of inaccurate and misleading information due to the surrounding controversies and unknown details at the beginning of the pandemic. The FakeNews task at MediaEval 2020 tackles this problem by creating a challenge to automatically detect tweets containing misinformation based on tweet text and the structure of the Twitter follower network. In this paper, we present a simple approach that uses BERT embeddings and a shallow neural network to classify tweets using only text, and discuss our findings and the limitations of the approach in text-based misinformation detection. Comment: MediaEval 2020 FakeNews Task
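    A minimal sketch of the text-only pipeline described above: pre-trained BERT embeddings feeding a shallow feed-forward classifier. The model name, mean-pooling strategy, layer sizes, and three-way label set are illustrative assumptions, not the authors' exact configuration; only the forward pass is shown, with training on the labeled tweets left out.

# Sketch: BERT sentence embeddings + shallow classifier for tweet labels.
# Model name, pooling, layer sizes, and class count are assumptions.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

class ShallowClassifier(nn.Module):
    def __init__(self, hidden=768, classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden, 128),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(128, classes),
        )

    def forward(self, x):
        return self.net(x)

def embed(texts):
    # Mean-pool the last hidden state into a fixed-size tweet embedding.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out * mask).sum(1) / mask.sum(1)

clf = ShallowClassifier()
logits = clf(embed(["5G towers cause COVID-19", "Stay home, stay safe"]))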

    On the Role of Images for Analyzing Claims in Social Media

    Fake news is a severe problem in social media. In this paper, we present an empirical study on visual, textual, and multimodal models for the tasks of claim detection, claim check-worthiness detection, and conspiracy detection, all of which are related to fake news detection. Recent work suggests that images are more influential than text and often appear alongside fake text. To this end, several multimodal models have been proposed in recent years that use images along with text to detect fake news on social media sites such as Twitter. However, the role of images is not well understood for claim detection, specifically when using transformer-based textual and multimodal models. We investigate state-of-the-art models for images, text (transformer-based), and multimodal information on four different datasets across two languages to understand the role of images in the tasks of claim and conspiracy detection.
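    A generic way to combine the two modalities discussed above is late fusion of an image backbone with a transformer text encoder. The sketch below illustrates that idea only; the ResNet-50 and multilingual BERT backbones, the concatenation fusion, and the binary head are assumptions, not the architectures evaluated in the paper.

# Sketch: late fusion of image and text features for claim detection.
# Backbones, fusion layout, and class count are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from transformers import AutoModel

class LateFusionClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.vision = resnet50(weights=ResNet50_Weights.DEFAULT)
        self.vision.fc = nn.Identity()  # expose the 2048-d pooled image feature
        self.text = AutoModel.from_pretrained("bert-base-multilingual-cased")
        self.head = nn.Linear(2048 + 768, num_classes)

    def forward(self, pixel_batch, token_batch):
        # pixel_batch: (B, 3, 224, 224); token_batch: tokenizer output dict.
        v = self.vision(pixel_batch)                           # (B, 2048)
        t = self.text(**token_batch).last_hidden_state[:, 0]   # [CLS], (B, 768)
        return self.head(torch.cat([v, t], dim=-1))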

    Analysis of Neutral Higgs-Boson Contributions to the Decays B_s -> l^+l^- and B -> K l^+l^-

    We report on a calculation of Higgs-boson contributions to the decays B_s -> l^+l^- and B -> K l^+l^- (l=e, mu), which are governed by the effective Hamiltonian describing b -> s l^+ l^-. Compact formulae for the Wilson coefficients are provided in the context of the type-II two-Higgs-doublet model (2HDM) and supersymmetry (SUSY) with minimal flavour violation, focusing on the case of large tan(beta). We derive, in a model-independent way, constraints on Higgs-boson-mediated interactions, using present experimental results on rare B decays including b -> s gamma, B_s -> mu^+ mu^-, and B -> K^(*) mu^+ mu^-. In particular, we assess the impact of possible scalar and pseudoscalar interactions beyond the standard model (SM) on the branching ratio of B_s -> mu^+ mu^- and the forward-backward (FB) asymmetry of the mu^- in B -> K mu^+ mu^- decay. We find that the average FB asymmetry, which is unobservably small within the SM and therefore a potentially valuable probe of new physics, is predicted to be no greater than 4% for a nominal branching ratio of about 6x10^{-7}. Moreover, striking effects on the decay spectrum of B -> K mu^+ mu^- are already ruled out by experimental data on the B_s -> mu^+ mu^- branching fraction. In addition, we study the constraints on the parameter space of the 2HDM and SUSY with minimal flavour violation. While the type-II 2HDM does not give any sizable contributions to the above decay modes, we find that SUSY contributions obeying the constraint on b -> s gamma can significantly affect the branching ratio of B_s -> mu^+ mu^-. We also comment on previous calculations in the literature. Comment: 29 pages, REVTeX, 8 figures. Minor corrections in Eqs. (5.4), (5.11) and (6.3) of the published version
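    For reference, a standard definition of the normalized forward-backward asymmetry discussed above is the following, written in generic notation (theta is taken here as the angle between the mu^- and the B-meson direction in the dilepton rest frame; the paper's own conventions and kinematic variables may differ):

% Generic definition of the normalized FB asymmetry in B -> K mu^+ mu^-,
% differential in the dilepton invariant mass squared s.
\begin{equation}
  A_{\mathrm{FB}}(s) =
  \frac{\displaystyle\int_{0}^{1} d\cos\theta\, \frac{d^{2}\Gamma}{ds\, d\cos\theta}
        - \int_{-1}^{0} d\cos\theta\, \frac{d^{2}\Gamma}{ds\, d\cos\theta}}
       {\displaystyle\int_{-1}^{1} d\cos\theta\, \frac{d^{2}\Gamma}{ds\, d\cos\theta}}
\end{equation}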

    A Fair and Comprehensive Comparison of Multimodal Tweet Sentiment Analysis Methods

    Opinion and sentiment analysis is a vital task for characterizing subjective information in social media posts. In this paper, we present a comprehensive experimental evaluation and comparison of six state-of-the-art methods, one of which we have re-implemented. In addition, we investigate different textual and visual feature embeddings that cover different aspects of the content, as well as the recently introduced multimodal CLIP embeddings. Experimental results are presented for two different publicly available benchmark datasets of tweets and corresponding images. In contrast to the evaluation methodology of previous work, we introduce a reproducible and fair evaluation scheme to make results comparable. Finally, we conduct an error analysis to outline the limitations of the methods and possibilities for future work. Comment: Accepted at the Workshop on Multi-Modal Pre-Training for Multimedia Understanding (MMPT 2021), co-located with ICMR 2021
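    A minimal sketch of how the CLIP embeddings mentioned above can serve as multimodal features for a tweet sentiment classifier. The checkpoint name, the concatenation of the projected text and image embeddings, the example file path, and the three-class linear head are illustrative assumptions rather than the paper's exact setup.

# Sketch: CLIP image + text embeddings as features for sentiment classification.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_features(text, image_path):
    # Encode one tweet (text + attached image) into a 1024-d multimodal vector.
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=[text], images=image, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        out = clip(**inputs)
    # Concatenate the projected text and image embeddings (512-d each).
    return torch.cat([out.text_embeds, out.image_embeds], dim=-1)

head = nn.Linear(1024, 3)  # negative / neutral / positive (assumed label set)
logits = head(clip_features("Best concert ever!", "tweet_image.jpg"))  # hypothetical file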

    Understanding image-text relations and news values for multimodal news analysis

    The analysis of news dissemination is of utmost importance since the credibility of information and the identification of disinformation and misinformation affect society as a whole. Given the large amounts of news data published daily on the Web, the empirical analysis of news with regard to research questions and the detection of problematic news content on the Web require computational methods that work at scale. Today's online news is typically disseminated in a multimodal form, including various presentation modalities such as text, image, audio, and video. Recent developments in multimodal machine learning now make it possible to capture basic “descriptive” relations between modalities, such as correspondences between words and phrases on the one hand and visual depictions of the verbally expressed information on the other. Although such advances have enabled tremendous progress in tasks like image captioning, text-to-image generation, and visual question answering, in domains such as news dissemination there is a need to go further. In this paper, we introduce a novel framework for the computational analysis of multimodal news. We motivate a set of more complex image-text relations as well as multimodal news values based on real examples of news reports and consider their realization by computational approaches. To this end, we provide (a) an overview of existing literature from semiotics, where detailed proposals have been made for taxonomies covering diverse image-text relations generalisable to any domain; (b) an overview of computational work that derives models of image-text relations from data; and (c) an overview of a particular class of news-centric attributes developed in journalism studies called news values. The result is a novel framework for multimodal news analysis that closes existing gaps in previous work while maintaining and combining the strengths of those accounts. We assess and discuss the elements of the framework with real-world examples and use cases, setting out research directions at the intersection of multimodal learning, multimodal analytics, and computational social sciences that can benefit from our approach.