A Topic-Agnostic Approach for Identifying Fake News Pages
Fake news and misinformation have been increasingly used to manipulate
popular opinion and influence political processes. To better understand fake
news, how it propagates, and how to counter its effects, it is necessary
to first identify it. Recently, approaches have been proposed to
automatically classify articles as fake based on their content. An important
challenge for these approaches comes from the dynamic nature of news: as new
political events are covered, topics and discourse constantly change and thus,
a classifier trained using content from articles published at a given time is
likely to become ineffective in the future. To address this challenge, we
propose a topic-agnostic (TAG) classification strategy that uses linguistic and
web-markup features to identify fake news pages. We report experimental results
using multiple data sets which show that our approach attains high accuracy in
the identification of fake news, even as topics evolve over time.Comment: Accepted for publication in the Companion Proceedings of the 2019
World Wide Web Conference (WWW'19 Companion). Presented in the 2019
International Workshop on Misinformation, Computational Fact-Checking and
Credible Web (MisinfoWorkshop2019). 6 page
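The abstract does not give implementation details, but the core idea of topic-agnostic features can be sketched: compute linguistic-style statistics and raw web-markup counts instead of content words, so the features do not drift as news topics change. The feature set below is an illustrative assumption, not the paper's actual one.

```python
import re

def topic_agnostic_features(html_page):
    """Compute topic-agnostic cues: linguistic-style statistics plus raw
    web-markup counts, so no content word ties the features to a topic.
    (Hypothetical feature set, not the paper's.)"""
    tags = re.findall(r"<[^>]+>", html_page)
    text = re.sub(r"<[^>]+>", " ", html_page)   # linguistic stats on visible text
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "exclamation_rate": html_page.count("!") / n_words,
        "all_caps_ratio": sum(w.isupper() and len(w) > 1 for w in words) / n_words,
        "markup_tag_count": len(tags),
    }

page = "<div><p>SHOCKING news!! You WON'T believe it!</p></div>"
feats = topic_agnostic_features(page)
```

A classifier trained on such features can in principle transfer across topics, since none of the inputs mention specific entities or events.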
Co-Following on Twitter
We present an in-depth study of co-following on Twitter based on the
observation that two Twitter users whose followers have similar friends are
also similar, even though they might not share any direct links or a single
mutual follower. We show how this observation contributes to (i) a better
understanding of language-agnostic user classification on Twitter, (ii)
eliciting opportunities for Computational Social Science, and (iii) improving
online marketing by identifying cross-selling opportunities.
We start with a machine learning problem of predicting a user's preference
among two alternative choices of Twitter friends. We show that co-following
information provides strong signals for diverse classification tasks and that
these signals persist even when (i) the most discriminative features are
removed and (ii) only relatively "sparse" users with fewer than 152 but more
than 43 Twitter friends are considered.
Going beyond mere classification performance optimization, we present
applications of our methodology to Computational Social Science. Here we
confirm stereotypes such as that the country singer Kenny Chesney
(@kennychesney) is more popular among @GOP followers, whereas Lady Gaga
(@ladygaga) enjoys more support from @TheDemocrats followers.
In the domain of marketing we give evidence that celebrity endorsement is
reflected in co-following and we demonstrate how our methodology can be used to
reveal the audience similarities between Apple and Puma and, less obviously,
between Nike and Coca-Cola. Concerning a user's popularity we find a
statistically significant connection between having a more "average"
followership and having more followers than direct rivals. Interestingly, a
\emph{larger} audience also seems to be linked to a \emph{less diverse}
audience in terms of their co-following.
Comment: full version of a short paper at Hypertext 201
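The core observation, that accounts whose followers follow similar accounts are themselves similar, can be illustrated with a small sketch: build an "audience profile" for each account from its followers' friend lists and compare profiles with cosine similarity. The `follows` data model below (user mapped to the set of accounts that user follows) is an assumption for illustration, not the paper's pipeline.

```python
from collections import Counter
import math

def audience_profile(account, follows):
    """Aggregate the friend lists of `account`'s followers.
    Toy data model (an assumption): `follows` maps user -> set of
    accounts that user follows."""
    followers = {u for u, friends in follows.items() if account in friends}
    profile = Counter()
    for u in followers:
        profile.update(follows[u] - {account})
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c * q[k] for k, c in p.items())
    norm = math.sqrt(sum(c * c for c in p.values())) * \
           math.sqrt(sum(c * c for c in q.values()))
    return dot / norm if norm else 0.0

follows = {
    "u1": {"A", "X", "Y"},
    "u2": {"A", "X"},
    "u3": {"B", "X", "Y"},
    "u4": {"B", "Z"},
}
# A and B share no followers, yet their audiences follow similar accounts,
# so their co-following similarity is high.
sim = cosine(audience_profile("A", follows), audience_profile("B", follows))
```

Note that A and B need not share a single mutual follower or direct link for the similarity to be high, which is exactly the point made in the abstract.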
False News On Social Media: A Data-Driven Survey
In the past few years, the research community has dedicated growing interest
to the issue of false news circulating on social networks. The widespread
attention on detecting and characterizing false news has been motivated by
the considerable real-world repercussions of this threat. As a matter of
fact, social media platforms exhibit peculiar characteristics, compared to
traditional news outlets, that have been particularly favorable to the
proliferation of deceptive information. They also present unique challenges for
all kinds of potential interventions on the subject. As this issue becomes of
global concern, it is also gaining more attention in academia. The aim of this
survey is to offer a comprehensive study on the recent advances in terms of
detection, characterization and mitigation of false news that propagate on
social media, as well as the challenges and the open questions that await
future research on the field. We use a data-driven approach, focusing on a
classification of the features that are used in each study to characterize
false information and on the datasets used for training classification
methods. At the end of the survey, we highlight emerging approaches that look
most promising for addressing false news.
Knowledge-Enhanced Hierarchical Information Correlation Learning for Multi-Modal Rumor Detection
The explosive growth of rumors with text and images on social media platforms
has drawn great attention. Existing studies have made significant contributions
to cross-modal information interaction and fusion, but they fail to fully
explore the hierarchical and complex semantic correlations across different
modalities, severely limiting their performance on multi-modal rumor detection. In
this work, we propose a novel knowledge-enhanced hierarchical information
correlation learning approach (KhiCL) for multi-modal rumor detection by
jointly modeling the basic semantic correlation and high-order
knowledge-enhanced entity correlation. Specifically, KhiCL exploits a
cross-modal joint dictionary to transfer heterogeneous unimodal features into a
common feature space, and captures basic cross-modal semantic consistency
and inconsistency with a cross-modal fusion layer. Moreover, considering that
multi-modal content is narrated around entities, KhiCL extracts
visual and textual entities from images and text, and designs a knowledge
relevance reasoning strategy to find the shortest semantically relevant path
between each pair of entities in an external knowledge graph, and absorbs the
complementary contextual knowledge of the other entities along this path to
learn knowledge-enhanced entity representations. Furthermore, KhiCL utilizes
a signed attention mechanism to model the knowledge-enhanced entity consistency
and inconsistency of intra-modality and inter-modality entity pairs by
measuring their corresponding semantic relevant distance. Extensive experiments
have demonstrated the effectiveness of the proposed method.
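The knowledge-relevance reasoning step, finding the shortest semantically relevant path between entity pairs in an external knowledge graph, reduces at its core to a shortest-path search over the graph. A minimal breadth-first-search sketch over a hypothetical toy graph (the entity names and adjacency structure are illustrative, not from the paper):

```python
from collections import deque

def shortest_kg_path(graph, src, dst):
    """BFS for the shortest relation path between two entities in a toy
    knowledge graph given as an adjacency dict; a stand-in for the
    knowledge-relevance reasoning step described above."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connecting path in the graph

kg = {
    "Eiffel Tower": ["Paris"],
    "Paris": ["France", "Eiffel Tower"],
    "France": ["Europe", "Paris"],
}
path = shortest_kg_path(kg, "Eiffel Tower", "France")
```

In the approach described, the intermediate entities on such a path (here "Paris") would contribute the complementary contextual knowledge absorbed into the entity representations.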
Misinformation Containment Using NLP and Machine Learning: Why the Problem Is Still Unsolved
Despite increased attention and substantial research claiming outstanding successes, the problem of misinformation containment has only grown in recent years, with few signs of respite. Misinformation rapidly changes its latent characteristics and spreads vigorously in a multi-modal fashion, sometimes more damagingly than viruses and other malicious programs on the internet. This chapter examines the existing research in natural language processing and machine learning aimed at stopping the spread of misinformation, analyzes why this research has not been practical enough to be incorporated into social media platforms, and provides future research directions. The state-of-the-art feature engineering, approaches, and algorithms used for the problem are expounded in the process.
Interpretable machine learning in natural language processing for misinformation data
Mini Dissertation (MIT (Big Data Science))--University of Pretoria, 2022.
The interpretability of models has been one of the focal research topics in the machine
learning community due to a rise in the use of black box models and complex
state-of-the-art models [6]. Most of these models are debugged through trial and error,
based on end-to-end learning [7, 48]. This creates some uneasiness and distrust among
the end-user consumers of the models, which has resulted in limited use of black box
models in disciplines where explainability is required [33]. However, alternative models,
"white-box models," come with a trade-off in accuracy and predictive power [7]. This research
focuses on interpretability in natural language processing for misinformation data.
First, we explore example-based techniques through prototype selection to determine if
we can observe any key behavioural insights from a misinformation dataset. We use
four prototype selection techniques: Clustering, Set Cover, MMD-critic, and Influential
examples. We analyse the quality of each technique’s prototype set and take
the two prototype sets of highest quality forward for word analysis, linguistic
characteristics, and interpretability with the LIME technique. Secondly, we
examine whether there are any critical insights in the South African disinformation context.
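Of the four prototype-selection techniques listed, clustering-style selection is the simplest to illustrate. A minimal stand-in picks the medoid, the example minimizing total distance to all others, as a group's prototype. The toy 2-D "document embeddings" below are an illustrative assumption, not data from the dissertation.

```python
def medoid(points):
    """Return the example minimising total squared distance to all others,
    i.e. the most central point, used here as the set's prototype."""
    def total_dist(p):
        return sum(sum((a - b) ** 2 for a, b in zip(p, q)) for q in points)
    return min(points, key=total_dist)

# Toy 2-D "document embeddings" (illustrative assumption): three similar
# misinformation posts and one outlier.
docs = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.1), (5.0, 5.0)]
proto = medoid(docs)
```

Techniques such as MMD-critic extend this idea, selecting prototypes that match the data distribution while also surfacing "criticisms" (poorly represented examples).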