Zoom Out and Observe: News Environment Perception for Fake News Detection
Fake news detection is crucial for preventing the dissemination of
misinformation on social media. To differentiate fake news from real news,
existing methods observe the language patterns of the news post and "zoom in"
to verify its content with knowledge sources or check its readers' replies.
However, these methods neglect the information in the external news environment
where a fake news post is created and disseminated. The news environment
represents recent mainstream media opinion and public attention, which is an
important inspiration for fake news fabrication, because fake news is often
designed to ride the wave of popular events and catch public attention with
unexpected novel content for greater exposure and spread. To capture the
environmental signals of news posts, we "zoom out" to observe the news
environment and propose the News Environment Perception Framework (NEP). For
each post, we construct its macro and micro news environment from recent
mainstream news. Then we design a popularity-oriented and a novelty-oriented
module to perceive useful signals and further assist final prediction.
Experiments on our newly built datasets show that the NEP can efficiently
improve the performance of basic fake news detectors. Comment: ACL 2022 Main Conference (Long Paper).
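The abstract describes two environment-perception signals: popularity (how much a post rides recent mainstream events) and novelty (how much it deviates from its closest environment items). A minimal sketch of how such signals could be computed from embeddings, assuming the macro environment is a list of recent-news vectors and the micro environment is approximated by the post's k nearest macro items (all names and formulas here are illustrative, not the paper's actual modules):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity with a small epsilon for numerical safety.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def environment_signals(post_vec, macro_env, k=3):
    """Hypothetical sketch: `macro_env` holds embeddings of recent
    mainstream news; the micro environment is approximated as the k
    items most similar to the post."""
    sims = sorted((cosine(post_vec, e) for e in macro_env), reverse=True)
    k = min(k, len(sims))
    popularity = sum(sims) / len(sims)   # closeness to recent hot events
    novelty = 1.0 - sum(sims[:k]) / k    # distance from nearest (micro) items
    return popularity, novelty
```

Both signals could then be concatenated with a base detector's features for the final prediction.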
Improving Fake News Detection of Influential Domain via Domain- and Instance-Level Transfer
Both real and fake news in various domains, such as politics, health, and
entertainment are spread via online social media every day, necessitating fake
news detection for multiple domains. Among them, fake news in specific domains
like politics and health has more serious potential negative impacts on the
real world (e.g., the infodemic led by COVID-19 misinformation). Previous
studies focus on multi-domain fake news detection, by equally mining and
modeling the correlation between domains. However, these multi-domain methods
suffer from a seesaw problem: the performance of some domains is often improved
at the cost of hurting performance in other domains, which can lead to
unsatisfactory performance in specific domains. To address this issue, we propose
a Domain- and Instance-level Transfer Framework for Fake News Detection
(DITFEND), which could improve the performance of specific target domains. To
transfer coarse-grained domain-level knowledge, we train a general model with
data of all domains from the meta-learning perspective. To transfer
fine-grained instance-level knowledge and adapt the general model to a target
domain, we train a language model on the target domain to evaluate the
transferability of each data instance in source domains and re-weigh each
instance's contribution. Offline experiments on two datasets demonstrate the
effectiveness of DITFEND. Online experiments show that DITFEND brings
additional improvements over the base models in a real-world scenario. Comment: Accepted by COLING 2022, the 29th International Conference on Computational Linguistics, Gyeongju, Republic of Korea.
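The instance-level transfer step scores each source-domain instance by its transferability to the target domain and re-weighs its training contribution. A minimal sketch of such re-weighting, assuming each instance already has a scalar transferability score (e.g. a target-domain language model's log-likelihood); the softmax form and temperature are illustrative assumptions, not the paper's exact scheme:

```python
import math

def instance_weights(transfer_scores, temperature=1.0):
    """Hypothetical sketch: turn per-instance transferability scores
    into normalized training weights via a temperature-scaled softmax,
    so more transferable source instances contribute more."""
    exps = [math.exp(s / temperature) for s in transfer_scores]
    total = sum(exps)
    return [e / total for e in exps]
```

A higher temperature flattens the weights toward uniform training; a lower one concentrates training on the most target-like instances.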
Personalized Prompt for Sequential Recommendation
Pre-training models have shown their power in sequential recommendation.
Recently, prompt tuning has been widely explored and verified in NLP
pre-training; it can more effectively and efficiently extract useful
knowledge from pre-trained models for downstream tasks, especially in
cold-start scenarios. However, it is challenging to bring prompt tuning from
NLP to recommendation, since the tokens in recommendation (i.e., items) do not
have explicit explainable semantics, and the sequence modeling should be
personalized. In this work, we first introduce prompts to recommendation and
propose a novel Personalized Prompt-based Recommendation (PPR) framework for
cold-start recommendation. Specifically, we build the personalized soft prefix
prompt via a prompt generator based on user profiles and enable a sufficient
training of prompts via a prompt-oriented contrastive learning with both
prompt- and behavior-based augmentations. We conduct extensive evaluations on
various tasks. In both few-shot and zero-shot recommendation, PPR models
achieve significant improvements over baselines on various metrics in three
large-scale open datasets. We also conduct ablation tests and sparsity analysis
for a better understanding of PPR. Moreover, we verify PPR's universality on
different pre-training models and explore other promising downstream tasks
for PPR, including cross-domain recommendation and user profile prediction.
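The core idea above is a prompt generator that maps a user profile to a personalized soft prefix, which is prepended to the item-embedding sequence fed into the pre-trained recommender. A minimal sketch, assuming a simple linear generator and made-up dimensions (the actual PPR generator architecture and the contrastive training loop are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_prompt_generator(profile_dim, prefix_len, emb_dim):
    """Hypothetical sketch: a linear prompt generator mapping a user
    profile vector to `prefix_len` soft prompt embeddings."""
    W = rng.normal(scale=0.02, size=(profile_dim, prefix_len * emb_dim))
    def generate(profile_vec):
        # One soft prompt vector per prefix position, derived from the profile.
        return (profile_vec @ W).reshape(prefix_len, emb_dim)
    return generate

gen = make_prompt_generator(profile_dim=8, prefix_len=4, emb_dim=16)
prompt = gen(np.ones(8))  # shape (4, 16): four personalized prompt vectors
```

In a full pipeline, `prompt` would be concatenated in front of the user's item-embedding sequence before it enters the frozen (or lightly tuned) pre-trained model.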
Modeling the Field Value Variations and Field Interactions Simultaneously for Fraud Detection
With the explosive growth of e-commerce, online transaction fraud has become
one of the biggest challenges for e-commerce platforms. The historical
behaviors of users provide rich information for digging into the users' fraud
risk. While considerable efforts have been made in this direction, a
long-standing challenge is how to effectively exploit internal user information
and provide explainable prediction results. In fact, the value variations of
the same field across different events and the interactions of different
fields within one event have proven to be strong indicators of fraudulent behavior.
In this paper, we propose the Dual Importance-aware Factorization Machines
(DIFM), which exploits the internal field information among users' behavior
sequence from dual perspectives, i.e., field value variations and field
interactions simultaneously for fraud detection. The proposed model is deployed
in the risk management system of one of the world's largest e-commerce
platforms, which utilizes it to provide real-time transaction fraud detection.
Experimental results on real industrial data from different regions in the
platform clearly demonstrate that our model achieves significant improvements
compared with various state-of-the-art baseline models. Moreover, the DIFM
could also give an insight into the explanation of the prediction results from
dual perspectives. Comment: 11 pages, 4 figures.
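The two perspectives above map onto two computable quantities: pairwise field interactions inside one event (the classic factorization-machine second-order term) and value variation of one field across a user's event sequence. A minimal sketch, assuming each field is represented by an embedding vector; the variation measure here is an illustrative assumption, not DIFM's exact importance-aware formulation:

```python
import numpy as np

def fm_interactions(field_embs):
    """Second-order factorization-machine term over an event's fields,
    using the standard O(n*d) identity:
    sum_{i<j} <v_i, v_j> = 0.5 * (||sum_i v_i||^2 - sum_i ||v_i||^2)."""
    s = field_embs.sum(axis=0)
    return 0.5 * float(s @ s - (field_embs * field_embs).sum())

def field_value_variation(event_embs):
    """Hypothetical variation signal for one field across events: mean
    distance of each event's value embedding from the field's average."""
    mean = event_embs.mean(axis=0)
    return float(np.linalg.norm(event_embs - mean, axis=1).mean())
```

A model in this spirit would weight both quantities per field ("importance-aware") before combining them into the fraud score.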