2,341 research outputs found
Single-User Injection for Invisible Shilling Attack against Recommender Systems
Recommendation systems (RS) are crucial for alleviating the information overload problem. Due to their pivotal role in guiding users' decisions, unscrupulous parties are tempted to launch attacks against RS to sway the decisions of normal users and gain illegal profits. Among the various types of attacks, the shilling attack is one of the most persistent and profitable. In a shilling attack, an adversarial party injects a number of well-designed fake user profiles into the system to mislead the RS so that the attack goal can be achieved. Although existing shilling attack methods have achieved promising results, they all adopt the paradigm of multi-user injection, which requires multiple fake user profiles. This paper provides the first study of shilling attacks in an extremely limited scenario: only one fake user profile is injected into the victim RS (i.e., single-user injection). We propose SUI-Attack, a novel single-user injection method for invisible shilling attacks. SUI-Attack is a graph-based method that models the shilling attack as a node generation task over the user-item bipartite graph of the victim RS, constructing the fake user profile by generating user features and the edges that link the fake user to items. Extensive experiments demonstrate that SUI-Attack achieves promising attack results with single-user injection. Beyond its attack power, SUI-Attack increases the stealthiness of the shilling attack and reduces the risk of being detected. We provide our implementation at: https://github.com/KDEGroup/SUI-Attack.
Comment: CIKM 2023. 10 pages, 5 figures.
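For intuition, here is a minimal, hedged sketch of the single-user-injection setting, not the authors' SUI-Attack itself: a naive baseline that builds one fake rating profile (target item at the maximum rating, popularity-sampled filler items rated near their per-item means) and appends it as a single new node in the user-item bipartite graph. All names and heuristics below are our own illustration.

    import numpy as np

    # Hedged sketch (not the authors' SUI-Attack): a naive single fake
    # profile for a push attack. `ratings` is a dense user-item matrix
    # with 0 meaning "unrated"; the profile is one new bipartite node.
    def build_single_fake_profile(ratings, target_item, n_filler=20,
                                  r_max=5.0, seed=0):
        rng = np.random.default_rng(seed)
        n_items = ratings.shape[1]
        popularity = (ratings > 0).sum(axis=0).astype(float)
        popularity[target_item] = 0.0        # never pick the target as filler
        p = popularity / popularity.sum()
        fillers = rng.choice(n_items, size=n_filler, replace=False, p=p)

        profile = np.zeros(n_items)
        profile[target_item] = r_max         # promote the target item
        for j in fillers:                    # rate fillers near the item mean
            observed = ratings[ratings[:, j] > 0, j]
            mu = observed.mean() if observed.size else r_max / 2
            profile[j] = np.clip(round(mu + rng.normal(0, 0.5)), 1, r_max)
        return profile

    # Injection is a single row append:
    # ratings = np.vstack([ratings, build_single_fake_profile(ratings, 42)])

A learned method like SUI-Attack generates the features and edges rather than sampling them heuristically; the point of the sketch is only how little a single injected node changes the graph, which is what makes detection hard.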
Predictability Issues in Recommender Systems Based on Web Usage Behavior towards Robust Collaborative Filtering
This paper examines security-oriented issues in recommender systems. Research has recently begun to evaluate the vulnerabilities and robustness of various collaborative recommendation techniques in the face of profile injection and shilling attacks, and standard collaborative filtering algorithms are known to be vulnerable to such attacks. This paper examines the robustness of recommender systems and the impact of attacks on them, and discusses predictability issues along with various attack strategies. The robustness of a KNN-based recommender system is examined, the sensitivity of user-supplied ratings is analyzed, and a robust PLSA approach is also considered.
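To make the robustness question concrete, here is a hedged sketch (our own notation, not the paper's code) of a user-based KNN predictor together with the prediction-shift metric commonly used in this literature: the mean change in a target item's predicted rating once attack profiles have been injected.

    import numpy as np

    def knn_predict(R, u, i, k=10):
        """User-based KNN prediction of user u's rating on item i (0 = unrated)."""
        neighbors = np.flatnonzero(R[:, i] > 0)
        neighbors = neighbors[neighbors != u]
        if neighbors.size == 0:
            return 0.0
        sims = np.array([  # cosine similarity between u and each candidate
            R[u] @ R[v] / (np.linalg.norm(R[u]) * np.linalg.norm(R[v]) + 1e-9)
            for v in neighbors])
        top = np.argsort(sims)[-k:]
        w, idx = sims[top], neighbors[top]
        return float(w @ R[idx, i] / (np.abs(w).sum() + 1e-9))

    def prediction_shift(R_clean, R_attacked, users, item, k=10):
        """Mean change in the item's predicted rating after profile injection."""
        return float(np.mean([knn_predict(R_attacked, u, item, k) -
                              knn_predict(R_clean, u, item, k) for u in users]))

A large positive shift for a pushed item is the standard signal that the KNN recommender has been successfully manipulated.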
Stability of matrix factorization for collaborative filtering
We study the stability, vis-à-vis adversarial noise, of the matrix factorization algorithm for matrix completion. In particular, our results include: (I) we bound the gap between the solution matrix of the factorization method and the ground truth in terms of root mean square error; (II) we treat matrix factorization as a subspace fitting problem and analyze the difference between the solution subspace and the ground truth; (III) we analyze the prediction error of individual users based on subspace stability. We apply these results to the problem of collaborative filtering under manipulator attack, which leads to useful insights and guidelines for collaborative filtering system design.
Comment: ICML 2012.
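As a notational anchor (our sketch, not the paper's exact statements), the quantities in (I) can be written as follows, where X is the ground-truth m x n matrix and the solver sees entries of Y = X + E on an observed set Omega, with E the adversarial noise:

    % Hedged notation sketch, not the paper's exact theorem.
    \hat{X} \;=\; \hat{U}\hat{V}^{\top},\qquad
    (\hat{U},\hat{V}) \;\in\; \arg\min_{U,V}\ \sum_{(i,j)\in\Omega}\bigl(Y_{ij}-(UV^{\top})_{ij}\bigr)^{2},
    \qquad
    \mathrm{RMSE}(\hat{X},X) \;=\; \frac{\lVert \hat{X}-X\rVert_{F}}{\sqrt{mn}} .

Result (I) bounds this RMSE in terms of the noise, (II) compares the column space of the solution with that of X, and (III) converts that subspace comparison into per-user prediction-error statements.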
Robust Recommender System: A Survey and Future Directions
With the rapid growth of information, recommender systems have become
integral for providing personalized suggestions and overcoming information
overload. However, their practical deployment often encounters "dirty" data,
where noise or malicious information can lead to abnormal recommendations.
Research on improving recommender systems' robustness against such dirty data
has thus gained significant attention. This survey provides a comprehensive
review of recent work on recommender systems' robustness. We first present a
taxonomy to organize current techniques for withstanding malicious attacks and
natural noise. We then explore state-of-the-art methods in each category, including fraudster detection, adversarial training, and certifiably robust training against malicious attacks, as well as regularization, purification, and self-supervised learning against natural noise. Additionally, we summarize
evaluation metrics and common datasets used to assess robustness. We discuss
robustness across varying recommendation scenarios and its interplay with other
properties like accuracy, interpretability, privacy, and fairness. Finally, we
delve into open issues and future research directions in this emerging field.
Our goal is to equip readers with a holistic understanding of robust recommender systems and to spotlight pathways for future research and development.
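As one concrete instance of the adversarial-training category, here is a hedged numpy sketch (our own toy, in the spirit of APR-style embedding perturbation rather than any specific surveyed system): each step builds an FGSM-style worst-case perturbation of the item embeddings and then descends the loss evaluated at the perturbed point.

    import numpy as np

    # Toy matrix-factorization loss: R is the rating matrix, mask marks
    # observed entries, P/Q are user/item embeddings (our illustration).
    def mf_grads(P, Q, R, mask):
        err = mask * (R - P @ Q.T)
        return -err @ Q, -err.T @ P          # dL/dP, dL/dQ

    def adversarial_step(P, Q, R, mask, eps=0.1, lr=0.01):
        _, gQ = mf_grads(P, Q, R, mask)
        # FGSM-style perturbation: move item embeddings to increase the loss.
        delta = eps * gQ / (np.linalg.norm(gQ) + 1e-9)
        # Descend the loss evaluated at the perturbed embeddings.
        gP_adv, gQ_adv = mf_grads(P, Q + delta, R, mask)
        return P - lr * gP_adv, Q - lr * gQ_adv

Training on the perturbed point forces the model to keep its predictions stable under small embedding-space attacks, which is the core intuition behind this branch of the taxonomy.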
Attacking Recommender Systems with Augmented User Profiles
Recommendation Systems (RS) have become an essential part of many online services. Due to their pivotal role in guiding customers towards purchases, there is a natural motivation for unscrupulous parties to spoof RS for profit. In this paper, we study the shilling attack: a persistent and profitable attack in which an adversarial party injects a number of user profiles to promote or demote a target item. Conventional shilling attack models are based on simple heuristics that can be easily detected, or they directly adopt adversarial attack methods without any design specific to RS. Moreover, the literature lacks studies of attack impact on deep learning based RS, leaving the effects of shilling attacks against real RS in doubt. We present a novel Augmented Shilling Attack framework (AUSH) and implement it with the idea of Generative Adversarial Networks (GANs). AUSH is capable of tailoring attacks against RS according to a budget and complex attack goals, such as targeting a specific user group. We experimentally show that the attack impact of AUSH is noticeable on a wide range of RS, including both classic and modern deep learning based RS, while remaining virtually undetectable by a state-of-the-art attack detection model.
Comment: CIKM 2020. 10 pages, 2 figures.
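To sketch the GAN idea (our own toy, not the released AUSH code; all shapes and names are hypothetical): a generator maps noise to a fake rating profile with the target item forced to the maximum rating, while a discriminator tries to tell fake profiles from genuine ones.

    import numpy as np

    rng = np.random.default_rng(0)
    n_items, z_dim, h = 1000, 64, 128
    Wg1, Wg2 = rng.normal(0, .1, (z_dim, h)), rng.normal(0, .1, (h, n_items))
    Wd1, Wd2 = rng.normal(0, .1, (n_items, h)), rng.normal(0, .1, (h, 1))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def generator(z, target_item, r_max=5.0):
        profile = r_max * sigmoid(np.tanh(z @ Wg1) @ Wg2)  # ratings in (0, r_max)
        profile[target_item] = r_max                       # hard-coded promotion
        return profile

    def discriminator(profile):
        return sigmoid(np.tanh(profile @ Wd1) @ Wd2)       # P(profile is genuine)

    # Training alternates the usual GAN objectives: the discriminator
    # maximizes log D(real) + log(1 - D(G(z))); the generator minimizes
    # log(1 - D(G(z))) plus an attack term rewarding the target item's
    # predicted rating on the victim RS.

The adversarial objective is what pushes generated profiles toward the distribution of real ones, which is exactly why such attacks evade detectors tuned to heuristic attack templates.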
RecAD: Towards A Unified Library for Recommender Attack and Defense
In recent years, recommender systems have become a ubiquitous part of our daily lives, yet they face a high risk of being attacked due to their growing commercial and social value. Despite significant research progress in
recommender attack and defense, there is a lack of a widely-recognized
benchmarking standard in the field, leading to unfair performance comparison
and limited credibility of experiments. To address this, we propose RecAD, a
unified library aiming at establishing an open benchmark for recommender attack
and defense. RecAD takes an initial step to set up a unified benchmarking
pipeline for reproducible research by integrating diverse datasets, standard
source codes, hyper-parameter settings, running logs, attack knowledge, attack
budget, and evaluation results. The benchmark is designed to be comprehensive and sustainable, covering attack, defense, and evaluation tasks, enabling
more researchers to easily follow and contribute to this promising field. RecAD
will drive more solid and reproducible research on recommender systems attack
and defense, reduce the redundant efforts of researchers, and ultimately
increase the credibility and practical value of recommender attack and defense.
The project is released at https://github.com/gusye1234/recad
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
E-commerce platforms provide their customers with ranked lists of recommended
items matching the customers' preferences. Merchants on e-commerce platforms
would like their items to appear as high as possible in the top-N of these
ranked lists. In this paper, we demonstrate how unscrupulous merchants can
create item images that artificially promote their products, improving their
rankings. Recommender systems that use images to address the cold start problem
are vulnerable to this security risk. We describe a new type of attack,
Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N
recommenders: the ranking mechanism itself. Existing work on adversarial images
in recommender systems investigates the implications of conventional attacks,
which target deep learning classifiers. In contrast, our AIP attacks are
embedding attacks that seek to push feature representations in a way that fools the ranker (not a classifier) and leads directly to item promotion. We introduce three AIP attacks (insider attack, expert attack, and semantic attack),
which are defined with respect to three successively more realistic attack
models. Our experiments evaluate the danger of these attacks when mounted
against three representative visually-aware recommender algorithms in a
framework that uses images to address cold start. We also evaluate two common
defenses against adversarial images in the classification scenario and show
that these simple defenses do not eliminate the danger of AIP attacks. In sum,
we show that using images to address cold start opens recommender systems to
potential threats with clear practical implications. To facilitate future
research, we release an implementation of our attacks and defenses, which
allows reproduction and extension.
Comment: Our code is available at https://github.com/liuzrcc/AI
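For a sense of what an embedding attack looks like mechanically, here is a hedged sketch (a linear stand-in for the CNN, not the authors' released attacks; names and parameters are our own): PGD steps that nudge the item image so its feature vector moves toward a popular "anchor" embedding, raising the item's score under similarity-based ranking.

    import numpy as np

    def embed(img, W):                  # linear stand-in for a CNN extractor
        return W @ img.ravel()

    def promote(img, W, anchor, eps=8/255, alpha=2/255, steps=10):
        """PGD in image space to pull embed(img) toward `anchor` (hypothetical)."""
        x = img.copy()
        for _ in range(steps):
            g = 2 * W.T @ (embed(x, W) - anchor)     # grad of ||embed(x)-anchor||^2
            x = x - alpha * np.sign(g).reshape(img.shape)  # signed descent step
            x = np.clip(x, img - eps, img + eps)     # stay in the L_inf ball
            x = np.clip(x, 0.0, 1.0)                 # remain a valid image
        return x

Because the objective targets the ranker's feature space rather than a classifier's decision boundary, defenses tuned to classification-style adversarial images need not transfer, which matches the paper's finding.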