Chiron: A Robust Recommendation System with Graph Regularizer
Recommendation systems have been widely used by commercial service providers
for giving suggestions to users. Collaborative filtering (CF) systems, one of
the most popular recommendation systems, utilize the history of behaviors of
the aggregate user-base to provide individual recommendations and are effective
when almost all users faithfully express their opinions. However, they are
vulnerable to malicious users biasing their inputs in order to change the
overall ratings of a specific group of items. CF systems largely fall into two
categories - neighborhood-based and (matrix) factorization-based - and the
presence of adversarial input can influence recommendations in both categories,
leading to instabilities in estimation and prediction. Although the robustness
of different collaborative filtering algorithms has been extensively studied,
designing an efficient system that is immune to manipulation remains a
significant challenge. In this work we propose a novel "hybrid" recommendation
system with an adaptive graph-based user/item similarity-regularization -
"Chiron". Chiron ties the performance benefits of dimensionality reduction
(through factorization) with the advantage of neighborhood clustering (through
regularization). We demonstrate, using extensive comparative experiments, that
Chiron is resistant to manipulation by large and lethal attacks.
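The abstract does not spell out Chiron's objective, but the combination it describes, matrix factorization plus a graph-based similarity regularizer, is conventionally written as minimizing the observed reconstruction error plus a Laplacian penalty that pulls neighboring users' latent factors together. A minimal sketch under that assumption (all names, the plain-gradient solver, and the hyper-parameters are illustrative, not the paper's):

```python
import numpy as np

def factorize_graph_reg(R, mask, L, k=2, lam=0.1, lr=0.01, iters=1000, seed=0):
    """Factorize R ~ U @ V.T on observed entries (mask == 1), with a
    user-graph Laplacian penalty lam * tr(U.T @ L @ U) that smooths the
    factors of users who are neighbors in the similarity graph."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    for _ in range(iters):
        E = mask * (U @ V.T - R)            # error on observed ratings only
        U -= lr * (E @ V + lam * (L @ U))   # data term + graph smoothing term
        V -= lr * (E.T @ U)
    return U, V
```

The graph term is what gives the "neighborhood clustering" effect the abstract mentions: a lone shilling profile with few graph neighbors gets little say in where its factor vector lands.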
Method For Detecting Shilling Attacks In E-commerce Systems Using Weighted Temporal Rules
The problem of detecting shilling attacks in e-commerce systems is considered. The purpose of such attacks is to artificially change the rating of individual goods or services in order to increase their sales. A method for detecting shilling attacks is proposed, based on comparing weighted temporal rules for the processes of selecting objects with explicit and implicit user feedback. Implicit dependencies are captured through purchases of goods and services; explicit feedback is formed through the ratings of those products. The temporal rules describe hidden relationships between the choices of user groups over two consecutive time intervals. The method comprises constructing temporal rules for explicit and implicit feedback, comparing them, and forming an ordered subset of temporal rules that capture potential shilling attacks. It requires the input sales and rating data to be ordered in time or to carry timestamps. The method can be used in combination with other approaches to detecting shilling attacks; integrating approaches makes it possible to refine and supplement existing attack patterns, taking into account the latest changes in user priorities.
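The weighted temporal rules themselves are not specified in the abstract. As a much-simplified, hypothetical illustration of the underlying idea, flagging items whose explicit feedback (ratings) diverges from their implicit feedback (purchases), one might compare per-item feedback shares within a time window (all function names and the threshold are assumptions, and the two-interval rule structure is omitted for brevity):

```python
from collections import Counter

def _shares(events):
    """Per-item share of all events; events are (timestamp, item) pairs."""
    counts = Counter(item for _, item in events)
    total = sum(counts.values()) or 1
    return {item: c / total for item, c in counts.items()}

def divergence_suspects(purchases, ratings, threshold=0.5):
    """Flag items whose share of explicit feedback (ratings) far exceeds
    their share of implicit feedback (purchases) in the same window: a
    crude stand-in for comparing weighted temporal rules."""
    p_share = _shares(purchases)
    r_share = _shares(ratings)
    return sorted(item for item, r in r_share.items()
                  if r - p_share.get(item, 0.0) > threshold)
```

An item that accumulates many ratings without a matching purchase pattern is exactly the kind of mismatch the rule-comparison step in the paper is designed to surface.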
Single-User Injection for Invisible Shilling Attack against Recommender Systems
Recommendation systems (RS) are crucial for alleviating the information
overload problem. Due to its pivotal role in guiding users to make decisions,
unscrupulous parties are lured to launch attacks against RS to affect the
decisions of normal users and gain illegal profits. Among various types of
attacks, shilling attack is one of the most subsistent and profitable attacks.
In shilling attack, an adversarial party injects a number of well-designed fake
user profiles into the system to mislead RS so that the attack goal can be
achieved. Although existing shilling attack methods have achieved promising
results, they all adopt the attack paradigm of multi-user injection, where some
fake user profiles are required. This paper provides the first study of
shilling attack in an extremely limited scenario: only one fake user profile is
injected into the victim RS to launch shilling attacks (i.e., single-user
injection). We propose a novel single-user injection method SUI-Attack for
invisible shilling attack. SUI-Attack is a graph based attack method that
models shilling attack as a node generation task over the user-item bipartite
graph of the victim RS, and it constructs the fake user profile by generating
user features and edges that link the fake user to items. Extensive experiments
demonstrate that SUI-Attack can achieve promising attack results in single-user
injection. In addition to its attack power, SUI-Attack increases the
stealthiness of shilling attack and reduces the risk of being detected. We
provide our implementation at: https://github.com/KDEGroup/SUI-Attack. (CIKM 2023. 10 pages, 5 figures.)
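SUI-Attack's graph-based profile generator is beyond an abstract-level sketch, but the step it builds on, appending a single fake profile and measuring the resulting prediction shift on the target item, can be illustrated with a toy item-mean predictor. The profile construction and predictor here are deliberate simplifications, not the paper's method:

```python
import numpy as np

def single_user_shift(R, target, filler, rating_max=5.0):
    """Append one fake profile that max-rates the target item plus some
    filler items, then return the change in the target's item-mean
    prediction. R is a user x item rating matrix with 0 = unrated."""
    fake = np.zeros(R.shape[1])
    fake[target] = rating_max
    fake[filler] = rating_max
    R_atk = np.vstack([R, fake])

    def item_mean(M, j):
        col = M[:, j]
        rated = col > 0
        return col[rated].mean() if rated.any() else 0.0

    return item_mean(R_atk, target) - item_mean(R, target)
```

With only one injected row, the shift shrinks as the user base grows, which is why the single-user setting in the paper requires a carefully generated profile rather than naive max-rating.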
RecAD: Towards A Unified Library for Recommender Attack and Defense
In recent years, recommender systems have become a ubiquitous part of our
daily lives, while they suffer from a high risk of being attacked due to the
growing commercial and social values. Despite significant research progress in
recommender attack and defense, there is a lack of a widely-recognized
benchmarking standard in the field, leading to unfair performance comparison
and limited credibility of experiments. To address this, we propose RecAD, a
unified library aiming at establishing an open benchmark for recommender attack
and defense. RecAD takes an initial step to set up a unified benchmarking
pipeline for reproducible research by integrating diverse datasets, standard
source codes, hyper-parameter settings, running logs, attack knowledge, attack
budget, and evaluation results. The benchmark is designed to be comprehensive
and sustainable, covering attack, defense, and evaluation tasks, enabling
more researchers to easily follow and contribute to this promising field. RecAD
will drive more solid and reproducible research on recommender systems attack
and defense, reduce the redundant efforts of researchers, and ultimately
increase the credibility and practical value of recommender attack and defense.
The project is released at https://github.com/gusye1234/recad.
Robust Recommender System: A Survey and Future Directions
With the rapid growth of information, recommender systems have become
integral for providing personalized suggestions and overcoming information
overload. However, their practical deployment often encounters "dirty" data,
where noise or malicious information can lead to abnormal recommendations.
Research on improving recommender systems' robustness against such dirty data
has thus gained significant attention. This survey provides a comprehensive
review of recent work on recommender systems' robustness. We first present a
taxonomy to organize current techniques for withstanding malicious attacks and
natural noise. We then explore state-of-the-art methods in each category,
including fraudster detection, adversarial training, certifiable robust
training against malicious attacks, and regularization, purification,
self-supervised learning against natural noise. Additionally, we summarize
evaluation metrics and common datasets used to assess robustness. We discuss
robustness across varying recommendation scenarios and its interplay with other
properties like accuracy, interpretability, privacy, and fairness. Finally, we
delve into open issues and future research directions in this emerging field.
Our goal is to equip readers with a holistic understanding of robust
recommender systems and spotlight pathways for future research and development.
Shilling Black-box Review-based Recommender Systems through Fake Review Generation
Review-Based Recommender Systems (RBRS) have attracted increasing research
interest due to their ability to alleviate well-known cold-start problems. RBRSs
utilize reviews to construct user and item representations. However, we argue
that this reliance on reviews may instead expose the system to the risk of being
shilled. To explore this possibility, we propose the first generation-based
model for shilling attacks against RBRSs.
Specifically, we learn a fake review generator through reinforcement learning,
which maliciously promotes items by forcing prediction shifts after adding
generated reviews to the system. By introducing the auxiliary rewards to
increase text fluency and diversity with the aid of pre-trained language models
and aspect predictors, the generated reviews can be effective for shilling with
high fidelity. Experimental results demonstrate that the proposed framework can
successfully attack three different kinds of RBRSs on the Amazon corpus with
three domains and Yelp corpus. Furthermore, human studies also show that the
generated reviews are fluent and informative. Finally, equipped with Attack
Review Generators (ARGs), RBRSs with adversarial training are much more robust
to malicious reviews.
How Fraudster Detection Contributes to Robust Recommendation
The adversarial robustness of recommendation systems under node injection
attacks has received considerable research attention. Recently, a robust
recommendation system, GraphRfi, was proposed, and it was shown that GraphRfi
could successfully mitigate the effects of injected fake users in the system.
Unfortunately, we demonstrate that GraphRfi is still vulnerable to attacks due
to the supervised nature of its fraudster detection component. Specifically, we
propose a new attack, metaC, against GraphRfi, and further analyze why GraphRfi
fails under such an attack. Based on the insights we obtained from the
vulnerability analysis, we build a new robust recommendation system PDR by
re-designing the fraudster detection component. Comprehensive experiments show
that our defense approach outperforms other benchmark methods under attacks.
Overall, our research demonstrates an effective framework of integrating
fraudster detection into recommendation to achieve adversarial robustness.
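The abstract does not describe PDR's re-designed detector, but the general idea of a fraudster-detection component can be illustrated with a classic unsupervised heuristic: scoring users by how far their ratings deviate from the per-item consensus. This is a standard baseline, not the PDR detector, and all names are illustrative:

```python
import numpy as np

def deviation_scores(R):
    """Score each user by the mean absolute deviation of their ratings
    from the per-item averages (0 = unrated). Shilling profiles that push
    items away from the consensus tend to score high."""
    mask = (R > 0).astype(float)
    denom = np.maximum(mask.sum(0), 1)
    item_mean = np.where(mask.sum(0) > 0, R.sum(0) / denom, 0.0)
    dev = np.abs(R - item_mean) * mask          # deviation on rated entries
    return dev.sum(1) / np.maximum(mask.sum(1), 1)
```

A purely supervised detector trained on labeled fraudsters, like the one the paper attacks in GraphRfi, can be misled by profiles crafted to look benign; the paper's point is that the detection component itself must be robust.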