RecAD: Towards A Unified Library for Recommender Attack and Defense
In recent years, recommender systems have become a ubiquitous part of our
daily lives, yet they face a high risk of being attacked due to their growing
commercial and social value. Despite significant research progress in
recommender attack and defense, there is a lack of a widely-recognized
benchmarking standard in the field, leading to unfair performance comparison
and limited credibility of experiments. To address this, we propose RecAD, a
unified library aiming at establishing an open benchmark for recommender attack
and defense. RecAD takes an initial step to set up a unified benchmarking
pipeline for reproducible research by integrating diverse datasets, standard
source codes, hyper-parameter settings, running logs, attack knowledge, attack
budget, and evaluation results. The benchmark is designed to be comprehensive
and sustainable, covering attack, defense, and evaluation tasks, enabling
more researchers to easily follow and contribute to this promising field. RecAD
will drive more solid and reproducible research on recommender systems attack
and defense, reduce the redundant efforts of researchers, and ultimately
increase the credibility and practical value of recommender attack and defense.
The project is released at https://github.com/gusye1234/recad
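As a hedged illustration of what such a pipeline has to standardize, the sketch below (our own names, not the recad API) fixes the attack knowledge and budget in a config object, implements the classic random-injection baseline, and measures the hit ratio of the pushed item; every identifier here is a hypothetical stand-in.

```python
# Hypothetical sketch of a unified attack benchmark setup; not the recad API.
from dataclasses import dataclass
from typing import Callable, Dict, List
import random


@dataclass
class AttackConfig:
    """Attack knowledge and budget, fixed up front so runs stay comparable."""
    knowledge: str = "partial"      # assumed levels: "full", "partial", "black-box"
    n_fake_users: int = 50          # injection budget: number of fake profiles
    n_filler_items: int = 30        # per-profile budget: ratings besides the target
    target_item: int = 0
    seed: int = 2024


def random_injection(train: Dict[int, Dict[int, float]], cfg: AttackConfig) -> Dict[int, Dict[int, float]]:
    """Baseline random attack: each fake user rates the target 5.0 plus random fillers."""
    rng = random.Random(cfg.seed)
    items = sorted({i for ratings in train.values() for i in ratings})
    next_uid = max(train) + 1
    poisoned = {u: dict(r) for u, r in train.items()}
    for k in range(cfg.n_fake_users):
        fillers = rng.sample(items, min(cfg.n_filler_items, len(items)))
        profile = {i: float(rng.randint(1, 5)) for i in fillers}
        profile[cfg.target_item] = 5.0
        poisoned[next_uid + k] = profile
    return poisoned


def hit_ratio_at_k(recommend: Callable[[int], List[int]], users: List[int], target: int, k: int = 10) -> float:
    """Fraction of genuine users whose top-k recommendation list contains the pushed item."""
    hits = sum(1 for u in users if target in recommend(u)[:k])
    return hits / max(len(users), 1)
```

Pinning the knowledge level, injection budget, and evaluation metric in one place is exactly the kind of detail that, when left unspecified, makes attack results from different papers incomparable.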
Analysis of malicious input issues on intelligent systems
Intelligent systems can facilitate decision making and have been widely applied to various domains. The output of intelligent systems relies on the users' input. However, with the development of web-based interfaces, users can easily provide dishonest input, and the accuracy of the resulting decisions suffers. This dissertation presents three essays that discuss defense solutions for malicious input into three types of intelligent systems: expert systems, recommender systems, and rating systems. Different methods are proposed in each domain based on the nature of each problem.
The first essay addresses the input distortion issue in expert systems. It develops four methods to distinguish liars from truth-tellers and redesigns the expert systems to control the impact of input distortion by liars. Experimental results show that the proposed methods lead to better accuracy or lower misclassification cost.
The second essay addresses the shilling attack issue in recommender systems. It proposes an integrated Value-based Neighbor Selection (VNS) approach, which selects proper neighbors for the recommender system so as to maximize the e-retailer's profit while protecting the system from shilling attacks. Simulations are conducted to demonstrate the effectiveness of the proposed method.
The third essay addresses the rating fraud issue in rating systems. It designs a two-phase procedure for rating fraud detection based on temporal analysis of the rating series. Experiments on real-world data are used to evaluate the effectiveness of the proposed method.
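The two-phase temporal idea can be made concrete with a small sketch. The code below is our own simplified reading, not the dissertation's procedure; the window length, the z-score threshold, and the "only active in flagged windows" rule are illustrative assumptions.

```python
# Minimal sketch of two-phase temporal screening for rating fraud (assumed design).
from statistics import mean, pstdev
from typing import Dict, List, Tuple

Rating = Tuple[int, float, int]  # (user_id, score, timestamp)


def flag_windows(ratings: List[Rating], window: int = 7 * 24 * 3600, z_thresh: float = 2.0) -> List[int]:
    """Phase 1: return start times of windows whose mean rating is a z-score outlier."""
    if not ratings:
        return []
    start = min(t for _, _, t in ratings)
    buckets: Dict[int, List[float]] = {}
    for _, score, t in ratings:
        buckets.setdefault((t - start) // window, []).append(score)
    means = [mean(v) for v in buckets.values()]
    mu, sigma = mean(means), (pstdev(means) or 1.0)
    return [start + b * window for b, v in buckets.items() if abs(mean(v) - mu) / sigma > z_thresh]


def suspicious_users(ratings: List[Rating], flagged_starts: List[int], window: int = 7 * 24 * 3600) -> List[int]:
    """Phase 2: users whose ratings for this item all fall inside flagged windows."""
    flagged = set(flagged_starts)
    by_user: Dict[int, List[int]] = {}
    for uid, _, t in ratings:
        by_user.setdefault(uid, []).append(t)

    def in_flagged(t: int) -> bool:
        return any(s <= t < s + window for s in flagged)

    return [uid for uid, ts in by_user.items() if ts and all(in_flagged(t) for t in ts)]
```

Splitting detection into a cheap window-level scan followed by a user-level check keeps the expensive per-user analysis confined to the few time spans where the rating series actually looks anomalous.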
Shilling Black-box Review-based Recommender Systems through Fake Review Generation
Review-Based Recommender Systems (RBRS) have attracted increasing research
interest due to their ability to alleviate well-known cold-start problems. RBRSs
utilize reviews to construct user and item representations. However, in this
paper we argue that such a reliance on reviews may instead expose systems to the
risk of being shilled. To explore this possibility, we propose the first
generation-based model for shilling attacks against RBRSs.
Specifically, we learn a fake review generator through reinforcement learning,
which maliciously promotes items by forcing prediction shifts after adding
generated reviews to the system. By introducing auxiliary rewards to
increase text fluency and diversity with the aid of pre-trained language models
and aspect predictors, the generated reviews can be effective for shilling with
high fidelity. Experimental results demonstrate that the proposed framework can
successfully attack three different kinds of RBRSs on the Amazon corpus with
three domains and the Yelp corpus. Furthermore, human studies also show that the
generated reviews are fluent and informative. Finally, equipped with Attack
Review Generators (ARGs), RBRSs with adversarial training are much more robust
to malicious reviews.
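A rough sketch of the reward shaping described above, under our own naming rather than the paper's code: the generator is rewarded for the prediction shift it induces on the victim RBRS, with auxiliary fluency and aspect terms. All callables are hypothetical stand-ins for the victim model, a pre-trained language model, and an aspect predictor.

```python
# Hedged sketch of a shilling reward for a policy-gradient review generator (assumed form).
from typing import Callable


def shilling_reward(
    review: str,
    target_item: int,
    predict_before: Callable[[int], float],       # victim RBRS score without the fake review
    predict_after: Callable[[int, str], float],   # victim RBRS score after injecting it
    fluency: Callable[[str], float],              # e.g. LM-based fluency score, normalized to [0, 1]
    aspect_score: Callable[[str], float],         # fraction of desired aspects the review mentions
    alpha: float = 1.0,
    beta: float = 0.3,
    gamma: float = 0.2,
) -> float:
    """Scalar reward combining prediction shift with fluency and aspect coverage."""
    shift = predict_after(target_item, review) - predict_before(target_item)
    return alpha * shift + beta * fluency(review) + gamma * aspect_score(review)
```

The auxiliary terms matter because a reward built on prediction shift alone tends to produce degenerate, easily detected text; weighting fluency and aspect coverage keeps the generated reviews plausible to both filters and human readers.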
How Fraudster Detection Contributes to Robust Recommendation
The adversarial robustness of recommendation systems under node injection
attacks has received considerable research attention. Recently, a robust
recommendation system GraphRfi was proposed, and it was shown that GraphRfi
could successfully mitigate the effects of injected fake users in the system.
Unfortunately, we demonstrate that GraphRfi is still vulnerable to attacks due
to the supervised nature of its fraudster detection component. Specifically, we
propose a new attack, metaC, against GraphRfi and further analyze why GraphRfi
fails under such an attack. Based on the insights obtained from the
vulnerability analysis, we build a new robust recommendation system, PDR, by
re-designing the fraudster detection component. Comprehensive experiments show
that our defense approach outperforms other benchmark methods under attacks.
Overall, our research demonstrates an effective framework for integrating
fraudster detection into recommendation to achieve adversarial robustness.
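As a hedged sketch of the general recipe, not of PDR itself, the snippet below couples a fraudster detector with the recommender by down-weighting each user's rating loss by the detector's probability that the user is genuine; the names and the simple squared-error form are our assumptions.

```python
# Minimal sketch: fraud-aware reweighting of the recommendation loss (assumed formulation).
from typing import Dict, Tuple


def weighted_rating_loss(
    predictions: Dict[Tuple[int, int], float],   # (user, item) -> predicted rating
    observed: Dict[Tuple[int, int], float],      # (user, item) -> observed rating
    p_genuine: Dict[int, float],                 # user -> detector's probability the user is real
) -> float:
    """Squared rating error in which suspected fake users contribute less."""
    total, weight = 0.0, 0.0
    for (u, i), r in observed.items():
        w = p_genuine.get(u, 1.0)                # unknown users default to full weight
        total += w * (predictions.get((u, i), 0.0) - r) ** 2
        weight += w
    return total / max(weight, 1e-8)
```

The soundness of this coupling hinges on the detector's probabilities: if the detector is itself fooled by injected users (as the paper shows for a supervised detector), the reweighting passes the poisoned signal straight through to the recommender.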
Poisoning Attacks against Recommender Systems: A Survey
Modern recommender systems (RS) have seen substantial success, yet they
remain vulnerable to malicious activities, notably poisoning attacks. These
attacks involve injecting malicious data into the training datasets of RS,
thereby compromising their integrity and manipulating recommendation outcomes
to gain illicit profits. This survey provides a systematic and
up-to-date review of the research landscape on Poisoning Attacks against
Recommendation (PAR). A novel and comprehensive taxonomy is proposed,
categorizing existing PAR methodologies into three distinct categories:
Component-Specific, Goal-Driven, and Capability Probing. For each category, we
discuss its mechanism in detail, along with associated methods. Furthermore,
this paper highlights potential future research avenues in this domain.
Additionally, to facilitate and benchmark the empirical comparison of PAR, we
introduce an open-source library, ARLib, which encompasses a comprehensive
collection of PAR models and common datasets. The library is released at
https://github.com/CoderWZW/ARLib.
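Whichever branch of the taxonomy an attack falls into, most PAR papers share one evaluation loop: train on clean data, train again on poisoned data, and compare the target item's exposure. The sketch below is our own scaffolding, not ARLib's API; the fit callable and the dictionary-of-dictionaries data format are assumptions for illustration.

```python
# Hedged sketch of the standard before/after poisoning experiment (not ARLib code).
from typing import Callable, Dict, List

# a TrainFn fits a recommender and returns recommend(user, k) -> top-k item ids
TrainFn = Callable[[Dict[int, Dict[int, float]]], Callable[[int, int], List[int]]]


def exposure_shift(
    train_clean: Dict[int, Dict[int, float]],
    train_poisoned: Dict[int, Dict[int, float]],
    fit: TrainFn,
    target_item: int,
    k: int = 10,
) -> float:
    """Change in the fraction of genuine users recommended the target item."""
    real_users = list(train_clean)
    rec_clean, rec_poisoned = fit(train_clean), fit(train_poisoned)
    before = sum(target_item in rec_clean(u, k) for u in real_users) / max(len(real_users), 1)
    after = sum(target_item in rec_poisoned(u, k) for u in real_users) / max(len(real_users), 1)
    return after - before
```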
Securing Recommender System via Cooperative Training
Recommender systems are often susceptible to well-crafted fake profiles,
leading to biased recommendations. Among existing defense methods,
data-processing-based methods inevitably exclude normal samples, while
model-based methods struggle to achieve both generalization and robustness. To
this end, we suggest integrating data processing and robust modeling to
propose a general framework, Triple Cooperative Defense (TCD), which employs
three cooperative models that mutually enhance data and thereby improve
recommendation robustness. Furthermore, considering that existing attacks
struggle to balance bi-level optimization and efficiency, we revisit poisoning
attacks in recommender systems and introduce an efficient attack strategy,
Co-training Attack (Co-Attack), which cooperatively optimizes the attack
objective and model training, respecting the bi-level setting while
maintaining attack efficiency. Moreover, we reveal that a potential reason for
the insufficient threat of existing attacks is their default assumption of
optimizing attacks in undefended scenarios. This overly optimistic setting
limits the potential of attacks. Consequently, we put forth a Game-based
Co-training Attack (GCoAttack), which frames the proposed Co-Attack and TCD as a
game-theoretic process, thoroughly exploring Co-Attack's attack potential in the
cooperative training of attack and defense. Extensive experiments on three real
datasets demonstrate TCD's superiority in enhancing model robustness.
Additionally, we verify that the two proposed attack strategies significantly
outperform existing attacks, with game-based GCoAttack posing a greater
poisoning threat than Co-Attack.
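A minimal sketch of the alternating structure behind the co-training idea, in our own notation rather than the paper's implementation: instead of fully solving the inner training problem at every outer step of an exact bi-level formulation, updates to the fake profiles are interleaved with training steps of the victim (or a surrogate) model.

```python
# Hedged sketch of interleaved attack/model optimization (assumed scheme, not the paper's code).
from typing import Callable, TypeVar

Model = TypeVar("Model")
FakeData = TypeVar("FakeData")


def co_training_attack(
    model: Model,
    fake: FakeData,
    train_step: Callable[[Model, FakeData], Model],      # one epoch of victim training on clean + fake data
    attack_step: Callable[[FakeData, Model], FakeData],  # one update of the fake profiles against the model
    rounds: int = 50,
) -> FakeData:
    """Interleave model training and fake-profile optimization for a fixed number of rounds."""
    for _ in range(rounds):
        model = train_step(model, fake)   # inner problem: the victim or surrogate is trained
        fake = attack_step(fake, model)   # outer problem: fake data is nudged against the current model
    return fake
```

Interleaving the two updates is what trades exact bi-level optimality for efficiency, and running the same loop against a defended model (as in the game-based variant) removes the undefended-scenario assumption criticized above.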
- …