Chiron: A Robust Recommendation System with Graph Regularizer
Recommendation systems have been widely used by commercial service providers
for giving suggestions to users. Collaborative filtering (CF) systems, one of
the most popular recommendation systems, utilize the history of behaviors of
the aggregate user-base to provide individual recommendations and are effective
when almost all users faithfully express their opinions. However, they are
vulnerable to malicious users biasing their inputs in order to change the
overall ratings of a specific group of items. CF systems largely fall into two
categories - neighborhood-based and (matrix) factorization-based - and the
presence of adversarial input can influence recommendations in both categories,
leading to instabilities in estimation and prediction. Although the robustness
of different collaborative filtering algorithms has been extensively studied,
designing an efficient system that is immune to manipulation remains a
significant challenge. In this work we propose "Chiron", a novel hybrid
recommendation system with adaptive graph-based user/item similarity
regularization. Chiron ties the performance benefits of dimensionality
reduction (through factorization) to the advantages of neighborhood clustering
(through regularization). We demonstrate, using extensive comparative
experiments, that Chiron is resistant to manipulation even by large-scale,
damaging attacks.
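The abstract does not give Chiron's exact objective, but a graph-regularized factorization of this kind can be sketched as follows: alongside the usual squared reconstruction error and weight decay, a graph Laplacian term pulls the latent factors of similar users together. The toy data, the hyperparameters, and the plain gradient-descent solver below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix: 4 users x 3 items, 0 marks unobserved entries.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5],
              [0, 1, 4]], dtype=float)
mask = (R > 0).astype(float)

# Hypothetical user-user similarity graph (users 0-1 and 2-3 are similar).
S = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(S.sum(axis=1)) - S  # graph Laplacian of the similarity graph

k, lam, gamma, lr = 2, 0.05, 0.05, 0.01
U = 0.1 * rng.standard_normal((4, k))  # user latent factors
V = 0.1 * rng.standard_normal((3, k))  # item latent factors

def loss(U, V):
    # reconstruction error + weight decay + graph regularizer tr(U^T L U)
    err = mask * (R - U @ V.T)
    return ((err ** 2).sum()
            + lam * ((U ** 2).sum() + (V ** 2).sum())
            + gamma * np.trace(U.T @ L @ U))

initial_loss = loss(U, V)
for _ in range(1000):  # plain gradient descent on both factor matrices
    err = mask * (R - U @ V.T)
    U -= lr * (-2 * err @ V + 2 * lam * U + 2 * gamma * L @ U)
    V -= lr * (-2 * err.T @ U + 2 * lam * V)
final_loss = loss(U, V)
print(round(initial_loss, 2), round(final_loss, 2))
```

The Laplacian term penalizes differences between the factor vectors of graph-adjacent users, which is one standard way to fold neighborhood structure into a factorization model.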
Securing Recommender System via Cooperative Training
Recommender systems are often susceptible to well-crafted fake profiles,
leading to biased recommendations. Among existing defense methods,
data-processing-based methods inevitably exclude normal samples, while
model-based methods struggle to enjoy both generalization and robustness. To
this end, we suggest integrating data processing and the robust model to
propose a general framework, Triple Cooperative Defense (TCD), which employs
three cooperative models that mutually enhance data and thereby improve
recommendation robustness. Furthermore, considering that existing attacks
struggle to balance bi-level optimization and efficiency, we revisit poisoning
attacks in recommender systems and introduce an efficient attack strategy,
Co-training Attack (CoAttack), which cooperatively optimizes the attack
objective and model training, respecting the bi-level setting while
maintaining attack efficiency. Moreover, we reveal that a potential reason for
the insufficient threat of existing attacks is their default assumption of
optimizing attacks in undefended scenarios. This overly optimistic setting
limits the potential of attacks. Consequently, we put forth a Game-based
Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a
game-theoretic process, thoroughly exploring CoAttack's attack potential in the
cooperative training of attack and defense. Extensive experiments on three real
datasets demonstrate TCD's superiority in enhancing model robustness.
Additionally, we verify that the two proposed attack strategies significantly
outperform existing attacks, with game-based GCoAttack posing a greater
poisoning threat than CoAttack.
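The abstract describes TCD only at a high level: three cooperative models that mutually enhance each other's data. One way to sketch that idea is agreement-based pseudo-labeling: when two of the three predictors agree on an unobserved rating, the averaged estimate is handed to the third as extra training data. The three mean-based predictors and the tolerance below are stand-ins for the paper's actual models, purely for illustration.

```python
import numpy as np

# Toy rating matrix: 4 users x 3 items, 0 marks unobserved entries.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5],
              [0, 1, 4]], dtype=float)
obs = R > 0

def user_mean(R, obs):
    m = (R * obs).sum(1) / np.maximum(obs.sum(1), 1)
    return np.repeat(m[:, None], R.shape[1], axis=1)

def item_mean(R, obs):
    m = (R * obs).sum(0) / np.maximum(obs.sum(0), 1)
    return np.repeat(m[None, :], R.shape[0], axis=0)

def global_mean(R, obs):
    return np.full(R.shape, (R * obs).sum() / obs.sum())

# Three stand-in "models" playing the role of TCD's cooperative learners.
preds = [user_mean(R, obs), item_mean(R, obs), global_mean(R, obs)]

tol = 0.5
pseudo = {}  # (user, item, receiving_model) -> pseudo-rating
for a, b, c in [(0, 1, 2), (0, 2, 1), (1, 2, 0)]:
    # Where models a and b agree on an unobserved cell, label it for model c.
    agree = (~obs) & (np.abs(preds[a] - preds[b]) < tol)
    for i, j in zip(*np.where(agree)):
        pseudo[(i, j, c)] = (preds[a][i, j] + preds[b][i, j]) / 2

print(len(pseudo))
```

In a full implementation, each model would then be retrained on its enhanced data and the rounds repeated; only the one-round data-enhancement step is shown here.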
Analysis of malicious input issues on intelligent systems
Intelligent systems can facilitate decision making and have been widely applied to various domains. The output of intelligent systems relies on the users' input. However, with the spread of web-based interfaces, users can easily provide dishonest input, degrading the accuracy of the resulting decisions. This dissertation presents three essays that discuss defense solutions against malicious input in three types of intelligent systems: expert systems, recommender systems, and rating systems. Different methods are proposed in each domain based on the nature of each problem.
The first essay addresses the input distortion issue in expert systems. It develops four methods to distinguish liars from truth-tellers and redesigns the expert systems to control the impact of input distortion by liars. Experimental results show that the proposed methods lead to better accuracy or lower misclassification cost.
The second essay addresses the shilling attack issue in recommender systems. It proposes an integrated Value-based Neighbor Selection (VNS) approach, which aims to select proper neighbors for recommender systems that maximize the e-retailer's profit while protecting the system from shilling attacks. Simulations are conducted to demonstrate the effectiveness of the proposed method.
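The abstract does not spell out how VNS scores neighbors, but the trade-off it names (profit versus shilling protection) can be sketched as a composite neighbor score. Every quantity below (the similarity, profit, and suspicion vectors, and the multiplicative scoring rule) is a made-up illustration, not the essay's method.

```python
import numpy as np

# Hypothetical per-candidate-neighbor quantities:
similarity = np.array([0.9, 0.8, 0.7, 0.6, 0.5])  # rating similarity to the user
profit     = np.array([1.0, 3.0, 0.5, 2.0, 1.5])  # expected profit of their picks
suspicion  = np.array([0.8, 0.1, 0.2, 0.1, 0.3])  # shilling-detector score in [0, 1]

# Value-based score: favor similar, profitable neighbors, discount suspects.
score = similarity * profit * (1 - suspicion)
k = 2
neighbors = np.argsort(score)[::-1][:k]  # top-k candidates by composite score
print(sorted(neighbors.tolist()))
```

Note how neighbor 0, despite the highest similarity, is excluded by its high suspicion score; that is the kind of behavior a value-based selection rule is after.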
The third essay addresses the rating fraud issue in rating systems. It designs a two-phase procedure for rating fraud detection based on temporal analysis of the rating series. Experiments on real-world data are used to evaluate the effectiveness of the proposed method.
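The abstract gives only the shape of the method (two phases of temporal analysis on a rating series), so here is a minimal sketch under assumed details: phase 1 flags days with abnormally high rating volume, and phase 2 flags ratings in those days that sit far from the item's baseline mean. The thresholds and toy series are invented for illustration.

```python
import numpy as np

# Toy rating series for one item: days 5-6 see a suspicious burst of 5-star ratings.
days    = np.array([0, 1, 2, 3, 4, 5, 5, 5, 5, 6, 6, 6, 7, 8])
ratings = np.array([3, 4, 3, 2, 3, 5, 5, 5, 5, 5, 5, 5, 3, 2])

# Phase 1: flag days whose rating volume deviates from the historical mean.
uniq, counts = np.unique(days, return_counts=True)
mu, sigma = counts.mean(), counts.std()
flagged_days = uniq[counts > mu + sigma]

# Phase 2: within flagged days, flag ratings far from the item's baseline mean
# (baseline computed from the non-flagged portion of the series).
baseline = ratings[~np.isin(days, flagged_days)].mean()
suspect = np.isin(days, flagged_days) & (np.abs(ratings - baseline) > 1.0)

print(sorted(flagged_days.tolist()), int(suspect.sum()))
```

The two phases keep the expensive per-rating check confined to the few time windows that the cheap volume check has already flagged.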
Neural Collaborative Filtering Classification Model to Obtain Prediction Reliabilities
Neural collaborative filtering is the state-of-the-art approach in the recommender systems area; it provides models that obtain accurate predictions and recommendations. These models are regression-based, and they return only rating predictions. This paper proposes a classification-based approach that returns both rating predictions and their reliabilities. The extra information (prediction reliabilities) can be used in a variety of relevant collaborative filtering areas, such as detection of shilling attacks, recommendation explanation, or navigational tools to show user and item dependences. Additionally, recommendation reliabilities can be gracefully provided to users: “probably you will like this film”, “almost certainly you will like this song”, etc. This paper presents the proposed neural architecture and shows that the quality of its recommendation results is as good as state-of-the-art baselines. Remarkably, individual rating predictions are improved by the proposed architecture compared to the baselines. Experiments have been performed on four popular public datasets, showing generalizable quality results. Overall, the proposed architecture improves individual rating prediction quality, maintains recommendation results, and opens the door to a set of relevant collaborative filtering fields.
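The paper's idea of a classification head over discrete rating classes can be illustrated without the full network: a softmax over per-class logits yields both a predicted rating (the argmax class) and its reliability (the winning probability). The logits below are hypothetical stand-ins for a trained model's output.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical logits of a classification head, one per rating class 1..5.
logits = np.array([0.1, 0.3, 0.2, 2.5, 1.9])
classes = np.array([1, 2, 3, 4, 5])

p = softmax(logits)
pred = classes[p.argmax()]             # rating prediction (the winning class)
reliability = p.max()                  # confidence attached to that prediction
expected = float((classes * p).sum())  # smoother, regression-like estimate

print(pred, round(reliability, 2))
```

A low `reliability` on an otherwise plausible rating is exactly the signal the abstract proposes to exploit, e.g. for shilling-attack detection or for hedged wording in recommendations shown to users.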
Attacking Recommender Systems with Augmented User Profiles
Recommendation Systems (RS) have become an essential part of many online
services. Due to their pivotal role in guiding customers towards purchases,
there is a natural motivation for unscrupulous parties to spoof RS for profit.
In this paper, we study the shilling attack: a persistent and profitable attack
in which an adversarial party injects a number of user profiles to promote or
demote a target item. Conventional shilling attack models are based on simple
heuristics that can be easily detected, or directly adopt adversarial attack
methods without a special design for RS. Moreover, the impact of such attacks
on deep-learning-based RS is missing from the literature, leaving the effects
of shilling attacks against real RS in doubt. We present a novel
Augmented Shilling Attack framework (AUSH) and implement it with the idea of a
Generative Adversarial Network (GAN). AUSH is capable of tailoring attacks against RS
according to budget and complex attack goals, such as targeting a specific user
group. We experimentally show that the attack impact of AUSH is noticeable on a
wide range of RS including both classic and modern deep learning based RS,
while it is virtually undetectable by the state-of-the-art attack detection
model. Comment: CIKM 2020, 10 pages, 2 figures.
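AUSH's GAN machinery is not reproduced here, but the underlying profile-injection attack it augments can be sketched: fake users rate a few filler items near those items' observed means (to blend in with genuine profiles) and give the target item the maximum rating. Every name and number below is an illustrative assumption, not the AUSH generator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Genuine ratings: 20 users x 5 items, with a share of entries unobserved (0).
R = rng.integers(1, 6, size=(20, 5)).astype(float)
R[rng.random((20, 5)) > 0.6] = 0

target = 4  # item the attacker wants to promote

def inject_profiles(R, target, n_fake=5, n_filler=2):
    """Append n_fake shilling profiles: filler ratings near each filler
    item's observed mean (camouflage) plus a max rating for the target."""
    item_means = np.where(R.sum(0) > 0,
                          R.sum(0) / np.maximum((R > 0).sum(0), 1), 3.0)
    fakes = np.zeros((n_fake, R.shape[1]))
    for f in fakes:
        fillers = rng.choice([i for i in range(R.shape[1]) if i != target],
                             size=n_filler, replace=False)
        f[fillers] = np.clip(np.round(item_means[fillers]), 1, 5)
        f[target] = 5.0  # promotion: maximum rating for the target item
    return np.vstack([R, fakes])

def observed_mean(R, j):
    col = R[:, j]
    return col[col > 0].mean()

before = observed_mean(R, target)
after = observed_mean(inject_profiles(R, target), target)
print(before <= after)
```

Even this crude injection shifts the target item's observed mean upward; AUSH's contribution is making such profiles hard to distinguish from genuine ones while meeting budget and targeting constraints.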