132 research outputs found
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
E-commerce platforms provide their customers with ranked lists of recommended
items matching the customers' preferences. Merchants on e-commerce platforms
would like their items to appear as high as possible in the top-N of these
ranked lists. In this paper, we demonstrate how unscrupulous merchants can
create item images that artificially promote their products, improving their
rankings. Recommender systems that use images to address the cold start problem
are vulnerable to this security risk. We describe a new type of attack,
Adversarial Item Promotion (AIP), that strikes directly at the core of Top-N
recommenders: the ranking mechanism itself. Existing work on adversarial images
in recommender systems investigates the implications of conventional attacks,
which target deep learning classifiers. In contrast, our AIP attacks are
embedding attacks that seek to push feature representations in a way that
fools the ranker (not a classifier) and directly lead to item promotion. We
introduce three AIP attacks: insider attack, expert attack, and semantic attack,
which are defined with respect to three successively more realistic attack
models. Our experiments evaluate the danger of these attacks when mounted
against three representative visually-aware recommender algorithms in a
framework that uses images to address cold start. We also evaluate two common
defenses against adversarial images in the classification scenario and show
that these simple defenses do not eliminate the danger of AIP attacks. In sum,
we show that using images to address cold start opens recommender systems to
potential threats with clear practical implications. To facilitate future
research, we release an implementation of our attacks and defenses, which
allows reproduction and extension. Comment: Our code is available at https://github.com/liuzrcc/AI
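The embedding-attack idea can be illustrated with a minimal sketch: a dot-product ranker and gradient steps applied directly to the item's feature vector. Note the paper's actual attacks perturb pixels and propagate gradients through a CNN feature extractor; the catalog, user vector, and step sizes below are illustrative assumptions only.

```python
import numpy as np

def rank_of_item(item_vec, catalog, user_vec):
    # Rank under a dot-product ranker: 1 + number of catalog items
    # whose score for this user exceeds the item's score.
    return 1 + int(((catalog @ user_vec) > (item_vec @ user_vec)).sum())

def embedding_attack(item_vec, target_vec, step=0.1, iters=50):
    # Push the item's feature vector toward a highly ranked "target"
    # embedding by gradient descent on ||v - target||^2. In the real AIP
    # setting this gradient flows back through a CNN into image space;
    # here we perturb the embedding directly for illustration.
    v = item_vec.copy()
    for _ in range(iters):
        v += step * (target_vec - v)  # gradient of -0.5||v - t||^2 is (t - v)
    return v

rng = np.random.default_rng(0)
catalog = rng.normal(size=(100, 16))             # embeddings of known items
user_vec = rng.normal(size=16)                   # one user's preference vector
cold_item = 0.01 * rng.normal(size=16)           # cold-start item: weak features
target = catalog[np.argmax(catalog @ user_vec)]  # mimic the top-ranked item

before = rank_of_item(cold_item, catalog, user_vec)
after = rank_of_item(embedding_attack(cold_item, target), catalog, user_vec)
```

After the attack the cold-start item's embedding nearly coincides with the top-ranked item's, so its rank jumps toward the top of the list without any genuine user feedback — which is exactly the promotion effect the ranker-level attack exploits.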
Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders
While sequential recommender systems achieve significant improvements on
capturing user dynamics, we argue that sequential recommenders are vulnerable
against substitution-based profile pollution attacks. To demonstrate our
hypothesis, we propose a substitution-based adversarial attack algorithm, which
modifies the input sequence by selecting certain vulnerable elements and
substituting them with adversarial items. In both untargeted and targeted
attack scenarios, we observe significant performance deterioration using the
proposed profile pollution algorithm. Motivated by such observations, we design
an efficient adversarial defense method called Dirichlet neighborhood sampling.
Specifically, we sample item embeddings from a convex hull constructed by
multi-hop neighbors to replace the original items in input sequences. During
sampling, a Dirichlet distribution is used to approximate the probability
distribution in the neighborhood such that the recommender learns to combat
local perturbations. Additionally, we design an adversarial training method
tailored for sequential recommender systems. In particular, we represent
selected items with one-hot encodings and perform gradient ascent on the
encodings to search for the worst-case linear combination of item embeddings in
training. As such, the embedding function learns robust item representations
and the trained recommender is resistant to test-time adversarial examples.
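The one-hot gradient-ascent step described above can be sketched as follows. The toy linear loss, the step size, and the clip-and-renormalize simplex projection are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def worst_case_mixture(E, clean_id, grad_fn, step=0.5, iters=10):
    # Start from the item's one-hot encoding p and take gradient-ascent
    # steps on the loss w.r.t. p; projecting back onto the probability
    # simplex keeps p @ E a convex combination of item embeddings.
    p = np.zeros(E.shape[0])
    p[clean_id] = 1.0
    for _ in range(iters):
        g = E @ grad_fn(p @ E)  # chain rule: dL/dp = E · dL/d(embedding)
        p = np.clip(p + step * g, 0.0, None) + 1e-12  # ascent + numerical guard
        p /= p.sum()            # renormalize onto the simplex
    return p @ E

rng = np.random.default_rng(3)
E = 0.5 * rng.normal(size=(8, 4))  # toy item embedding table
w = rng.normal(size=4)             # toy loss L(e) = e @ w, so dL/de = w
clean = int(np.argmin(E @ w))      # start from the least "lossy" item
adv_emb = worst_case_mixture(E, clean, grad_fn=lambda e: w)
```

Training against `adv_emb` instead of `E[clean]` exposes the model to the most damaging mixture in the embedding simplex, which is what makes the learned item representations robust to test-time substitutions.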
Extensive experiments show the effectiveness of both our attack and defense
methods, which consistently outperform baselines by a significant margin across
model architectures and datasets. Comment: Accepted to RecSys 202
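The Dirichlet neighborhood sampling defense described in this abstract can be sketched as below. The neighbor map and the concentration parameter `alpha` are illustrative assumptions; in the paper, neighbors come from multi-hop relations in the interaction data.

```python
import numpy as np

def dirichlet_neighborhood_sample(item_id, embeddings, neighbors, alpha=1.0,
                                  rng=None):
    # Draw convex-combination weights from a Dirichlet distribution over the
    # item and its multi-hop neighbors; the weights are non-negative and sum
    # to 1, so the sample always lies inside the neighbors' convex hull.
    rng = rng or np.random.default_rng()
    ids = [item_id] + list(neighbors[item_id])
    w = rng.dirichlet(alpha * np.ones(len(ids)))
    return w @ embeddings[ids]

rng = np.random.default_rng(1)
emb = rng.normal(size=(10, 4))  # toy item embedding table
nbrs = {3: [1, 5, 7]}           # hypothetical multi-hop neighbors of item 3
sample = dirichlet_neighborhood_sample(3, emb, nbrs, rng=rng)
```

Substituting `sample` for `emb[3]` during training exposes the recommender to smooth local perturbations of the input sequence, which is how it learns to resist the substitution attack.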
How Fraudster Detection Contributes to Robust Recommendation
The adversarial robustness of recommendation systems under node injection
attacks has received considerable research attention. Recently, a robust
recommendation system GraphRfi was proposed, and it was shown that GraphRfi
could successfully mitigate the effects of injected fake users in the system.
Unfortunately, we demonstrate that GraphRfi is still vulnerable to attacks due
to the supervised nature of its fraudster detection component. Specifically, we
propose a new attack, metaC, against GraphRfi, and further analyze why GraphRfi
fails under such an attack. Based on the insights we obtained from the
vulnerability analysis, we build a new robust recommendation system, PDR, by
re-designing the fraudster detection component. Comprehensive experiments show
that our defense approach outperforms other benchmark methods under attacks.
Overall, our research demonstrates an effective framework for integrating
fraudster detection into recommendation to achieve adversarial robustness.
Single-User Injection for Invisible Shilling Attack against Recommender Systems
Recommendation systems (RS) are crucial for alleviating the information
overload problem. Due to their pivotal role in guiding users' decisions,
unscrupulous parties are tempted to launch attacks against RS to affect the
decisions of normal users and gain illegal profits. Among various types of
attacks, the shilling attack is one of the most persistent and profitable.
In a shilling attack, an adversarial party injects a number of well-designed fake
user profiles into the system to mislead RS so that the attack goal can be
achieved. Although existing shilling attack methods have achieved promising
results, they all adopt the attack paradigm of multi-user injection, where some
fake user profiles are required. This paper provides the first study of
shilling attack in an extremely limited scenario: only one fake user profile is
injected into the victim RS (i.e., single-user injection). We propose
SUI-Attack, a novel single-user injection method for invisible shilling
attacks. SUI-Attack is a graph-based attack method that
models shilling attack as a node generation task over the user-item bipartite
graph of the victim RS, and it constructs the fake user profile by generating
user features and edges that link the fake user to items. Extensive experiments
demonstrate that SUI-Attack can achieve promising attack results in single-user
injection. In addition to its attack power, SUI-Attack increases the
stealthiness of shilling attacks and reduces the risk of being detected. We
provide our implementation at: https://github.com/KDEGroup/SUI-Attack. Comment: CIKM 2023. 10 pages, 5 figure
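The single-profile idea can be illustrated with a deliberately simplified heuristic: pair the target item with popular fillers inside the one injected profile. SUI-Attack itself *learns* the user features and edges over the user-item bipartite graph; the popularity heuristic below is an assumption for illustration only.

```python
import numpy as np

def build_single_fake_user(target_item, interactions, n_fillers=4):
    # One injected profile: the target item plus the most popular "filler"
    # items, so co-occurrence-based recommenders start associating the
    # target with mainstream taste. Popularity = column sums of the
    # binary user-item interaction matrix.
    popularity = interactions.sum(axis=0)
    popularity[target_item] = -1  # never pick the target as its own filler
    fillers = np.argsort(popularity)[::-1][:n_fillers]
    return sorted([target_item, *fillers.tolist()])

rng = np.random.default_rng(2)
interactions = (rng.random((50, 20)) > 0.7).astype(int)  # 50 users, 20 items
profile = build_single_fake_user(7, interactions)        # promote item 7
```

Because only a single profile is added, the group-level statistics that shilling detectors typically rely on barely change — which is the stealthiness argument the abstract makes for single-user injection.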
Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
Recommender systems play an important role in modern information and
e-commerce applications. While increasing research is dedicated to improving
the relevance and diversity of the recommendations, the potential risks of
state-of-the-art recommendation models are under-explored; that is, these
models could be subject to attacks from malicious third parties who inject
fake user interactions to achieve their purposes. This paper revisits
the adversarially-learned injection attack problem, where the injected fake
user `behaviors' are learned locally by the attackers with their own model --
one that is potentially different from the model under attack, but shares
similar properties to allow attack transfer. We found that most existing works
in the literature suffer from two major limitations: (1) they do not solve the
optimization problem precisely, making the attack less harmful than it could
be, and (2) they assume perfect knowledge of the attacked system, leaving
realistic attack capabilities poorly understood. We demonstrate that the exact
solution for generating fake users as an optimization problem could lead to a
much larger impact. Our experiments on a real-world dataset reveal important
properties of the attack, including attack transferability and its limitations.
These findings can inspire useful defensive methods against such attacks.
Comment: Accepted at Recsys 2
Attacking Recommender Systems with Augmented User Profiles
Recommendation Systems (RS) have become an essential part of many online
services. Due to their pivotal role in guiding customers towards purchases,
there is a natural motivation for unscrupulous parties to spoof RS for profit.
In this paper, we study the shilling attack: a persistent and profitable attack
where an adversarial party injects a number of user profiles to promote or
demote a target item. Conventional shilling attack models are based on simple
heuristics that can be easily detected, or directly adopt adversarial attack
methods without a special design for RS. Moreover, studies of the attack
impact on deep-learning-based RS are missing from the literature, leaving the
effects of shilling attacks against real RS unclear. We present a novel
Augmented Shilling Attack framework (AUSH) and implement it with the idea of
Generative Adversarial Networks. AUSH is capable of tailoring attacks against RS
according to budget and complex attack goals, such as targeting a specific user
group. We experimentally show that the attack impact of AUSH is noticeable on a
wide range of RS including both classic and modern deep-learning-based RS,
while it is virtually undetectable by the state-of-the-art attack detection
model. Comment: CIKM 2020. 10 pages, 2 figure