Adversarial Training Towards Robust Multimedia Recommender System
With the prevalence of multimedia content on the Web, developing recommender solutions that can effectively leverage the rich signal in multimedia data is urgently needed. Owing to the success of deep neural networks in representation learning, recent advances in multimedia recommendation have largely focused on exploring deep learning methods to improve recommendation accuracy. To date, however, there has been little effort to investigate the robustness of multimedia representations and their impact on the performance of multimedia recommendation.
In this paper, we shed light on the robustness of multimedia recommender systems. Using a state-of-the-art recommendation framework and deep image features, we demonstrate that the overall system is not robust: a small (but purposeful) perturbation of the input image severely decreases recommendation accuracy. This reveals a possible weakness of multimedia recommender systems in predicting user preference and, more importantly, the potential for improvement by enhancing their robustness. To this end, we propose a novel solution named Adversarial Multimedia Recommendation (AMR), which yields a more robust multimedia recommender model through adversarial learning. The idea is to train the model to defend against an adversary that adds perturbations to the target image with the purpose of decreasing the model's accuracy. We conduct experiments on two representative multimedia recommendation tasks, namely image recommendation and visually-aware product recommendation. Extensive results verify the positive effect of adversarial learning and demonstrate the effectiveness of our AMR method. Source code is available at https://github.com/duxy-me/AMR.
Comment: TKD
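As a rough illustration of the adversarial training idea described above, the sketch below hardens a matrix-factorization-style scorer over deep image features against an FGSM-style perturbation. It is a minimal sketch, not the authors' implementation: the model shape, the single-step sign perturbation, and all names (`amr_step`, `E`, `lam`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d, f = 100, 200, 16, 32   # latent dim d, image-feature dim f
eps, lr, lam = 0.05, 0.01, 1.0              # perturbation budget, step size, adversarial weight

P = rng.normal(0, 0.1, (n_users, d))        # user embeddings
Q = rng.normal(0, 0.1, (n_items, d))        # item ID embeddings
E = rng.normal(0, 0.1, (f, d))              # projects image features into the latent space
X = rng.normal(0, 1.0, (n_items, f))        # fixed deep image features (e.g. CNN outputs)

def score(u, i, delta=0.0):
    # Preference score from (optionally perturbed) image features.
    return P[u] @ (Q[i] + (X[i] + delta) @ E)

def amr_step(u, i, j):
    # One BPR step on (user u, observed item i, unobserved item j), hardened
    # against an FGSM-style adversary acting on the positive item's image features.
    g = lambda diff: -1.0 / (1.0 + np.exp(diff))    # d(-log sigmoid)/d(diff)
    g_clean = g(score(u, i) - score(u, j))

    delta = eps * np.sign(g_clean * (E @ P[u]))     # adversary: maximize the loss
    g_adv = g(score(u, i, delta) - score(u, j))

    # SGD on clean loss + lam * adversarial loss (regularizers omitted for brevity).
    pu = P[u].copy()
    P[u] -= lr * (g_clean * (Q[i] + X[i] @ E - Q[j] - X[j] @ E)
                  + lam * g_adv * (Q[i] + (X[i] + delta) @ E - Q[j] - X[j] @ E))
    Q[i] -= lr * (g_clean + lam * g_adv) * pu
    Q[j] += lr * (g_clean + lam * g_adv) * pu

amr_step(0, 1, 2)   # one adversarial training step on a sampled triple
```

The key design point is that the perturbation is recomputed inside every step, so the model is always defending against the worst-case direction for its current parameters.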
A Novel Privacy-Preserved Recommender System Framework based on Federated Learning
Recommender systems (RSs) are currently an effective way to address information overload. To predict users' next click behavior, an RS needs to collect users' personal information and behavior to achieve a comprehensive and profound perception of user preferences. However, these centrally collected data are privacy-sensitive, and any leakage may cause severe problems for both users and service providers. This paper proposes a novel privacy-preserved recommender system framework (PPRSF) that applies the federated learning paradigm to enable the recommendation algorithm to be trained and carry out inference without centrally collecting users' private data. The PPRSF not only reduces the risk of privacy leakage and satisfies legal and regulatory requirements, but also allows various recommendation algorithms to be applied.
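To make the federated paradigm concrete, here is a minimal federated-averaging sketch for a factorization recommender: clients keep their ratings and user factors on-device and upload only item-embedding updates, which the server averages. This is a generic FedAvg illustration under assumed names (`local_update`, `item_emb`), not the PPRSF architecture itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, n_items, d, lr = 10, 50, 8, 0.05

item_emb = rng.normal(0, 0.1, (n_items, d))                   # shared global model
user_emb = [rng.normal(0, 0.1, d) for _ in range(n_clients)]  # never leaves the device
ratings = [{int(i): float(rng.integers(1, 6))
            for i in rng.choice(n_items, 5, replace=False)}
           for _ in range(n_clients)]                         # private interaction data

def local_update(c, items):
    # On-device training round: only the item-embedding delta is sent back.
    delta = np.zeros_like(items)
    for i, r in ratings[c].items():
        err = user_emb[c] @ items[i] - r
        delta[i] -= lr * err * user_emb[c]    # gradient step for the shared item factors
        user_emb[c] -= lr * err * items[i]    # private user factors stay local
    return delta

for rnd in range(20):                         # federated averaging rounds
    updates = [local_update(c, item_emb.copy()) for c in range(n_clients)]
    item_emb += np.mean(updates, axis=0)      # server aggregates, never sees raw data
```

The server only ever sees averaged embedding deltas; raw ratings and per-user factors stay on the client, which is the property the framework is built around.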
Investigating the Robustness of Sequential Recommender Systems Against Training Data Perturbations: An Empirical Study
Sequential Recommender Systems (SRSs) have been widely used to model user behavior over time, but their robustness in the face of perturbations to the training data is a critical issue. In this paper, we conduct an empirical study to investigate the effects of removing items at different positions within a temporally ordered sequence. We evaluate two different SRS models on multiple datasets, measuring their performance using Normalized Discounted Cumulative Gain (NDCG) and Rank Sensitivity List metrics. Our results demonstrate that removing items at the end of the sequence significantly impacts performance, with NDCG decreasing by up to 60%, while removing items from the beginning or middle has no significant effect. These findings highlight the importance of considering the position of the perturbed items in the training data and should inform the design of more robust SRSs.
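The perturbation protocol lends itself to a compact illustration. The sketch below drops a fraction of items from the beginning, middle, or end of a training sequence and scores a ranked list with binary-relevance NDCG@k; the helper names and the 20% removal fraction are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def perturb(seq, position, frac=0.2):
    """Drop a fraction of items from the start, middle, or end of a temporally
    ordered interaction sequence (illustrative protocol, not the paper's code)."""
    k = max(1, int(len(seq) * frac))
    if position == "beginning":
        return seq[k:]
    if position == "end":
        return seq[:-k]
    mid = len(seq) // 2                      # "middle": cut a window around the center
    return seq[:mid - k // 2] + seq[mid - k // 2 + k:]

def ndcg_at_k(ranked_items, relevant, k=10):
    # Binary-relevance NDCG@k for one user's ranked recommendation list.
    dcg = sum(1.0 / np.log2(r + 2) for r, it in enumerate(ranked_items[:k]) if it in relevant)
    idcg = sum(1.0 / np.log2(r + 2) for r in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0

seq = list(range(20))
print(perturb(seq, "end"))          # training sequence with its most recent items removed
print(ndcg_at_k([3, 7, 1], {1, 3}))
```

The study's comparison then amounts to retraining the SRS on each perturbed variant and contrasting the resulting NDCG values against the unperturbed baseline.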
Solving the stability-accuracy-diversity dilemma of recommender systems
Recommender systems are of great significance in predicting potentially interesting items based on the target user's historical selections. However, the recommendation list for a specific user has been found to change vastly when the system changes, owing to the unstable quantification of item similarities; this is defined as the recommendation stability problem. Improving similarity stability and recommendation stability is crucial for enhancing the user experience and for better understanding user interests. While both the stability and the accuracy of recommendation can be guaranteed by recommending only popular items, studies have stressed the necessity of diversity, which requires the system to recommend unpopular items. By ranking the similarities in terms of stability and considering only the most stable ones, we present a top-n-stability method based on the Heat Conduction algorithm (denoted TNS-HC henceforth) for solving the stability-accuracy-diversity dilemma. Experiments on four benchmark data sets indicate that the TNS-HC algorithm significantly improves recommendation stability and accuracy simultaneously while retaining the high-diversity nature of the Heat Conduction algorithm. Furthermore, we compare the performance of the TNS-HC algorithm with a number of benchmark recommendation algorithms. The results suggest that the TNS-HC algorithm is more efficient in solving the stability-accuracy-diversity dilemma of recommender systems.
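One way to picture the method: compute the Heat Conduction item-to-item transfer matrix, then prune all but each item's most stable similarity links. The sketch below estimates stability by comparing similarities computed on two random halves of the users, which is only one plausible reading of the paper's stability criterion; treat the pruning rule and all names as assumptions.

```python
import numpy as np

def heat_conduction(A):
    # Item-to-item transfer matrix of the Heat Conduction algorithm on the
    # user-item adjacency A (users x items): W[i,j] = (1/k_i) * sum_u A[u,i]*A[u,j]/k_u.
    k_item = A.sum(axis=0).clip(min=1)
    k_user = A.sum(axis=1).clip(min=1)
    return (A / k_user[:, None]).T @ A / k_item[:, None]

def topn_stable(A, n=20, rng=np.random.default_rng(2)):
    # Keep, for each item, only the n similarity links that agree best across
    # two random halves of the users (assumed stability estimate).
    half = rng.permutation(A.shape[0])
    W1 = heat_conduction(A[half[: len(half) // 2]])
    W2 = heat_conduction(A[half[len(half) // 2:]])
    instability = np.abs(W1 - W2)            # small change = stable similarity
    W = heat_conduction(A)
    for i in range(W.shape[0]):
        unstable = np.argsort(instability[i])[n:]   # indices of the least stable links
        W[i, unstable] = 0.0                        # prune them from the transfer matrix
    return W

A = (np.random.default_rng(3).random((50, 30)) < 0.2).astype(float)
scores = heat_conduction(A) @ A[0]           # plain heat-conduction scores for user 0
stable_scores = topn_stable(A) @ A[0]        # scores using only stable similarities
```

Because Heat Conduction naturally favors unpopular items, pruning to stable links targets the stability problem without discarding the algorithm's diversity advantage.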
Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start
E-commerce platforms provide their customers with ranked lists of recommended items matching the customers' preferences. Merchants on e-commerce platforms would like their items to appear as high as possible in the top-N of these ranked lists. In this paper, we demonstrate how unscrupulous merchants can create item images that artificially promote their products, improving their rankings. Recommender systems that use images to address the cold start problem are vulnerable to this security risk. We describe a new type of attack, Adversarial Item Promotion (AIP), that strikes directly at the core of top-N recommenders: the ranking mechanism itself. Existing work on adversarial images in recommender systems investigates the implications of conventional attacks, which target deep learning classifiers. In contrast, our AIP attacks are embedding attacks that seek to push feature representations in a way that fools the ranker (not a classifier) and directly leads to item promotion. We introduce three AIP attacks: insider attack, expert attack, and semantic attack, defined with respect to three successively more realistic attack models. Our experiments evaluate the danger of these attacks when mounted against three representative visually-aware recommender algorithms in a framework that uses images to address cold start. We also evaluate two common defenses against adversarial images in the classification scenario and show that these simple defenses do not eliminate the danger of AIP attacks. In sum, we show that using images to address cold start opens recommender systems to potential threats with clear practical implications. To facilitate future research, we release an implementation of our attacks and defenses, which allows reproduction and extension.
Comment: Our code is available at https://github.com/liuzrcc/AI
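The core embedding-attack idea can be sketched in a few lines: run gradient ascent on an item image, within a small L-infinity budget, so that its extracted features score highly for many users, thereby fooling the ranker directly. A linear map stands in for the CNN feature extractor here, and every name is illustrative; a real attack would backpropagate through the actual network.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, d, f, pix = 100, 16, 32, 64
P = rng.normal(0, 0.1, (n_users, d))   # user embeddings of the attacked recommender
E = rng.normal(0, 0.1, (f, d))         # maps image features into the latent space
W = rng.normal(0, 0.1, (f, pix))       # linear stand-in for the CNN feature extractor

def promote(x0, eps=0.05, steps=100, lr=0.01):
    # Gradient-ascent item promotion: nudge the item image so its extracted
    # features score highly for the average user, within an L-infinity budget eps.
    x = x0.copy()
    for _ in range(steps):
        # d/dx of mean_u p_u . E^T (W x) = W^T E mean_u p_u (constant for a linear
        # extractor; with a real CNN this gradient comes from backpropagation).
        grad = W.T @ (E @ P.mean(axis=0))
        x = np.clip(x + lr * np.sign(grad), x0 - eps, x0 + eps)
    return x

x0 = rng.random(pix)                        # the merchant's genuine item image
x_adv = promote(x0)
mean_score = lambda x: P @ (W @ x @ E)      # ranker's score of the image for each user
print(mean_score(x0).mean(), mean_score(x_adv).mean())
```

Note what is being optimized: not a class label but the ranking score itself, which is why classifier-oriented defenses transfer poorly to this setting.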
Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
Recommender systems play an important role in modern information and e-commerce applications. While a growing body of research is dedicated to improving the relevance and diversity of recommendations, the potential risks of state-of-the-art recommendation models are under-explored: these models can be subject to attacks from malicious third parties, who inject fake user interactions to achieve their goals. This paper revisits the adversarially-learned injection attack problem, where the injected fake user 'behaviors' are learned locally by the attackers with their own model, one that is potentially different from the model under attack but shares similar properties that allow the attack to transfer. We find that most existing works in the literature suffer from two major limitations: (1) they do not solve the optimization problem precisely, making the attack less harmful than it could be, and (2) they assume perfect knowledge for the attack, leaving realistic attack capabilities poorly understood. We demonstrate that solving the fake-user generation problem exactly, as an optimization problem, can lead to a much larger impact. Our experiments on a real-world dataset reveal important properties of the attack, including attack transferability and its limitations. These findings can inspire useful defensive methods against this type of attack.
Comment: Accepted at Recsys 2
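To make the attack's moving parts concrete, the sketch below builds a fake user profile against a local surrogate matrix-factorization model, choosing clicks that maximize the surrogate's predicted score for a target item. The paper treats fake-user generation as an exact optimization problem; this sketch substitutes a greedy search purely for readability, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_items, d, budget, target = 100, 8, 10, 7
Q = rng.normal(0, 0.3, (n_items, d))   # item factors of the attacker's local surrogate

def fold_in(clicked):
    # Surrogate's least-squares estimate of a user embedding from a click profile.
    return np.linalg.lstsq(Q[clicked], np.ones(len(clicked)), rcond=None)[0]

def craft_fake_user():
    # Greedily choose a click profile that maximizes the surrogate's predicted
    # score for the target item; the paper solves this generation step exactly.
    clicked = [target]                          # the fake user interacts with the target
    for _ in range(budget - 1):
        best, best_s = None, -np.inf
        for i in range(n_items):
            if i in clicked:
                continue
            s = fold_in(clicked + [i]) @ Q[target]
            if s > best_s:
                best, best_s = i, s
        clicked.append(best)
    return clicked

fake = craft_fake_user()     # inject this profile and rely on transfer to the real model
print(fake, fold_in(fake) @ Q[target])
```

The transferability question the paper studies is exactly the gap between this local surrogate and the deployed model: a profile optimized against `Q` only matters if it still promotes the target item after injection into the victim system.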