Data Poisoning Attacks on Linked Data with Graph Regularization
abstract: Social media has become the default medium of communication for many people, and its usage has grown exponentially over the last decade. Services such as Facebook, Twitter, Snapchat, and Instagram let people connect freely with their friends and followers. The number of attackers trying to exploit this situation has grown at a similar rate. Every social media service runs its own recommender systems and user-profiling algorithms, which use users' current information to make recommendations. The data generated by social media services is typically linked data, since each item/user is usually connected to other users/items. Because of their ubiquity and prominence, recommender systems are prone to several forms of attack. One major form is poisoning the training data: since recommender systems use current user/item information as their training set, an attacker modifies the training set so that the recommender system either benefits the attacker or produces incorrect recommendations, thereby failing in its basic function. Most existing training-set attack algorithms work with "flat" attribute-value data that is typically assumed to be independent and identically distributed (i.i.d.). However, the i.i.d. assumption does not hold for social media data, which is inherently linked as described above. Using user similarity with a graph regularizer to morph the training data yields the best results for the attacker. This thesis supports that claim with experiments on collaborative filtering across multiple datasets.
Dissertation/Thesis. Masters Thesis Computer Science 201
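To make the idea concrete, here is a minimal NumPy sketch of a graph-regularized poisoning objective of the kind the abstract describes: an attack term pulls predicted ratings toward the attacker's targets, while a Laplacian smoothness term keeps each morphed user profile close to the profiles of similar users. All names, shapes, and the similarity matrix are illustrative assumptions, not the thesis's actual implementation.

import numpy as np

def graph_laplacian(S):
    """Unnormalized Laplacian L = D - S of a user-similarity matrix S."""
    return np.diag(S.sum(axis=1)) - S

def attacker_objective(R_poisoned, target_scores, S, lam=0.1):
    """Toy poisoning objective: push ratings toward the attacker's targets
    while a graph-regularization term ties each (possibly morphed) user
    profile to the profiles of similar users, so the poisoned data still
    respects the linked, non-i.i.d. structure of the original data.
    """
    L = graph_laplacian(S)
    attack_term = np.sum((R_poisoned - target_scores) ** 2)
    # tr(R^T L R) penalizes profiles that differ from their graph neighbors
    smoothness = np.trace(R_poisoned.T @ L @ R_poisoned)
    return attack_term + lam * smoothness

# Tiny example: 4 users x 3 items, two pairs of mutually similar users
rng = np.random.default_rng(0)
R = rng.random((4, 3))
target = R.copy()
target[:, 2] = 1.0  # attacker wants item 2 rated highly by everyone
S = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], float)
print(attacker_objective(R, target, S))

An attacker would minimize this objective over the perturbation of R; the smoothness term is what distinguishes the linked-data attack from attacks on flat i.i.d. data.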
FedRecAttack: Model Poisoning Attack to Federated Recommendation
Federated Recommendation (FR) has gained considerable popularity and attention in the past few years. In FR, each user's feature vector and interaction data are kept locally on the user's own client and are thus private to others. Without access to this information, most existing poisoning attacks against recommender systems or federated learning lose validity. Benefiting from this characteristic, FR is commonly considered fairly secure. However, we argue that security improvements in FR are still both possible and necessary. To support this claim, in this paper we present FedRecAttack, a
model poisoning attack on FR that aims to raise the exposure ratio of target
items. In most recommendation scenarios, apart from private user-item
interactions (e.g., clicks, watches and purchases), some interactions are
public (e.g., likes, follows and comments). Motivated by this point, in
FedRecAttack we make use of the public interactions to approximate users' feature vectors, so that the attacker can generate poisoned gradients accordingly and have malicious users upload them in a well-designed way. To evaluate the effectiveness and side effects of FedRecAttack, we conduct
extensive experiments on three real-world datasets of different sizes from two
completely different scenarios. Experimental results demonstrate that our proposed FedRecAttack achieves state-of-the-art effectiveness while its side effects are negligible. Moreover, even with a small proportion (3%) of malicious users and a small proportion (1%) of public interactions, FedRecAttack remains highly effective, which reveals that FR is more vulnerable to attack than commonly believed.
Comment: This paper has been accepted by IEEE International Conference on Data Engineering 2022 (Second Research Round)
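As a rough illustration of the mechanism the FedRecAttack abstract describes, the sketch below assumes a matrix-factorization FR protocol in which clients can see the item embeddings and upload item-embedding gradients: it approximates user vectors from public interactions only, crafts a gradient that raises the target item's scores, and clips the upload to mimic benign magnitudes. Every name, shape, and constant here is an assumption for illustration, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 100, 50, 16
target_item = 7

# Server-side item embeddings (visible to clients in this assumed FR setup)
V = rng.normal(size=(n_items, d))

# Step 1: approximate user feature vectors from *public* interactions only,
# e.g., average the embeddings of items a user publicly liked or followed.
public = rng.random((n_users, n_items)) < 0.05  # public interaction mask
U_hat = public @ V / np.maximum(public.sum(1, keepdims=True), 1)

# Step 2: craft a poisoned gradient for the target item's embedding that
# increases its predicted score u^T v for the approximated users
# (gradient of the loss 0.5 * sum_u (1 - u^T v)^2 with respect to v).
scores = U_hat @ V[target_item]
grad = -(1.0 - scores) @ U_hat / n_users

# Step 3: malicious clients upload the poisoned gradient, clipped so the
# upload looks like a benign update in magnitude.
clip = 1.0
poisoned_update = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))
V[target_item] -= 0.1 * poisoned_update  # server applies the "update"
print(float((U_hat @ V[target_item]).mean()))  # target's mean score rises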
Manipulating Federated Recommender Systems: Poisoning with Synthetic Users and Its Countermeasures
Federated Recommender Systems (FedRecs) are considered privacy-preserving
techniques to collaboratively learn a recommendation model without sharing user
data. Since all participants can directly influence the system by uploading gradients, FedRecs are vulnerable to poisoning attacks from malicious clients.
However, most existing poisoning attacks on FedRecs either rely on prior knowledge or have limited effectiveness. To reveal the real vulnerability of FedRecs, in this paper we present a new poisoning attack method that effectively manipulates target items' ranks and exposure rates in top-K recommendation without relying on any prior knowledge. Specifically, our attack manipulates target items' exposure rates through a group of synthetic malicious users who upload poisoned gradients that take target items' alternative products into account.
We conduct extensive experiments with two widely used FedRecs (Fed-NCF and
Fed-LightGCN) on two real-world recommendation datasets. The experimental
results show that our attack can significantly improve the exposure rate of
unpopular target items with far fewer malicious users and fewer global
epochs than state-of-the-art attacks. In addition to disclosing the security
hole, we design a novel countermeasure for poisoning attacks on FedRecs.
Specifically, we propose a hierarchical gradient clipping with sparsified
updating to defend against existing poisoning attacks. The empirical results
demonstrate that the proposed defense mechanism improves the robustness of FedRecs.
Comment: This paper has been accepted by SIGIR202
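The abstract does not spell out the defense, so the following is only a guess at its shape: a toy "hierarchical gradient clipping with sparsified updating" that clips each item-embedding row, then the whole upload, then zeroes all but the largest-magnitude entries. The thresholds, shapes, and function name are invented for illustration and are not the paper's actual mechanism.

import numpy as np

def defend_update(grad, item_clip=1.0, global_clip=5.0, keep_frac=0.1):
    """Toy two-level (hierarchical) clipping plus sparsified updating:
    bound each item-embedding row, then the full upload, then keep only
    the top fraction of entries by magnitude.
    """
    g = grad.copy()
    # Level 1: clip each item-embedding row separately
    row_norms = np.linalg.norm(g, axis=1, keepdims=True)
    g *= np.minimum(1.0, item_clip / (row_norms + 1e-12))
    # Level 2: clip the full (flattened) upload
    total = np.linalg.norm(g)
    g *= min(1.0, global_clip / (total + 1e-12))
    # Sparsified updating: zero everything below the top-k magnitude cutoff
    flat = np.abs(g).ravel()
    k = max(1, int(keep_frac * flat.size))
    thresh = np.partition(flat, -k)[-k]
    g[np.abs(g) < thresh] = 0.0
    return g

rng = np.random.default_rng(0)
poisoned = rng.normal(scale=10.0, size=(50, 16))  # oversized malicious upload
print(np.linalg.norm(poisoned), np.linalg.norm(defend_update(poisoned)))

Clipping bounds how much any single client can move the global model, and sparsification further limits how many parameters a poisoned upload can touch at once.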
How Fraudster Detection Contributes to Robust Recommendation
The adversarial robustness of recommendation systems under node injection
attacks has received considerable research attention. Recently, a robust
recommendation system GraphRfi was proposed, and it was shown that GraphRfi
could successfully mitigate the effects of injected fake users in the system.
Unfortunately, we demonstrate that GraphRfi is still vulnerable to attacks due
to the supervised nature of its fraudster detection component. Specifically, we
propose a new attack, metaC, against GraphRfi, and further analyze why GraphRfi
fails under such an attack. Based on the insights we obtained from the
vulnerability analysis, we build a new robust recommendation system PDR by
re-designing the fraudster detection component. Comprehensive experiments show
that our defense approach outperforms other benchmark methods under attacks.
Overall, our research demonstrates an effective framework for integrating fraudster detection into recommendation to achieve adversarial robustness.
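As a rough sketch of the general idea of integrating fraudster detection into recommendation training (not GraphRfi's or PDR's actual method), one can down-weight each user's contribution to the recommendation loss by a detector's estimated probability that the user is fake. In the toy example below, all numbers and names are made up.

import numpy as np

def weighted_rec_loss(pred, truth, fraud_prob):
    """Toy coupling of fraudster detection with recommendation: each
    user's rating loss is weighted by (1 - P(fraud)), so injected
    profiles flagged by the detector contribute little to training.
    fraud_prob could come from any fraudster detector.
    """
    per_user = ((pred - truth) ** 2).mean(axis=1)  # squared error per user
    weights = 1.0 - fraud_prob                     # trust = 1 - P(fraud)
    return float((weights * per_user).sum() / weights.sum())

rng = np.random.default_rng(0)
pred, truth = rng.random((6, 4)), rng.random((6, 4))
fraud_prob = np.array([0.02, 0.05, 0.9, 0.01, 0.95, 0.1])  # users 2, 4 look fake
print(weighted_rec_loss(pred, truth, fraud_prob))

The abstract's point is that when such a detector is trained in a purely supervised way, an adaptive attack (like metaC) can evade it, which is what motivates re-designing the detection component.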