Privacy-Aware Recommendation with Private-Attribute Protection using Adversarial Learning
Recommendation is one of the critical applications that helps users find
information relevant to their interests. However, a malicious attacker can
infer users' private information via recommendations. Prior work obfuscates
user-item data before sharing it with the recommender system. This approach
does not explicitly account for recommendation quality during obfuscation,
and it cannot protect users against private-attribute inference attacks based
on the recommendations themselves. This work is the first attempt to
build a Recommendation with Attribute Protection (RAP) model which
simultaneously recommends relevant items and counters private-attribute
inference attacks. The key idea of our approach is to formulate this problem as
an adversarial learning problem with two main components: the private attribute
inference attacker, and the Bayesian personalized recommender. The attacker
seeks to infer users' private attributes from their item lists and
recommendations. The recommender aims to extract users' interests while
employing the attacker to regularize the recommendation process.
Experiments show that the proposed model both preserves the quality of
recommendation service and protects users against private-attribute inference
attacks.
Comment: The Thirteenth ACM International Conference on Web Search and Data Mining (WSDM 2020)
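A minimal PyTorch sketch of the adversarial formulation this abstract describes (the model sizes, names, and the BPR-style ranking loss are illustrative assumptions, not the authors' released code): the attacker learns to predict a private attribute from user representations, and the recommender subtracts the attacker's success from its own ranking objective, so the learned embeddings stay useful for recommendation but uninformative about the attribute.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Recommender(nn.Module):
    """Personalized recommender: user/item embeddings scored by dot product."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)

    def score(self, u, i):
        return (self.user(u) * self.item(i)).sum(-1)

class AttributeAttacker(nn.Module):
    """Infers a private attribute (e.g. gender) from a user representation."""
    def __init__(self, dim=32, n_classes=2):
        super().__init__()
        self.clf = nn.Linear(dim, n_classes)

    def forward(self, user_emb):
        return self.clf(user_emb)

rec, atk = Recommender(1000, 5000), AttributeAttacker()
opt_rec = torch.optim.Adam(rec.parameters(), lr=1e-3)
opt_atk = torch.optim.Adam(atk.parameters(), lr=1e-3)
lam = 0.5  # assumed trade-off between recommendation quality and protection

def train_step(u, pos, neg, attr):
    # 1) Attacker step: learn to predict the private attribute from the
    #    (detached) user embeddings.
    atk_loss = F.cross_entropy(atk(rec.user(u).detach()), attr)
    opt_atk.zero_grad(); atk_loss.backward(); opt_atk.step()

    # 2) Recommender step: a BPR-style ranking loss minus the attacker's
    #    success, so the attacker regularizes the recommendation process.
    bpr = -F.logsigmoid(rec.score(u, pos) - rec.score(u, neg)).mean()
    adv = F.cross_entropy(atk(rec.user(u)), attr)
    loss = bpr - lam * adv
    opt_rec.zero_grad(); loss.backward(); opt_rec.step()

u = torch.randint(0, 1000, (64,))
pos, neg = torch.randint(0, 5000, (64,)), torch.randint(0, 5000, (64,))
attr = torch.randint(0, 2, (64,))
train_step(u, pos, neg, attr)

The weight lam governs how much recommendation accuracy is traded for resistance to attribute inference.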
Privacy Intelligence: A Survey on Image Sharing on Online Social Networks
Image sharing on online social networks (OSNs) has become an indispensable
part of daily social activities, but it has also led to an increased risk of
privacy invasion. The recent image leaks from popular OSN services and the
abuse of personal photos using advanced algorithms (e.g. DeepFake) have
prompted the public to rethink individual privacy needs when sharing images on
OSNs. However, OSN image sharing itself is relatively complicated, and systems
currently in place to manage privacy in practice are labor-intensive yet fail
to provide personalized, accurate and flexible privacy protection. As a result,
a more intelligent environment for privacy-friendly OSN image sharing is in
demand. To fill the gap, we contribute a systematic survey of 'privacy
intelligence' solutions that target modern privacy issues related to OSN image
sharing. Specifically, we present a high-level analysis framework based on the
entire lifecycle of OSN image sharing to address the various privacy issues and
solutions facing this interdisciplinary field. The framework is divided into
three main stages: local management, online management and social experience.
At each stage, we identify typical sharing-related user behaviors and the
privacy issues they generate, and review representative intelligent solutions.
The resulting analysis describes an intelligent privacy-enhancing
chain for closed-loop privacy management. We also discuss the challenges and
future directions existing at each stage, as well as in publicly available
datasets.
Comment: 32 pages, 9 figures. Under review.
A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability
Graph Neural Networks (GNNs) have developed rapidly in recent years. Owing to
their great ability to model graph-structured data, GNNs are widely used in
various applications, including high-stakes scenarios such as financial
analysis, traffic prediction, and drug discovery. Despite their great potential
to benefit humans in the real world, recent studies show that GNNs can leak
private information, are vulnerable to adversarial attacks, can inherit and
magnify societal bias from training data, and lack interpretability, all of
which risk causing unintentional harm to users and society. For example,
existing works demonstrate that attackers can fool GNNs into producing the
outcomes they desire through unnoticeable perturbations of the training graph,
and GNNs trained on social networks may embed discrimination in their decision
processes, reinforcing undesirable societal biases. Consequently, trustworthy
GNNs are emerging in various aspects to prevent harm from GNN models and to
increase users' trust in GNNs. In this paper, we give a comprehensive survey of
GNNs along the computational aspects of privacy, robustness, fairness, and
explainability. For each aspect, we give a taxonomy of the related methods and
formulate general frameworks for the multiple categories of trustworthy GNNs.
We also discuss future research directions for each aspect and the connections
between these aspects that help achieve trustworthiness.
Making Users Indistinguishable: Attribute-wise Unlearning in Recommender Systems
With the growing privacy concerns in recommender systems, recommendation
unlearning, i.e., forgetting the impact of specific learned targets, is getting
increasing attention. Existing studies predominantly use training data, i.e.,
model inputs, as the unlearning target. However, we find that attackers can
extract private information, e.g., gender, race, and age, from a trained model
even when it was never explicitly encountered during training. We name this
unseen information the attribute and treat it as the unlearning target. To
protect the sensitive attribute of users, Attribute Unlearning (AU) aims to
degrade attacking performance and make target attributes indistinguishable. In
this paper, we focus on a strict but practical setting of AU, namely
Post-Training Attribute Unlearning (PoT-AU), where unlearning can only be
performed after the training of the recommendation model is completed. To
address the PoT-AU problem in recommender systems, we design a two-component
loss function that consists of i) distinguishability loss: making attribute
labels indistinguishable from attackers, and ii) regularization loss:
preventing drastic changes in the model that result in a negative impact on
recommendation performance. Specifically, we investigate two types of
distinguishability measurements, i.e., user-to-user and
distribution-to-distribution. We use the stochastic gradient descent algorithm
to optimize our proposed loss. Extensive experiments on three real-world
datasets demonstrate the effectiveness of our proposed methods.
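A hedged PyTorch sketch of the two-component PoT-AU loss described above (the embedding shapes, the user-to-user distinguishability term, and the weight lam are illustrative assumptions): starting from the frozen embeddings of an already-trained recommender, the first term pulls users with different attribute labels together, while the second keeps the model close to its original state.

import torch
import torch.nn.functional as F

def pot_au_loss(U, U_orig, labels, lam=1.0):
    """U: (n_users, d) trainable embeddings; U_orig: frozen pre-trained copy;
    labels: (n_users,) private-attribute labels (e.g. 0/1 gender)."""
    # i) Distinguishability loss (user-to-user variant): shrink the distance
    #    between users holding *different* attribute labels, so an attacker
    #    cannot separate them.
    dists = torch.cdist(U, U)                          # pairwise distances
    diff = labels.unsqueeze(0) != labels.unsqueeze(1)  # cross-label mask
    distinguishability = (dists * diff).sum() / diff.sum()

    # ii) Regularization loss: prevent drastic changes that would hurt
    #     recommendation performance.
    regularization = F.mse_loss(U, U_orig)

    return distinguishability + lam * regularization

# Optimized post-training with stochastic gradient descent, as the abstract
# states; the sizes here are toy values.
U_orig = torch.randn(100, 32)
U = U_orig.clone().requires_grad_(True)
labels = torch.randint(0, 2, (100,))
opt = torch.optim.SGD([U], lr=0.1)
for _ in range(50):
    loss = pot_au_loss(U, U_orig, labels)
    opt.zero_grad(); loss.backward(); opt.step()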
Secure Split Learning against Property Inference, Data Reconstruction, and Feature Space Hijacking Attacks
Split learning of deep neural networks (SplitNN) has provided a promising
solution for a guest and a host, who may come from different backgrounds and
hold vertically partitioned features, to learn jointly for their mutual
benefit. However, SplitNN creates a new attack surface for the adversarial
participant, holding back its practical use in the real world. By investigating
the adversarial effects of highly threatening attacks, including property
inference, data reconstruction, and feature space hijacking attacks, we
identify the underlying vulnerability of SplitNN and propose a countermeasure.
To prevent
potential threats and ensure the learning guarantees of SplitNN, we design a
privacy-preserving tunnel for information exchange between the guest and the
host. The intuition is to perturb the propagation of knowledge in each
direction with a controllable unified solution. To this end, we propose a new
activation function named R3eLU, transferring private smashed data and partial
loss into randomized responses in forward and backward propagations,
respectively. We make the first attempt to secure split learning against these
three threatening attacks and present a fine-grained privacy budget allocation
scheme. The analysis proves that our privacy-preserving SplitNN solution
provides a tight privacy budget, while the experimental results show that our
solution performs better than existing solutions in most cases and achieves a
good tradeoff between defense and model usability.
Comment: 23 pages.
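The abstract does not spell out R3eLU, so the following PyTorch sketch only illustrates the stated intuition: perturb the knowledge propagated in each direction of the split into randomized responses. The concrete mechanism shown here (clipping plus Laplace noise at the cut layer, with an epsilon budget per direction) is an assumption for illustration, not the authors' construction.

import torch

def randomized_relu(x, epsilon=1.0, clip=1.0):
    """Forward pass at the split point: the guest releases a randomized
    response instead of the raw smashed data."""
    x = torch.clamp(torch.relu(x), max=clip)  # bound sensitivity first
    noise = torch.distributions.Laplace(0.0, clip / epsilon).sample(x.shape)
    return x + noise

def randomized_grad(g, epsilon=1.0, clip=1.0):
    """Backward pass: the partial loss (gradient) sent back is likewise
    perturbed, spending part of a fine-grained privacy budget."""
    g = torch.clamp(g, min=-clip, max=clip)
    noise = torch.distributions.Laplace(0.0, clip / epsilon).sample(g.shape)
    return g + noise

smashed = randomized_relu(torch.randn(8, 64), epsilon=2.0)  # guest -> host
grads = randomized_grad(torch.randn(8, 64), epsilon=2.0)    # host -> guest

Splitting the budget between the forward and backward directions is what makes a fine-grained allocation scheme, of the kind the paper presents, necessary.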
Facial Data Minimization: Shallow Model as Your Privacy Filter
Face recognition services are used in many fields and bring great convenience
to people. However, once a user's facial data is transmitted to a service
provider, the user loses control of his/her private data. In recent
years, there exist various security and privacy issues due to the leakage of
facial data. Although many privacy-preserving methods have been proposed, they
usually fail when adversaries' strategies or auxiliary data are unknown to
them. Hence, in this paper, fully considering the two cases of uploading facial
images and uploading facial features, both typical in face recognition service
systems, we propose a data privacy minimization transformation (PMT) method.
This method processes the original facial data
based on the shallow model of authorized services to obtain the obfuscated
data. The obfuscated data not only maintain satisfactory performance on
authorized models while restricting performance on unauthorized models, but
also prevent the original private data from being leaked through AI methods or
human visual theft. Additionally, since a service provider may execute
preprocessing
operations on the received data, we also propose an enhanced perturbation
method to improve the robustness of PMT. Furthermore, to authorize one facial image
to multiple service models simultaneously, a multiple restriction mechanism is
proposed to improve the scalability of PMT. Finally, we conduct extensive
experiments and evaluate the effectiveness of the proposed PMT in defending
against face reconstruction, data abuse, and face attribute estimation attacks.
These experimental results demonstrate that PMT performs well in preventing
facial data abuse and privacy leakage while maintaining face recognition
accuracy.
Comment: 14 pages, 11 figures.
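A minimal PyTorch sketch of the PMT idea as the abstract describes it (the loss terms, step counts, and toy shallow model are assumptions, not the authors' code): optimize the obfuscated image so that the authorized service's shallow layers produce nearly the same features, while the pixels drift away from the original, limiting human visual theft and unauthorized models.

import torch
import torch.nn.functional as F

def pmt_obfuscate(x, shallow_authorized, steps=200, lr=0.05, beta=0.1):
    """x: original facial image batch; shallow_authorized: the first few
    layers of the authorized recognition model (frozen)."""
    target = shallow_authorized(x).detach()  # features the service needs
    x_obf = (x + 0.3 * torch.randn_like(x)).requires_grad_(True)
    opt = torch.optim.Adam([x_obf], lr=lr)
    for _ in range(steps):
        # Keep the authorized model's shallow features intact ...
        feat_loss = F.mse_loss(shallow_authorized(x_obf), target)
        # ... while pushing the pixels away from the private original.
        visual_loss = -F.mse_loss(x_obf, x)
        loss = feat_loss + beta * visual_loss
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            x_obf.clamp_(0.0, 1.0)  # keep a valid image
    return x_obf.detach()

# Toy stand-in for the authorized service's shallow layers.
shallow = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1),
                              torch.nn.ReLU())
for p in shallow.parameters():
    p.requires_grad_(False)
obfuscated = pmt_obfuscate(torch.rand(1, 3, 64, 64), shallow)

Because only the authorized model's shallow features are preserved, other (unauthorized) models, which rely on different features, see degraded inputs, which is the restriction effect the abstract claims.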