VGAN-Based Image Representation Learning for Privacy-Preserving Facial Expression Recognition
Reliable facial expression recognition plays a critical role in human-machine
interactions. However, most of the facial expression analysis methodologies
proposed to date pay little or no attention to the protection of a user's
privacy. In this paper, we propose a Privacy-Preserving Representation-Learning
Variational Generative Adversarial Network (PPRL-VGAN) to learn an image
representation that is explicitly disentangled from the identity information.
At the same time, this representation is discriminative from the standpoint of
facial expression recognition, and generative in that it allows expression-equivalent
face image synthesis. We evaluate the proposed model on two public datasets
under various threat scenarios. Quantitative and qualitative results
demonstrate that our approach strikes a balance between the preservation of
privacy and data utility. We further demonstrate that our model can be
effectively applied to other tasks such as expression morphing and image
completion.
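The disentanglement idea admits a compact sketch: train the encoder so its representation supports expression classification while an identity classifier trained against it is driven to chance level. The following is a minimal illustration under assumed module names (enc, expr_head, id_head) and a uniform-posterior confusion loss, not the authors' exact PPRL-VGAN objective.

```python
import torch.nn.functional as F

def encoder_step(enc, expr_head, id_head, x, y_expr, lam=1.0):
    """Encoder update: keep z discriminative for expression while pushing
    the identity classifier toward a uniform (chance-level) posterior.
    The caller should step only enc/expr_head parameters on this loss."""
    z = enc(x)
    loss_expr = F.cross_entropy(expr_head(z), y_expr)
    # Cross-entropy against the uniform distribution over identities;
    # minimal when id_head(z) carries no identity information.
    loss_conf = -F.log_softmax(id_head(z), dim=1).mean()
    return loss_expr + lam * loss_conf

def adversary_step(enc, id_head, x, y_id):
    """Adversary update: the identity classifier does its best to recover
    identity from the frozen (detached) representation."""
    z = enc(x).detach()
    return F.cross_entropy(id_head(z), y_id)
```

In the paper a conditional decoder completes the picture, synthesizing a face with a chosen identity but the original expression, which is what enables the expression morphing and completion applications mentioned above.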
Adversarial Learning of Privacy-Preserving and Task-Oriented Representations
Data privacy has emerged as an important issue as data-driven deep learning
has become an essential component of modern machine learning systems. For
instance, machine learning systems carry a privacy risk via the model
inversion attack, whose goal is to reconstruct the input data
from the latent representation of deep networks. Our work aims at learning a
privacy-preserving and task-oriented representation to defend against such
model inversion attacks. Specifically, we propose an adversarial reconstruction
learning framework that prevents the latent representations from being decoded
into the original input data. By simulating the expected behavior of the adversary, our
framework is realized by minimizing the negative pixel reconstruction loss or
the negative feature reconstruction (i.e., perceptual distance) loss. We
validate the proposed method on face attribute prediction, showing that our
method protects visual privacy with only a small decrease in utility
performance. In addition, we show the utility-privacy trade-off under different
choices of the hyperparameter weighting the negative perceptual distance loss
during training, allowing service providers to choose the level of privacy
protection that matches a required utility. Moreover, we provide an extensive study
with different selections of features, tasks, and data to further analyze
their influence on privacy protection.
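At a high level the framework alternates between a simulated inversion adversary and the encoder. The sketch below is a rough rendering under assumed names (f encoder, h task head, g decoder playing the adversary), not the paper's exact training procedure.

```python
import torch.nn.functional as F

def adversary_step(g, f, x, opt_g):
    """The simulated adversary g learns to invert representations back to
    pixels, mimicking a model inversion attack."""
    z = f(x).detach()                    # the adversary cannot move the encoder
    loss = F.mse_loss(g(z), x)           # pixel reconstruction loss
    opt_g.zero_grad(); loss.backward(); opt_g.step()

def encoder_step(f, h, g, x, y, opt_fh, lam=0.5):
    """Encoder f and task head h minimize the task loss plus the *negative*
    reconstruction loss, keeping z useful but hard to invert."""
    z = f(x)
    loss = F.cross_entropy(h(z), y) - lam * F.mse_loss(g(z), x)
    opt_fh.zero_grad(); loss.backward(); opt_fh.step()
```

Swapping the pixel MSE for a distance in a fixed feature space gives the negative feature reconstruction (perceptual distance) variant, whose weight lam is the hyperparameter the paper sweeps for the utility-privacy trade-off.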
Secure Image Encoding Transformation Using Sensitive-Information Transition
Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, August 2022. Advisor: Byoung-Tak Zhang.
Local Differential Privacy (LDP) is a widely accepted mathematical notion of
privacy that guarantees a quantified privacy budget on sensitive data. However,
it is difficult to apply LDP algorithms to unstructured data such as images,
since the fundamental mechanism underlying many LDP algorithms, Randomized
Response (RR), is suited to structured, tabular data. In this paper, we propose
a novel task-agnostic LDP framework that preserves the privacy of selected
sensitive attributes in an image representation while conserving other visual
aspects. Our framework includes an adversarially trained transition model that
realizes the RR mechanism, allowing it to be easily utilized in other LDP
algorithms. We provide a rigorous description of the problem formulation, and
show how our model can prevent attacks from a potential adversary trying to
obtain the sensitive information. Our experimental results verify that the
proposed framework outperforms baseline models in protecting sensitive
attributes with minimal performance loss on arbitrary downstream tasks.
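For context on why RR favors tabular data: each user locally perturbs a single categorical value, and the flip probability alone fixes the privacy budget. Here is a minimal sketch of classic (Warner) randomized response for one bit, independent of the thesis's learned transition model.

```python
import math
import random

def randomized_response(bit: bool, eps: float) -> bool:
    """Report the true bit with probability p = e^eps / (e^eps + 1), else
    flip it. The likelihood ratio of any output under the two possible
    inputs is p / (1 - p) = e^eps, so each report satisfies eps-LDP."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    return bit if random.random() < p else not bit

def estimate_proportion(reports, eps):
    """Unbiased estimate of the true fraction of 1s from noisy reports:
    E[observed] = (2p - 1) * pi + (1 - p), solved for pi."""
    p = math.exp(eps) / (math.exp(eps) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

The thesis can then be read as replacing this hand-coded bit flip with an adversarially trained transition model that applies an analogous randomized transition to sensitive attributes inside an image representation.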
1 Introduction
1.1 Introduction
2 Related Works
2.1 Privacy-Preserving Machine Learning
2.2 Differential Privacy
3 Problem Formulation
4 Method
4.1 Attribute Inference Attack of the Adversary
4.1.1 Differential distance learning
4.2 Task-agnostic attribute transition model
4.2.1 GAN Architecture
4.2.2 Distributional Transition Loss
4.2.3 Unsensitive attribute preservation
4.3 Local Differentially Private Image Representation Transition Framework
5 Experimental Results
5.1 Experimental Setup
5.2 Multi-Label Classification
5.2.1 Sensitive attribute transition evaluation
5.3 Evaluation on Other Attributes
5.4 Qualitative Results
5.5 Experiment on CheXpert dataset
6 Conclusion
Bibliography
Abstract (in Korean)
Privacy Enhanced Multimodal Neural Representations for Emotion Recognition
Many mobile applications and virtual conversational agents now aim to
recognize and adapt to emotions. To enable this, data are transmitted from
users' devices and stored on central servers. Yet, these data contain sensitive
information that could be used by mobile applications without a user's consent
or, maliciously, by an eavesdropping adversary. In this work, we show how
multimodal representations trained for a primary task, here emotion
recognition, can unintentionally leak demographic information, which could
override an opt-out option selected by the user. We analyze how this leakage
differs in representations obtained from textual, acoustic, and multimodal
data. We use an adversarial learning paradigm to unlearn the private
information present in a representation and investigate the effect of varying
the strength of the adversarial component on the primary task and on the
privacy metric, defined here as the inability of an attacker to predict
specific demographic information. We evaluate this paradigm on multiple
datasets and show that we can improve the privacy metric while not
significantly impacting the performance on the primary task. To the best of our
knowledge, this is the first work to analyze how the privacy metric differs
across modalities and how multiple privacy concerns can be tackled while still
maintaining performance on emotion recognition.
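A common way to implement such an adversarial unlearning component, though not necessarily the authors' exact setup, is a gradient-reversal layer, where the scale lam plays the role of the strength of the adversarial component varied in the paper; the module names (enc, emo_head, dem_head) are placeholders.

```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; scales gradients by -lam on the
    backward pass, so the encoder is pushed away from features that
    help the demographic adversary."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def joint_loss(enc, emo_head, dem_head, x, y_emo, y_dem, lam):
    """Primary emotion loss plus an adversarial demographic loss routed
    through gradient reversal; larger lam buys more privacy at some cost
    to the primary task."""
    z = enc(x)
    loss_emo = F.cross_entropy(emo_head(z), y_emo)
    loss_dem = F.cross_entropy(dem_head(GradReverse.apply(z, lam)), y_dem)
    return loss_emo + loss_dem
```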
Survey: Leakage and Privacy at Inference Time
Leakage of data from publicly available Machine Learning (ML) models is an
area of growing significance as commercial and government applications of ML
can draw on multiple sources of data, potentially including users' and clients'
sensitive data. We provide a comprehensive survey of contemporary advances on
several fronts, covering involuntary data leakage, which is natural to ML
models; potential malevolent leakage, which is caused by privacy attacks; and
currently available defence mechanisms. We focus on inference-time leakage, as
the most likely scenario for publicly available models. We first discuss what
leakage is in the context of different data, tasks, and model architectures. We
then propose a taxonomy spanning involuntary and malevolent leakage and the
available defences, followed by the currently available assessment metrics and
applications. We conclude with outstanding challenges and open questions,
outlining some promising directions for future research.
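As one concrete instance of the inference-time leakage the survey covers, here is a minimal confidence-thresholding membership-inference baseline, a generic textbook attack rather than anything proposed in the survey itself; the threshold would be calibrated on held-out data in practice.

```python
import numpy as np

def confidence_membership_attack(probs: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Flag inputs whose top softmax probability exceeds a threshold as
    likely training-set members, exploiting the tendency of models to be
    more confident on data they were trained on. `probs` is (n, n_classes)."""
    return probs.max(axis=1) > threshold
```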
- β¦