3,591 research outputs found

    VGAN-Based Image Representation Learning for Privacy-Preserving Facial Expression Recognition

    Full text link
    Reliable facial expression recognition plays a critical role in human-machine interactions. However, most facial expression analysis methodologies proposed to date pay little or no attention to the protection of a user's privacy. In this paper, we propose a Privacy-Preserving Representation-Learning Variational Generative Adversarial Network (PPRL-VGAN) to learn an image representation that is explicitly disentangled from identity information. At the same time, this representation is discriminative from the standpoint of facial expression recognition and generative in that it allows expression-equivalent face image synthesis. We evaluate the proposed model on two public datasets under various threat scenarios. Quantitative and qualitative results demonstrate that our approach strikes a balance between the preservation of privacy and data utility. We further demonstrate that our model can be effectively applied to other tasks such as expression morphing and image completion.
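
    The abstract describes a representation that stays useful for expression recognition while hiding identity. Below is a minimal, hypothetical sketch of that kind of adversarial disentanglement objective; the module names, sizes, and the uniform-target privacy loss are illustrative assumptions, not the PPRL-VGAN architecture.

```python
# Minimal sketch of an identity-disentangled but expression-discriminative
# representation. Module names, sizes, and losses are illustrative assumptions,
# not the PPRL-VGAN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT, N_EXPR, N_IDS = 128, 7, 100  # assumed latent size, expression classes, identities

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, LATENT))
expr_head = nn.Linear(LATENT, N_EXPR)        # utility branch: expression recognition
id_discriminator = nn.Linear(LATENT, N_IDS)  # privacy adversary: identity prediction

opt_enc = torch.optim.Adam(list(encoder.parameters()) + list(expr_head.parameters()), lr=1e-4)
opt_dis = torch.optim.Adam(id_discriminator.parameters(), lr=1e-4)

def train_step(images, expr_labels, id_labels, privacy_weight=1.0):
    """images: (B, 1, 64, 64) grayscale faces (assumed input size)."""
    z = encoder(images)

    # 1) Adversary step: learn to recover identity from a detached representation.
    dis_loss = F.cross_entropy(id_discriminator(z.detach()), id_labels)
    opt_dis.zero_grad(); dis_loss.backward(); opt_dis.step()

    # 2) Encoder step: stay discriminative for expressions while hiding identity,
    #    here by pushing the adversary's output toward a uniform distribution.
    expr_loss = F.cross_entropy(expr_head(z), expr_labels)
    id_logits = id_discriminator(z)
    uniform_target = torch.full_like(id_logits, 1.0 / N_IDS)
    privacy_loss = F.cross_entropy(id_logits, uniform_target)
    total = expr_loss + privacy_weight * privacy_loss
    opt_enc.zero_grad(); total.backward(); opt_enc.step()
    return expr_loss.item(), dis_loss.item()
```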

    Adversarial Learning of Privacy-Preserving and Task-Oriented Representations

    Full text link
    Data privacy has emerged as an important issue as data-driven deep learning has become an essential component of modern machine learning systems. For instance, machine learning systems face a potential privacy risk from model inversion attacks, whose goal is to reconstruct the input data from the latent representation of deep networks. Our work aims at learning a privacy-preserving and task-oriented representation to defend against such model inversion attacks. Specifically, we propose an adversarial reconstruction learning framework that prevents the latent representations from being decoded into the original input data. By simulating the expected behavior of the adversary, our framework is realized by minimizing the negative pixel reconstruction loss or the negative feature reconstruction (i.e., perceptual distance) loss. We validate the proposed method on face attribute prediction, showing that it protects visual privacy with only a small decrease in utility performance. In addition, we show the utility-privacy trade-off under different choices of the hyperparameter for the negative perceptual distance loss at training time, allowing service providers to determine the right level of privacy protection for a given utility performance. Moreover, we provide an extensive study with different selections of features, tasks, and data to further analyze their influence on privacy protection.
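
    The framework above trains against a simulated inversion adversary using a negative reconstruction loss. The following is a small illustrative sketch of that idea under assumed architectures and an assumed face-attribute task; it is not the authors' implementation.

```python
# Sketch of adversarial reconstruction learning against model inversion:
# the decoder (simulated attacker) minimizes reconstruction error, while the
# encoder is trained with the *negative* of that error plus the task loss.
# Architectures, sizes, and the attribute task are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_Z, N_ATTR = 3 * 32 * 32, 64, 40  # assumed input, latent, and attribute sizes

encoder = nn.Sequential(nn.Flatten(), nn.Linear(D_IN, D_Z), nn.ReLU())
task_head = nn.Linear(D_Z, N_ATTR)                            # face-attribute prediction (utility)
decoder = nn.Sequential(nn.Linear(D_Z, D_IN), nn.Sigmoid())   # simulated inversion attacker

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-4)
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-4)

def train_step(x, attrs, lam=0.5):
    """x: (B, 3, 32, 32) images in [0, 1]; attrs: (B, N_ATTR) multi-label targets."""
    z = encoder(x)

    # Attacker step: reconstruct the input from a detached representation.
    rec_loss = F.mse_loss(decoder(z.detach()), x.flatten(1))
    opt_dec.zero_grad(); rec_loss.backward(); opt_dec.step()

    # Main step: keep the task accurate while making the representation hard to
    # invert, i.e. minimize the negative pixel reconstruction loss.
    task_loss = F.binary_cross_entropy_with_logits(task_head(z), attrs)
    adv_rec_loss = F.mse_loss(decoder(z), x.flatten(1))
    total = task_loss - lam * adv_rec_loss
    opt_main.zero_grad(); total.backward(); opt_main.step()
    return task_loss.item(), rec_loss.item()
```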

    Secure Image Encoding Transformation Using Sensitive Information Transition

    Get PDF
    Thesis (Master's), Seoul National University Graduate School, College of Engineering, Department of Computer Science and Engineering, August 2022. Advisor: Byoung-Tak Zhang.
    Local Differential Privacy (LDP) is a widely accepted mathematical notion of privacy that guarantees a quantified privacy budget on sensitive data. However, it is difficult to apply LDP algorithms to unstructured data such as images, since the fundamental mechanism underlying many LDP algorithms, Randomized Response (RR), is suited to structured, tabular data. In this paper, we propose a novel task-agnostic LDP framework that preserves the privacy of selected sensitive attributes in an image representation while conserving other visual aspects. Our framework includes an adversarially trained transition model that realizes the RR mechanism, allowing it to be easily utilized in other LDP algorithms. We give a rigorous formulation of the problem and show how our model can prevent attacks from a potential adversary trying to obtain the sensitive information. Our experimental results verify that the proposed framework outperforms baseline models in protecting sensitive attributes with minimal performance loss on arbitrary downstream tasks.
    Contents: 1 Introduction; 2 Related Works (Privacy-Preserving Machine Learning; Differential Privacy); 3 Problem Formulation; 4 Method (Attribute Inference Attack of the Adversary; Differential Distance Learning; Task-Agnostic Attribute Transition Model; GAN Architecture; Distributional Transition Loss; Unsensitive Attribute Preservation; Local Differentially Private Image Representation Transition Framework); 5 Experimental Results (Experimental Setup; Multi-Label Classification; Sensitive Attribute Transition Evaluation; Evaluation on Other Attributes; Qualitative Results; Experiment on CheXpert Dataset); 6 Conclusion; Bibliography; Abstract (in Korean).
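
    The framework builds on Randomized Response (RR), the basic LDP mechanism. As a reference point, here is the textbook binary RR mechanism and its frequency debiasing, showing how the flip probability relates to the privacy budget epsilon; this is not the thesis's learned transition model.

```python
# Binary randomized response, the LDP mechanism the framework above generalizes
# to image representations. Textbook version for a single sensitive bit.
import math
import random

def randomized_response(sensitive_bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.

    Since P[report = b | true = b] / P[report = b | true = 1 - b] = e^eps,
    the output satisfies epsilon-local differential privacy.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return sensitive_bit if random.random() < p_truth else 1 - sensitive_bit

def unbiased_frequency_estimate(reports, epsilon):
    """Debias the observed frequency of 1s across many noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    # observed = f * (2p - 1) + (1 - p)  =>  solve for the true frequency f.
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```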

    Privacy Enhanced Multimodal Neural Representations for Emotion Recognition

    Full text link
    Many mobile applications and virtual conversational agents now aim to recognize and adapt to emotions. To enable this, data are transmitted from users' devices and stored on central servers. Yet these data contain sensitive information that could be used by mobile applications without the user's consent or, maliciously, by an eavesdropping adversary. In this work, we show how multimodal representations trained for a primary task, here emotion recognition, can unintentionally leak demographic information, which could override an opt-out option selected by the user. We analyze how this leakage differs in representations obtained from textual, acoustic, and multimodal data. We use an adversarial learning paradigm to unlearn the private information present in a representation and investigate the effect of varying the strength of the adversarial component on the primary task and on the privacy metric, defined here as the inability of an attacker to predict specific demographic information. We evaluate this paradigm on multiple datasets and show that we can improve the privacy metric while not significantly impacting performance on the primary task. To the best of our knowledge, this is the first work to analyze how the privacy metric differs across modalities and how multiple privacy concerns can be tackled while still maintaining performance on emotion recognition.
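
    The adversarial learning paradigm mentioned above can be realized, for example, with a gradient-reversal layer: a demographic classifier is trained on the shared representation while its reversed gradient pushes the encoder to discard demographic information. The sketch below uses assumed module names, feature sizes, and a binary private attribute; it is only illustrative of the paradigm, not the paper's exact setup.

```python
# Sketch of adversarial "unlearning" of demographic information from an emotion
# representation via a gradient-reversal layer. Sizes and the choice of a binary
# private attribute are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, strength):
        ctx.strength = strength
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the encoder.
        return -ctx.strength * grad_output, None

D_FEAT, D_Z, N_EMO = 300, 128, 4  # assumed multimodal feature, latent, emotion classes

encoder = nn.Sequential(nn.Linear(D_FEAT, D_Z), nn.ReLU())
emotion_head = nn.Linear(D_Z, N_EMO)   # primary task: emotion recognition
demographic_head = nn.Linear(D_Z, 2)   # adversary: predicts the private attribute

params = (list(encoder.parameters()) + list(emotion_head.parameters())
          + list(demographic_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)

def train_step(features, emotion_labels, demographic_labels, strength=1.0):
    z = encoder(features)
    emo_loss = F.cross_entropy(emotion_head(z), emotion_labels)
    # The adversary learns to predict demographics, but the reversed gradient
    # trains the encoder to strip that information from z; `strength` controls
    # the trade-off between privacy and primary-task performance.
    dem_logits = demographic_head(GradReverse.apply(z, strength))
    dem_loss = F.cross_entropy(dem_logits, demographic_labels)
    loss = emo_loss + dem_loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return emo_loss.item(), dem_loss.item()
```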

    Survey: Leakage and Privacy at Inference Time

    Get PDF
    Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance, as commercial and government applications of ML can draw on multiple sources of data, potentially including users' and clients' sensitive data. We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage, which is natural to ML models; potential malevolent leakage, which is caused by privacy attacks; and currently available defence mechanisms. We focus on inference-time leakage, as the most likely scenario for publicly available models. We first discuss what leakage is in the context of different data, tasks, and model architectures. We then propose a taxonomy across involuntary and malevolent leakage and available defences, followed by the currently available assessment metrics and applications. We conclude with outstanding challenges and open questions, outlining some promising directions for future research.
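
    As a concrete illustration of inference-time leakage from a publicly available model, the sketch below shows a simple loss-threshold membership inference attack, one common example of the privacy attacks such a survey covers; the model, data, and threshold are assumptions for the sketch.

```python
# Illustrative inference-time leakage: a loss-threshold membership inference
# attack. A well-generalizing defence should make member and non-member losses
# hard to distinguish. Model, inputs, and threshold are assumed placeholders.
import torch
import torch.nn.functional as F

def membership_scores(model, inputs, labels):
    """Lower per-example loss on a query point suggests it was a training member."""
    model.eval()
    with torch.no_grad():
        losses = F.cross_entropy(model(inputs), labels, reduction="none")
    return -losses  # higher score = more likely to have been in the training set

def predict_membership(model, inputs, labels, threshold=0.0):
    """Flag query points whose score exceeds a calibrated threshold."""
    return membership_scores(model, inputs, labels) > threshold
```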