
    Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework

    As the modern world becomes increasingly digitized and interconnected, distributed signal processing has proven effective in handling the resulting large volumes of data. However, a major challenge limiting the broad adoption of distributed signal processing techniques is preserving privacy when handling sensitive data. To address this issue, we propose a novel yet general subspace perturbation method for privacy-preserving distributed optimization, which allows each node to obtain the desired solution while protecting its private data. In particular, we show that the dual variables introduced by each distributed optimizer do not converge in a certain subspace determined by the graph topology. The optimization variable, being orthogonal to this non-convergent subspace, is nevertheless guaranteed to converge to the desired solution. We therefore propose to insert noise into the non-convergent subspace through the dual variable, so that the private data are protected while the accuracy of the desired solution is completely unaffected. Moreover, the proposed method is shown to be secure under two widely used adversary models: passive and eavesdropping. Furthermore, we apply the method to several distributed optimizers, such as ADMM and PDMM, to demonstrate its general applicability. Finally, we evaluate its performance across a set of applications. Numerical tests indicate that the proposed method outperforms existing approaches in terms of accuracy, privacy level, communication cost, and convergence rate.
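The core idea of the abstract, that noise living in a subspace the primal update never sees can hide private data without affecting the solution, can be illustrated numerically. The following is a loose sketch, not the paper's exact construction: for a graph incidence matrix B, an ADMM/PDMM-style primal step sees the dual variable only through B.T @ lam, so any noise component in null(B.T) is invisible to the optimization. The ring graph and projector here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-node ring graph; rows of B are signed edge incidences.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
B = np.zeros((len(edges), n))
for k, (i, j) in enumerate(edges):
    B[k, i], B[k, j] = 1.0, -1.0

# The primal update of an ADMM/PDMM-style optimizer sees the dual
# variable only through B.T @ lam, so any component of the dual in
# null(B.T) never influences the solution -- a sketch of the subspace
# in which privacy-preserving noise can be hidden.
P_ran = B @ np.linalg.pinv(B)        # projector onto ran(B)
noise = rng.normal(size=len(edges))
hidden = noise - P_ran @ noise       # component lying in null(B.T)

print(np.allclose(B.T @ hidden, 0))  # the primal step cannot see it
```

Because a connected 4-node ring has incidence rank 3, null(B.T) is one-dimensional here, leaving room to embed noise that perturbs the dual iterates but not the converged solution.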

    Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain

    Face recognition technology is used in many fields owing to its high recognition accuracy, including face unlocking on mobile devices, community access control systems, and city surveillance. Because the current high accuracy is achieved by very deep network structures, facial images often need to be transmitted to third-party servers with high computational power for inference. However, facial images visually reveal the user's identity, so both untrusted service providers and malicious users can significantly increase the risk of a personal privacy breach. Current privacy-preserving approaches to face recognition often come with side effects, such as a significant increase in inference time or a noticeable decrease in recognition accuracy. This paper proposes a privacy-preserving face recognition method using differential privacy in the frequency domain. Because it relies on differential privacy, it offers a theoretical privacy guarantee, while the loss of accuracy is very slight. The method first converts the original image to the frequency domain and removes the direct component (DC). A privacy budget allocation is then learned within the differential privacy framework, based on the loss of the back-end face recognition network. Finally, the corresponding noise is added to the frequency-domain features. Extensive experiments show that our method performs very well on several classical face recognition test sets.
    Comment: ECCV 2022; code is available at https://github.com/Tencent/TFace/tree/master/recognition/tasks/dctd
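The pipeline described above (DCT, drop the DC coefficient, add noise scaled by per-coefficient privacy budgets) can be sketched in a few lines. This is an illustrative approximation, not the paper's trained budget allocation: the DCT matrix, the fixed sensitivity bound, and the uniform `eps_per_coef` array are all assumptions standing in for the learned components.

```python
import numpy as np

rng = np.random.default_rng(1)

def dct_matrix(N):
    # Orthonormal type-II DCT matrix ('ortho' normalization).
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)
    return C

def dp_frequency_features(img, eps_per_coef):
    """Sketch: 2-D DCT, drop the DC term, add Laplace noise per coefficient.

    `eps_per_coef` plays the role of the learned per-frequency privacy
    budgets; here it is a fixed array rather than a trained allocation.
    """
    C = dct_matrix(img.shape[0])
    F = C @ img @ C.T            # 2-D DCT of the image block
    F[0, 0] = 0.0                # remove the direct component (DC)
    sensitivity = 1.0            # assumed bound on per-coefficient change
    noise = rng.laplace(scale=sensitivity / eps_per_coef, size=F.shape)
    return F + noise

block = rng.random((8, 8))           # stand-in for an 8x8 image block
budgets = np.full((8, 8), 0.5)       # uniform budgets, for illustration only
private = dp_frequency_features(block, budgets)
print(private.shape)
```

In the paper's setting the budget array would be learned against the recognition loss, concentrating noise on frequencies the face network depends on least.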

    Preserving Privacy In Image Database Through Bit-planes Obfuscation

    The recent surge in computer vision applications has raised visual privacy concerns among people who use, or are exposed to, an underlying surveillance system. Image obfuscation offers a strong way to preserve privacy while maintaining the usability of images without revealing visual private information. However, prior solutions are susceptible to reconstruction attacks or produce non-trainable images despite their obfuscation. This paper proposes a novel bit-planes-based image obfuscation scheme, called Bimof, to protect the visual privacy of users in images fed into a recognition-based system. By combining a chaotic system for non-invertible noise with matrix decomposition, Bimof offers strong security and usability for building a secure image database: it is hard for an adversary, even a malicious server, to recover the original image. We conduct experiments on two standard activity recognition datasets, UCF101 and HMDB51, to validate the effectiveness and usability of our scheme, and we provide a rigorous quantitative security analysis through pixel frequency attacks and differential analysis to support our findings.
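The two ingredients named in the abstract, bit-plane decomposition and chaotic noise, can be combined in a minimal sketch. This is not the paper's Bimof scheme (which also uses matrix decomposition); the choice of kept planes, the logistic-map parameters, and the XOR scrambling are all illustrative assumptions.

```python
import numpy as np

def logistic_noise(shape, x0=0.3, r=3.99):
    # Chaotic logistic-map sequence used as a keyed noise source;
    # (x0, r) act as the secret key in this sketch.
    out = np.empty(int(np.prod(shape)))
    x = x0
    for i in range(out.size):
        x = r * x * (1 - x)
        out[i] = x
    return out.reshape(shape)

def obfuscate_bitplanes(img, keep_planes=(5, 6, 7)):
    """Sketch of bit-plane obfuscation (not the exact Bimof scheme):
    keep only high-order bit planes, which carry coarse structure useful
    for recognition, scramble them with chaotic noise, and discard the
    remaining planes irreversibly.
    """
    planes = [(img >> b) & 1 for b in range(8)]
    noise = (logistic_noise(img.shape) > 0.5).astype(img.dtype)
    out = np.zeros_like(img)
    for b in keep_planes:
        out |= (planes[b] ^ noise) << b
    return out

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
obf = obfuscate_bitplanes(img)
print(obf.shape, obf.dtype)
```

Dropping the low-order planes makes exact reconstruction impossible even with the key, which is one way to read the abstract's non-invertibility claim.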