Privacy-Preserving Face Recognition Using Random Frequency Components
The ubiquitous use of face recognition has sparked increasing privacy
concerns, as unauthorized access to sensitive face images could compromise the
information of individuals. This paper presents an in-depth study of
protecting the visual information of face images and impeding their recovery.
Drawing on the perceptual disparity between humans and models, we propose to
conceal visual information by pruning human-perceivable low-frequency
components. For impeding recovery, we first elucidate the seeming paradox
between reducing model-exploitable information and retaining high recognition
accuracy. Based on recent theoretical insights and our observations of model
attention, we propose a solution to this dilemma: training
and inference of recognition models on randomly selected frequency components.
We distill our findings into a novel privacy-preserving face recognition
method, PartialFace. Extensive experiments demonstrate that PartialFace
effectively balances privacy protection goals and recognition accuracy. Code is
available at: https://github.com/Tencent/TFace. Comment: ICCV 202
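A minimal sketch of the low-frequency pruning idea, using an orthonormal 2-D DCT on a toy 8x8 image. The block size, cutoff, and function names here are illustrative assumptions, not the PartialFace implementation (which additionally trains and runs recognition on randomly selected frequency components):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis: rows index frequency, columns index space.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def prune_low_frequencies(img, cutoff):
    """Zero out DCT coefficients whose frequency index sum is below
    `cutoff` -- the human-perceivable low-frequency components."""
    n = img.shape[0]
    C = dct_matrix(n)
    coeffs = C @ img @ C.T                 # 2-D DCT
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    coeffs[(u + v) < cutoff] = 0.0         # discard the low-frequency band
    return C.T @ coeffs @ C                # inverse DCT back to pixel space

rng = np.random.default_rng(0)
face = rng.random((8, 8))                  # toy stand-in for a face crop
concealed = prune_low_frequencies(face, cutoff=3)
```

With `cutoff = 0` the transform round-trips exactly; any positive cutoff discards the DC term and nearby low frequencies, which carry most of the structure humans perceive.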
Template Protection For 3D Face Recognition
The human face is one of the most important biometric modalities for automatic authentication. Three-dimensional face recognition exploits facial surface information. In comparison to illumination-based 2D face recognition, it offers good robustness and high resistance to spoofing, so it can be used in high-security areas. Nevertheless, as in other common biometric systems, potential risks of identity theft, cross-matching and exposure of private information threaten the security of the authentication system as well as the user's privacy. As a crucial complement to biometrics, template protection techniques can prevent security leakage and protect privacy. In this chapter, we show security leakages in common biometric systems and give a detailed introduction to template protection techniques. Then the latest results of template protection techniques in 3D face recognition systems are presented. The recognition performance as well as the security gains are analyzed.
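To make the template-protection idea concrete, here is a toy sketch of one well-known technique in this family, a cancelable template built from a user-keyed random projection with binarization; this is not one of the specific schemes surveyed in the chapter, and all names and sizes are assumptions:

```python
import numpy as np

def protect_template(feature, key_seed, dim=32):
    """Cancelable biometric template: project the face feature with a
    user-specific random matrix (derived from key_seed) and binarize.
    Revocation = issuing a new key; databases using different keys hold
    uncorrelated templates, which blocks cross-matching."""
    rng = np.random.default_rng(key_seed)
    P = rng.standard_normal((dim, feature.size))
    return (P @ feature > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.sum(a != b))

rng = np.random.default_rng(42)
feat = rng.standard_normal(128)                  # stand-in 3D face feature
noisy = feat + 0.05 * rng.standard_normal(128)   # same user, a fresh scan

t_enrol = protect_template(feat, key_seed=7)
t_query = protect_template(noisy, key_seed=7)    # small Hamming distance
t_other = protect_template(feat, key_seed=8)     # new key: decorrelated
```

Matching is done entirely in the protected domain via Hamming distance, so the raw surface feature never needs to be stored.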
Privacy Protection Performance of De-identified Face Images with and without Background
Li Meng, 'Privacy Protection Performance of De-identified Face Images with and without Background', paper presented at the 39th International Information and Communication Technology (ICT) Convention, Grand Hotel Adriatic Congress Centre and Admiral Hotel, Opatija, Croatia, May 30 - June 3, 2016.
This paper presents an approach to blending a de-identified face region with its original background, for the purpose of completing the process of face de-identification. The re-identification risk of the de-identified FERET face images has been evaluated for the k-Diff-furthest face de-identification method, using several face recognition benchmark methods including PCA, LBP, HOG and LPQ. The experimental results show that the k-Diff-furthest method delivers high privacy protection within the face region, while blending the de-identified face region with its original background may significantly increase the re-identification risk, indicating that de-identification must also be applied to image areas beyond the face region.
VGAN-Based Image Representation Learning for Privacy-Preserving Facial Expression Recognition
Reliable facial expression recognition plays a critical role in human-machine
interactions. However, most of the facial expression analysis methodologies
proposed to date pay little or no attention to the protection of a user's
privacy. In this paper, we propose a Privacy-Preserving Representation-Learning
Variational Generative Adversarial Network (PPRL-VGAN) to learn an image
representation that is explicitly disentangled from the identity information.
At the same time, this representation is discriminative from the standpoint of
facial expression recognition and generative as it allows expression-equivalent
face image synthesis. We evaluate the proposed model on two public datasets
under various threat scenarios. Quantitative and qualitative results
demonstrate that our approach strikes a balance between the preservation of
privacy and data utility. We further demonstrate that our model can be
effectively applied to other tasks such as expression morphing and image
completion.
A Study of Face Obfuscation in ImageNet
Face obfuscation (blurring, mosaicing, etc.) has been shown to be effective
for privacy protection; nevertheless, object recognition research typically
assumes access to complete, unobfuscated images. In this paper, we explore the
effects of face obfuscation on the popular ImageNet challenge visual
recognition benchmark. Most categories in the ImageNet challenge are not people
categories; however, many incidental people appear in the images, and their
privacy is a concern. We first annotate faces in the dataset. Then we
demonstrate that face obfuscation has minimal impact on the accuracy of
recognition models. Concretely, we benchmark multiple deep neural networks on
obfuscated images and observe that the overall recognition accuracy drops only
slightly (<= 1.0%). Further, we experiment with transfer learning to 4
downstream tasks (object recognition, scene recognition, face attribute
classification, and object detection) and show that features learned on
obfuscated images are equally transferable. Our work demonstrates the
feasibility of privacy-aware visual recognition, improves the highly-used
ImageNet challenge benchmark, and suggests an important path for future visual
datasets. Data and code are available at
https://github.com/princetonvisualai/imagenet-face-obfuscation. Comment: Accepted to ICML 202
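Mosaicing, one of the obfuscation forms studied here, can be sketched on an annotated face box as follows; the box convention and block size are assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def mosaic_face(img, box, block=8):
    """Obfuscate the face region by mosaicing: every block x block tile
    inside box = (y0, y1, x0, x1) is replaced by its mean intensity."""
    y0, y1, x0, x1 = box
    out = img.astype(float).copy()
    for y in range(y0, y1, block):
        for x in range(x0, x1, block):
            tile = out[y:min(y + block, y1), x:min(x + block, x1)]
            tile[...] = tile.mean()      # flatten the tile to its average
    return out

rng = np.random.default_rng(0)
image = rng.random((64, 64))             # toy grayscale image
obfuscated = mosaic_face(image, box=(16, 48, 16, 48))
```

The background is left untouched, mirroring the paper's setting where only incidental faces are obfuscated while the rest of the image stays available for recognition.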
CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search
The success of deep learning based face recognition systems has given rise to
serious privacy concerns due to their ability to enable unauthorized tracking
of users in the digital world. Existing methods for enhancing privacy fail to
generate naturalistic images that can protect facial privacy without
compromising user experience. We propose a novel two-step approach for facial
privacy protection that relies on finding adversarial latent codes in the
low-dimensional manifold of a pretrained generative model. The first step
inverts the given face image into the latent space and finetunes the generative
model to achieve an accurate reconstruction of the given image from its latent
code. This step produces a good initialization, aiding the generation of
high-quality faces that resemble the given identity. Subsequently, user-defined
makeup text prompts and identity-preserving regularization are used to guide
the search for adversarial codes in the latent space. Extensive experiments
demonstrate that faces generated by our approach have stronger black-box
transferability with an absolute gain of 12.06% over the state-of-the-art
facial privacy protection approach under the face verification task. Finally,
we demonstrate the effectiveness of the proposed approach for commercial face
recognition systems. Our code is available at
https://github.com/fahadshamshad/Clip2Protect. Comment: Accepted in CVPR 2023. Project page:
https://fahadshamshad.github.io/Clip2Protect
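The latent-search step can be caricatured with a linear toy model: gradient-descend the cosine similarity to the original identity embedding while a regularizer keeps the code near the inverted latent. The "embedder" W, the weights, and the step size are all illustrative assumptions; the actual method searches a pretrained GAN's latent space under text-guided makeup constraints:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))   # toy stand-in for generator + recognizer

def embed(z):
    e = W @ z
    return e / np.linalg.norm(e)    # unit-norm identity embedding

z0 = rng.standard_normal(16)        # "inverted" latent code of the face
e0 = embed(z0)                      # original identity embedding
lam, lr = 0.01, 0.05                # identity-preserving weight, step size

def loss(z):
    # cosine similarity to the true identity + stay-close regularizer
    return float(embed(z) @ e0 + lam * np.sum((z - z0) ** 2))

z = z0 + 0.1 * rng.standard_normal(16)   # perturbed starting point
loss_before = loss(z)
for _ in range(200):
    e = W @ z
    n = np.linalg.norm(e)
    # gradient of cos<e/||e||, e0> w.r.t. z, plus the regularizer gradient
    g = W.T @ (e0 / n - (e @ e0) * e / n**3) + 2 * lam * (z - z0)
    z -= lr * g                      # descend: push the code off-identity
loss_after = loss(z)
```

Minimizing this objective lowers the match score against the user's enrolled identity while bounding how far the adversarial code drifts from the faithful reconstruction.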