CryptoMask : Privacy-preserving Face Recognition
Face recognition is a widely-used technique for identification or
verification, where a verifier checks whether a face image matches anyone
stored in a database. However, in scenarios where the database is held by a
third party, such as a cloud server, both parties are concerned about data
privacy. To address this concern, we propose CryptoMask, a privacy-preserving
face recognition system that employs homomorphic encryption (HE) and secure
multi-party computation (MPC). We design a new encoding strategy that leverages
HE properties to reduce communication costs and enable efficient similarity
checks between face images, without expensive homomorphic rotation.
Additionally, CryptoMask leaks less information than existing state-of-the-art
approaches. CryptoMask only reveals whether there is an image matching the
query or not, whereas existing approaches additionally leak sensitive
intermediate distance information. We conduct extensive experiments that
demonstrate CryptoMask's superior performance in terms of computation and
communication. For a database with 100 million 512-dimensional face vectors,
CryptoMask offers speed-ups in both computation and
communication. Comment: 18 pages, 3 figures, accepted by ICICS202
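The abstract does not spell out the encoding, but the rotation-free batching idea it hints at can be illustrated in plain NumPy: pack several database vectors side by side into one long slot vector (as HE SIMD batching would), repeat the query to match, and obtain every inner product from a single slot-wise multiply followed by per-block sums. The function name and layout below are illustrative assumptions only; no actual encryption or MPC is involved.

```python
import numpy as np

def batched_inner_products(query, db_batch):
    # Pack a batch of database vectors side by side (SIMD-slot style)
    # so that one slot-wise multiply covers every comparison at once;
    # the per-block sums stand in for the paper's rotation-free
    # aggregation (illustrative sketch, not real HE).
    d = query.shape[0]
    packed = np.concatenate(db_batch)         # shape (k*d,)
    tiled = np.tile(query, len(db_batch))     # repeat query once per block
    products = packed * tiled                 # single slot-wise multiply
    return products.reshape(len(db_batch), d).sum(axis=1)
```

In a real HE setting the multiply would act on ciphertext slots, so packing k vectors per ciphertext divides both the ciphertext count and the communication roughly by k.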
Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain
Face recognition technology has been used in many fields due to its high
recognition accuracy, including the face unlocking of mobile devices, community
access control systems, and city surveillance. As the current high accuracy is
guaranteed by very deep network structures, facial images often need to be
transmitted to third-party servers with high computational power for inference.
However, facial images visually reveal the user's identity information. In this
process, both untrusted service providers and malicious users can significantly
increase the risk of a personal privacy breach. Current privacy-preserving
approaches to face recognition are often accompanied by many side effects, such
as a significant increase in inference time or a noticeable decrease in
recognition accuracy. This paper proposes a privacy-preserving face recognition
method using differential privacy in the frequency domain. Due to the
utilization of differential privacy, it offers a guarantee of privacy in
theory. Meanwhile, the loss of accuracy is very slight. This method first
converts the original image to the frequency domain and removes the direct
component termed DC. Then a privacy budget allocation method can be learned
based on the loss of the back-end face recognition network within the
differential privacy framework. Finally, it adds the corresponding noise to the
frequency domain features. Extensive experiments show that our method performs
very well on several classical face recognition test sets. Comment: ECCV 2022; Code is available at
https://github.com/Tencent/TFace/tree/master/recognition/tasks/dctd
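The pipeline described above (transform to the frequency domain, remove the DC component, add calibrated noise) can be sketched as follows. The paper learns the per-coefficient privacy budgets from the recognition loss; here `budgets` is simply an input array, and the Laplace mechanism and `sensitivity` parameter are assumptions for the sketch.

```python
import numpy as np
from scipy.fft import dctn

def dp_frequency_features(image, budgets, sensitivity=1.0, rng=None):
    """Toy sketch: DCT -> drop DC component -> add per-coefficient
    Laplace noise. `budgets` is a hypothetical per-coefficient epsilon
    array (the paper learns it; here it is just given)."""
    rng = np.random.default_rng(rng)
    coeffs = dctn(image, norm="ortho")
    coeffs[0, 0] = 0.0                    # remove the direct (DC) component
    scale = sensitivity / budgets         # Laplace scale b = sensitivity / eps
    return coeffs + rng.laplace(0.0, scale, size=coeffs.shape)
```

Larger budgets mean less noise (weaker privacy, higher accuracy); the learnable allocation spends more budget on coefficients the recognition network relies on.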
Retaining Expression on De-identified Faces
© Springer International Publishing AG 2017. The extensive use of video surveillance, along with advances in face recognition, has ignited concerns about the privacy of the people identifiable in the recorded documents. A face de-identification algorithm, named k-Same, has been proposed by prior research and guarantees to thwart face recognition software. However, like many previous attempts at face de-identification, k-Same fails to preserve the utility, such as the gender and expression, of the original data. To overcome this, a new algorithm is proposed here to preserve data utility as well as protect privacy. In terms of utility preservation, this new algorithm is capable of preserving not only the category of the facial expression (e.g., happy or sad) but also the intensity of the expression. This new algorithm for face de-identification holds great potential, especially for real-world images and videos, as each facial expression in real life is a continuous motion consisting of images of the same expression at various degrees of intensity. Peer reviewed.
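The core k-Same idea the abstract builds on can be sketched in a few lines: each face (here a feature vector) is replaced by the average of its k nearest faces, so a recogniser cannot distinguish among at least k candidates. This is a simplification, as the original k-Same partitions the set and removes used faces, and the paper's extension averages within the same expression class to preserve expression category and intensity; both refinements are omitted here.

```python
import numpy as np

def k_same(faces, k):
    """Minimal k-Same sketch: replace each face vector with the mean of
    its k nearest neighbours (including itself). Simplified relative to
    the original algorithm, which partitions the set into disjoint
    groups of size k."""
    faces = np.asarray(faces, dtype=float)
    out = np.empty_like(faces)
    for i, f in enumerate(faces):
        dists = np.linalg.norm(faces - f, axis=1)
        nearest = np.argsort(dists)[:k]     # k closest faces, self included
        out[i] = faces[nearest].mean(axis=0)
    return out
```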
Beyond Identity: What Information Is Stored in Biometric Face Templates?
Deeply-learned face representations enable the success of current face
recognition systems. Despite the ability of these representations to encode the
identity of an individual, recent works have shown that more information is
stored within, such as demographics, image characteristics, and social traits.
This threatens the user's privacy, since for many applications these templates
are expected to be solely used for recognition purposes. Knowing the encoded
information in face templates helps to develop bias-mitigating and
privacy-preserving face recognition technologies. This work aims to support the
development of these two branches by analysing face templates regarding 113
attributes. Experiments were conducted on two publicly available face
embeddings. For evaluating the predictability of the attributes, we trained a
massive attribute classifier that is additionally able to accurately state its
prediction confidence. This allows us to make more sophisticated statements
about the attribute predictability. The results demonstrate that up to 74
attributes can be accurately predicted from face templates. Especially
non-permanent attributes, such as age, hairstyles, hair colors, beards, and
various accessories, were found to be easily predictable. Since face recognition
systems aim to be robust against these variations, future research might build
on this work to develop more understandable privacy preserving solutions and
build robust and fair face templates. Comment: To appear in IJCB 202
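The attribute-probing idea can be illustrated with a tiny logistic-regression probe, a stand-in for the paper's massive attribute classifier: it predicts one binary attribute from face templates and its output probability doubles as a crude prediction confidence. The training loop and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def train_attribute_probe(embeddings, labels, lr=0.1, epochs=500):
    """Tiny logistic-regression probe: learns to predict one binary
    attribute (e.g., 'has beard') from face templates via plain
    gradient descent, returning probabilities as confidences."""
    X = np.asarray(embeddings, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        grad = p - y                              # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    def predict(X_new):
        X_new = np.asarray(X_new, dtype=float)
        return 1.0 / (1.0 + np.exp(-(X_new @ w + b)))
    return predict
```

If such a probe predicts an attribute well above chance, that attribute is demonstrably encoded in the template, which is exactly the privacy leak the paper quantifies across 113 attributes.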
Adversarial Privacy-preserving Filter
While widely adopted in practical applications, face recognition has been
critically discussed regarding the malicious use of face images and the
potential privacy problems, e.g., deceiving payment systems and causing personal
sabotage. Online photo sharing services unintentionally act as the main
repository for malicious crawlers and face recognition applications. This work
aims to develop a privacy-preserving solution, called Adversarial
Privacy-preserving Filter (APF), to protect the online shared face images from
being maliciously used. We propose an end-cloud collaborative adversarial attack
solution to satisfy the requirements of privacy, utility, and non-accessibility.
Specifically, the solution consists of three modules: (1) image-specific
gradient generation, to extract image-specific gradient in the user end with a
compressed probe model; (2) adversarial gradient transfer, to fine-tune the
image-specific gradient in the server cloud; and (3) universal adversarial
perturbation enhancement, to append image-independent perturbation to derive
the final adversarial noise. Extensive experiments on three datasets validate
the effectiveness and efficiency of the proposed solution. A prototype
application is also released for further evaluation. We hope the end-cloud
collaborated attack framework can shed light on addressing privacy-preserving
issues in online multimedia sharing from the user side. Comment: Accepted by ACM Multimedia 202
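The three-module pipeline described above can be sketched as a composition of stages. The callables and the FGSM-style sign step below are hypothetical stand-ins for the paper's learned components (compressed probe model, cloud-side refinement network, and pre-computed universal perturbation), assuming images normalized to [0, 1].

```python
import numpy as np

def apf_perturb(image, probe_grad, server_refine, universal, eps=8 / 255):
    """Hedged sketch of the three APF stages:
    (1) image-specific gradient from a compressed probe model (user end),
    (2) server-side fine-tuning of that gradient (cloud),
    (3) a universal, image-independent perturbation added on top.
    All three components are illustrative stand-ins."""
    g = probe_grad(image)                 # stage 1: user end
    g = server_refine(g)                  # stage 2: cloud fine-tuning
    noise = eps * np.sign(g) + universal  # stage 3: universal enhancement
    return np.clip(image + noise, 0.0, 1.0)
```

The split matters for non-accessibility: the raw image never leaves the user end, only its gradient does.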
Privacy-Preserving Face Recognition Using Random Frequency Components
The ubiquitous use of face recognition has sparked increasing privacy
concerns, as unauthorized access to sensitive face images could compromise the
information of individuals. This paper presents an in-depth study of protecting
face images' visual information and of impeding their recovery.
Drawing on the perceptual disparity between humans and models, we propose to
conceal visual information by pruning human-perceivable low-frequency
components. For impeding recovery, we first elucidate the seeming paradox
between reducing model-exploitable information and retaining high recognition
accuracy. Based on recent theoretical insights and our observation on model
attention, we propose a solution to the dilemma, by advocating for the training
and inference of recognition models on randomly selected frequency components.
We distill our findings into a novel privacy-preserving face recognition
method, PartialFace. Extensive experiments demonstrate that PartialFace
effectively balances privacy protection goals and recognition accuracy. Code is
available at: https://github.com/Tencent/TFace. Comment: ICCV 202
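The preprocessing the abstract describes (prune human-perceivable low-frequency components, then train and infer on a random subset of the rest) can be sketched with an 8x8 block DCT. The block size, the pruning cut-off, and the use of raster order as a crude low-frequency proxy are all assumptions of this sketch, not the paper's exact recipe.

```python
import numpy as np
from scipy.fft import dctn

def partialface_channels(image, n_keep, rng=None):
    """Sketch of PartialFace-style preprocessing: 8x8 block DCT turns a
    (grayscale) image into 64 frequency channels; the first 16 channels
    in raster order serve as a crude low-frequency proxy and are pruned,
    then a random subset of the remaining channels is kept."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    blocks = image.reshape(h // 8, 8, w // 8, 8).transpose(0, 2, 1, 3)
    coeffs = dctn(blocks, axes=(2, 3), norm="ortho")
    channels = coeffs.reshape(h // 8, w // 8, 64).transpose(2, 0, 1)
    high = channels[16:]                  # prune low-frequency channels
    keep = rng.choice(len(high), size=n_keep, replace=False)
    return high[np.sort(keep)]
```

Randomizing the kept subset at every use is what impedes recovery: an attacker never knows which model-exploitable components a given feature actually contains.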
VGAN-Based Image Representation Learning for Privacy-Preserving Facial Expression Recognition
Reliable facial expression recognition plays a critical role in human-machine
interactions. However, most of the facial expression analysis methodologies
proposed to date pay little or no attention to the protection of a user's
privacy. In this paper, we propose a Privacy-Preserving Representation-Learning
Variational Generative Adversarial Network (PPRL-VGAN) to learn an image
representation that is explicitly disentangled from the identity information.
At the same time, this representation is discriminative from the standpoint of
facial expression recognition and generative as it allows expression-equivalent
face image synthesis. We evaluate the proposed model on two public datasets
under various threat scenarios. Quantitative and qualitative results
demonstrate that our approach strikes a balance between the preservation of
privacy and data utility. We further demonstrate that our model can be
effectively applied to other tasks such as expression morphing and image
completion.
DuetFace: Collaborative Privacy-Preserving Face Recognition via Channel Splitting in the Frequency Domain
With the wide application of face recognition systems, there is rising
concern that original face images could be exposed to malicious intents and
consequently cause personal privacy breaches. This paper presents DuetFace, a
novel privacy-preserving face recognition method that employs collaborative
inference in the frequency domain. Starting from a counterintuitive discovery
that face recognition can achieve surprisingly good performance with only
visually indistinguishable high-frequency channels, this method designs a
credible split of frequency channels by their cruciality for visualization and
operates the server-side model on non-crucial channels. However, the model
degrades in its attention to facial features due to the missing visual
information. To compensate, the method introduces a plug-in interactive block
to allow attention transfer from the client-side by producing a feature mask.
The mask is further refined by deriving and overlaying a facial region of
interest (ROI). Extensive experiments on multiple datasets validate the
effectiveness of the proposed method in protecting face images from undesired
visual inspection, reconstruction, and identification while maintaining high
task availability and performance. Results show that the proposed method
achieves a comparable recognition accuracy and computation cost to the
unprotected ArcFace and outperforms the state-of-the-art privacy-preserving
methods. The source code is available at
https://github.com/Tencent/TFace/tree/master/recognition/tasks/duetface. Comment: Accepted to ACM Multimedia 202
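The channel split at the heart of DuetFace can be sketched as follows. Here `crucial_idx` is a hypothetical stand-in for the paper's cruciality ranking of frequency channels, and the interactive attention-transfer block is omitted.

```python
import numpy as np

def split_channels(channels, crucial_idx):
    """DuetFace-style split sketch: channels judged crucial for
    visualization stay on the client; the visually indistinguishable
    remainder is sent to the server-side model."""
    mask = np.zeros(len(channels), dtype=bool)
    mask[crucial_idx] = True
    client_side = channels[mask]    # never leaves the user device
    server_side = channels[~mask]   # safe to expose to the server
    return client_side, server_side
```

The server thus computes on channels that reveal little to visual inspection, while the client compensates for the server model's degraded attention via the feature mask described above.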