The ubiquitous use of face recognition has sparked increasing privacy
concerns, as unauthorized access to sensitive face images could compromise the
information of individuals. This paper presents an in-depth study of face
image privacy protection, addressing both the concealment of visual
information and protection against recovery.
Drawing on the perceptual disparity between humans and models, we propose to
conceal visual information by pruning human-perceivable low-frequency
components. To impede recovery, we first elucidate the apparent conflict
between reducing model-exploitable information and retaining high recognition
accuracy. Based on recent theoretical insights and our observations of model
attention, we resolve this dilemma by advocating the training and inference of
recognition models on randomly selected frequency components.
We distill our findings into a novel privacy-preserving face recognition
method, PartialFace. Extensive experiments demonstrate that PartialFace
effectively balances privacy protection goals and recognition accuracy. Code is
available at: https://github.com/Tencent/TFace.
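
As a rough illustration of the two ideas described above (concealing visual information by pruning low-frequency components, and training/inference on randomly selected frequency components), the sketch below assumes a block-wise DCT decomposition of the face image. The block size, number of pruned channels, channel ordering, and function names are illustrative assumptions, not the official PartialFace implementation; see the repository above for that.

```python
# Minimal sketch, assuming a block-wise DCT frequency decomposition.
# All parameters (block=8, num_pruned=3, num_keep=24) are illustrative.
import numpy as np
from scipy.fft import dctn


def to_frequency_channels(img, block=8):
    """Split a (H, W) grayscale image into block*block DCT channels.

    Returns an array of shape (block*block, H//block, W//block).
    Channels are in raster order, so channel 0 holds the DC (lowest)
    frequency of every block; the paper may order channels differently.
    """
    h, w = img.shape
    h, w = h - h % block, w - w % block
    img = img[:h, :w].astype(np.float64)
    blocks = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = dctn(blocks, axes=(-2, -1), norm="ortho")  # per-block 2D DCT
    return coeffs.reshape(h // block, w // block, block * block).transpose(2, 0, 1)


def prune_low_frequencies(channels, num_pruned=3):
    """Discard the lowest-frequency channels, which carry most of the
    human-perceivable visual information."""
    return channels[num_pruned:]


def sample_random_channels(channels, num_keep=24, rng=None):
    """Randomly select a subset of the remaining frequency channels, so the
    recognition model sees a different random subset each time."""
    rng = np.random.default_rng() if rng is None else rng
    idx = np.sort(rng.choice(channels.shape[0], size=num_keep, replace=False))
    return channels[idx], idx


# Usage: conceal visual information, then feed a random frequency subset
# to the recognition model (stand-in data; not a real aligned face).
face = np.random.rand(112, 112)
channels = to_frequency_channels(face)        # shape (64, 14, 14)
private = prune_low_frequencies(channels)     # low-frequency content removed
model_input, kept = sample_random_channels(private)
```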