Facial Data Minimization: Shallow Model as Your Privacy Filter
Face recognition services are used in many fields and bring much convenience
to people. However, once a user's facial data is transmitted to a service
provider, the user loses control of his/her private data. In recent years,
various security and privacy issues have arisen from the leakage of facial
data. Although many privacy-preserving methods have been proposed, they
usually fail when adversaries' strategies or auxiliary data are unknown.
Hence, in this paper, by fully considering the two cases of uploading facial
images and uploading facial features, which are typical in face recognition
service systems, we propose a privacy minimization transformation (PMT)
method. PMT processes the original facial data using the shallow model of an
authorized service to obtain obfuscated data. The obfuscated data not only
maintains satisfactory performance on authorized models while restricting
performance on unauthorized models, but also prevents the original private
data from being recovered by AI methods or by human visual inspection.
Additionally, since a service provider may apply preprocessing operations to
the received data, we also propose an enhanced perturbation method to improve
the robustness of PMT. Moreover, to authorize one facial image to multiple
service models simultaneously, a multiple-restriction mechanism is proposed
to improve the scalability of PMT. Finally, we conduct extensive experiments
to evaluate the effectiveness of the proposed PMT in defending against face
reconstruction, data abuse, and face attribute estimation attacks. The
experimental results demonstrate that PMT performs well in preventing facial
data abuse and privacy leakage while maintaining face recognition accuracy.
Comment: 14 pages, 11 figures
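The core idea of the abstract, running the raw data through the authorized service's shallow layers on the client so that only the obfuscated output ever leaves the device, can be illustrated with a minimal numpy sketch. The dimensions, the random linear layers, the ReLU, and the noise scale below are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a raw facial feature vector is compressed by the
# authorized service's shallow layers before leaving the user's device.
d_raw, d_obf = 128, 32  # dimensionality reduction makes exact inversion ill-posed

W_shallow = rng.standard_normal((d_obf, d_raw)) / np.sqrt(d_raw)  # shallow model (client side)
W_deep = rng.standard_normal((8, d_obf)) / np.sqrt(d_obf)         # deep model (service side)

def privacy_filter(x, noise_scale=0.05):
    """Client side: run raw data through the authorized shallow model and add
    a small perturbation, so only the obfuscated vector is uploaded."""
    z = np.maximum(W_shallow @ x, 0.0)  # shallow forward pass (ReLU)
    return z + noise_scale * rng.standard_normal(z.shape)

x = rng.standard_normal(d_raw)   # stand-in for a user's facial feature vector
z = privacy_filter(x)

# The authorized service completes inference directly on the obfuscated data.
logits_authorized = W_deep @ z

# An adversary's best linear reconstruction attempt: 32 obfuscated values
# cannot pin down 128 raw ones, so the relative error stays large.
x_hat, *_ = np.linalg.lstsq(W_shallow, z, rcond=None)
recon_error = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

Because the shallow map is many-to-one, reconstruction is ill-posed by construction; the paper's enhanced perturbation and multiple-restriction mechanisms add robustness and multi-service authorization on top of this basic scheme.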
Privacy-Protecting Techniques for Behavioral Data: A Survey
Our behavior (the way we talk, walk, or think) is unique and can be used as a biometric trait. It also correlates with sensitive attributes like emotions. Hence, techniques to protect individuals' privacy against unwanted inferences are required. To consolidate knowledge in this area, we systematically review applicable anonymization techniques. We taxonomize and compare existing solutions with regard to privacy goals, conceptual operation, advantages, and limitations. Our analysis shows that some behavioral traits (e.g., voice) have received much attention, while others (e.g., eye gaze, brainwaves) are mostly neglected. We also find that the evaluation methodology of behavioral anonymization techniques can be further improved.
Adventures of Trustworthy Vision-Language Models: A Survey
Recently, transformers have become incredibly popular in computer vision and
vision-language tasks. This notable rise in their usage can be primarily
attributed to the capabilities offered by attention mechanisms and the
outstanding ability of transformers to adapt and apply themselves to a variety
of tasks and domains. Their versatility and state-of-the-art performance have
established them as indispensable tools for a wide array of applications.
However, in the constantly changing landscape of machine learning, the
assurance of the trustworthiness of transformers holds utmost importance. This
paper conducts a thorough examination of vision-language transformers,
employing three fundamental principles of responsible AI: Bias, Robustness, and
Interpretability. The primary objective of this paper is to delve into the
intricacies and complexities associated with the practical use of transformers,
with the overarching goal of advancing our comprehension of how to enhance
their reliability and accountability.
Comment: Accepted in AAAI 202