Disguise without Disruption: Utility-Preserving Face De-Identification
With the rise of cameras and smart sensors, humanity generates an exponentially
growing amount of data. This valuable information, including underrepresented cases
like AI in medical settings, can fuel new deep-learning tools. However, data
scientists must prioritize ensuring privacy for individuals in these untapped
datasets, especially for images or videos with faces, which are prime targets
for identification methods. Proposed solutions to de-identify such images often
compromise non-identifying facial attributes relevant to downstream tasks. In
this paper, we introduce Disguise, a novel algorithm that seamlessly
de-identifies facial images while ensuring the usability of the modified data.
Unlike previous approaches, our solution is firmly grounded in the domains of
differential privacy and ensemble-learning research. Our method involves
extracting and substituting depicted identities with synthetic ones, generated
using variational mechanisms to maximize obfuscation and non-invertibility.
Additionally, we leverage supervision from a mixture-of-experts to disentangle
and preserve other utility attributes. We extensively evaluate our method using
multiple datasets, demonstrating a higher de-identification rate and superior
consistency compared to prior approaches in various downstream tasks.
Comment: Accepted at AAAI 2024. Paper + supplementary material.
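To make the pipeline concrete, here is a minimal sketch of the identity-substitution step, assuming a Laplace-mechanism perturbation of an identity embedding before re-synthesis; identity_encoder and face_generator are hypothetical stand-ins, not the paper's actual components.

import torch

# Hedged sketch, not the authors' code: perturb an identity embedding with
# calibrated Laplace noise (a standard differential-privacy mechanism), then
# re-render the face. identity_encoder and face_generator are hypothetical.
def de_identify(image, identity_encoder, face_generator, epsilon=1.0, sensitivity=1.0):
    z_id = identity_encoder(image)                    # identity embedding of the input
    scale = sensitivity / epsilon                     # Laplace-mechanism noise scale
    noise = torch.distributions.Laplace(0.0, scale).sample(z_id.shape)
    z_anon = z_id + noise                             # obfuscated, hard-to-invert identity
    return face_generator(image, z_anon)              # re-render, keeping utility attributes

The details of the actual method differ; the point is that the substituted identity is synthetic and the mapping is randomized, which hinders inversion back to the original identity.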
Learning to Attack, Protect, and Enhance Deep Networks
Artificial intelligence (AI) systems have demonstrated remarkable capabilities, yet concerns about their security and safe deployment persist. With the rapid adoption of AI across critical domains, ensuring the robustness and reliability of these models is imperative. This research addresses this challenge by exposing vulnerabilities in AI systems and enhancing their trustworthiness. By systematically uncovering flaws, it aims to raise awareness of the precautions necessary for utilizing AI in high-stakes scenarios. The methodology involves identifying vulnerabilities, quantifying worst-case performance via attacks, and generalizing insights to practical deployment settings. Additionally, it investigates techniques to strengthen model trustworthiness in real-world scenarios, contributing to rigorous AI safety research that promotes responsible and beneficial system development. Specifically, this research reveals vulnerabilities in neural networks by developing efficient black-box attacks on various deep learning models across different tasks. Additionally, it focuses on improving AI trustworthiness by detecting adversarial examples using language models and enhancing user privacy through innovative facial de-identification methods.
For highly effective black-box attacks, ensemble-based and context-aware approaches were developed. These methods optimize over ensemble model weight spaces to craft adversarial examples with extreme efficiency, significantly outperforming existing input-space attacks. Multimodal testing demonstrated that these attacks could fool systems on diverse tasks, highlighting the need to evaluate deployment robustness against such methods. Additionally, by weaponizing context to manipulate the statistical relationships that models rely on, context-aware attacks were shown to profoundly mislead systems, revealing reasoning vulnerabilities.
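As a rough illustration of the ensemble idea (a generic PGD-style formulation, not the dissertation's exact algorithm), the sketch below jointly optimizes a bounded perturbation and the mixing weights of surrogate models; the surrogate list, budget, and step counts are illustrative assumptions.

import torch
import torch.nn.functional as F

# Hedged sketch: craft one perturbation against a learnable convex combination
# of surrogate models, then transfer it to an unseen black-box target.
def ensemble_attack(models, x, y, eps=8/255, steps=40, lr=0.01):
    delta = torch.zeros_like(x, requires_grad=True)   # image-space perturbation
    w = torch.zeros(len(models), requires_grad=True)  # ensemble weight logits
    opt = torch.optim.Adam([delta, w], lr=lr)
    for _ in range(steps):
        weights = torch.softmax(w, dim=0)             # convex surrogate weights
        logits = sum(wi * m(x + delta) for wi, m in zip(weights, models))
        loss = -F.cross_entropy(logits, y)            # push toward misclassification
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                   # keep the perturbation small
    return (x + delta).detach()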
To protect user privacy, an algorithm was developed for seamlessly de-identifying facial
images while retaining utility for downstream tasks. This approach, grounded in differential privacy
and ensemble learning, maximizes obfuscation and non-invertibility to prevent re-identification. By
disentangling identity attributes from utility attributes like expressions, the method significantly
enhances de-identification rates while preserving utility.
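Loosely, the utility-preservation term can be pictured as a set of frozen attribute experts supervising the anonymized output; the expert models and weighting below are illustrative assumptions, not the method's actual components.

import torch
import torch.nn.functional as F

# Hedged sketch: frozen attribute "experts" (e.g., expression, pose) constrain
# the anonymized image to match the input on non-identity attributes.
def utility_loss(original, anonymized, experts, weights=None):
    weights = weights or [1.0] * len(experts)
    loss = 0.0
    for expert, w in zip(experts, weights):
        with torch.no_grad():
            target = expert(original)                 # attribute read-out on the input
        loss = loss + w * F.mse_loss(expert(anonymized), target)
    return loss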
To enhance the robustness and efficiency of computational imaging pipelines, including
Fourier phase retrieval and coded diffraction imaging, I developed a framework that learns reference
signals or illumination patterns using a small number of training images. This framework employs
an unrolled network as a solver. Once learned, the reference signals or illumination patterns serve
as priors, significantly improving the efficiency of signal reconstruction.
Overall, this research contributes to a more secure and reliable deployment of AI systems,
ensuring their safe and beneficial use across critical domains.
Learning to Sense for Coded Diffraction Imaging
In this paper, we present a framework to learn illumination patterns to improve the quality of signal recovery for coded diffraction imaging. We use an alternating minimization-based phase retrieval method with a fixed number of iterations as our solver. We represent this iterative method as an unrolled network with a fixed number of layers, where each layer corresponds to a single iteration step, and we minimize the recovery error by optimizing over the illumination patterns. Since the number of iterations/layers is fixed, the recovery has a fixed computational cost. Extensive experimental results on a variety of datasets demonstrate that our proposed method significantly improves the quality of image reconstruction at a fixed computational cost, with illumination patterns learned using only a small number of training images.
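As a minimal sketch of the unrolling idea (simplifying assumptions: unit-modulus phase masks, a Gerchberg-Saxton-style update, random stand-in images; not the paper's implementation), a fixed-iteration solver is made differentiable so the illumination patterns can be learned by gradient descent:

import torch

# Hedged sketch: K fixed alternating-projection iterations form a
# differentiable solver; phase-mask illumination patterns are then trained
# to minimize reconstruction error on a handful of images.
def unrolled_recover(y, patterns, K=10):
    """Recover x from magnitudes y[i] = |FFT(patterns[i] * x)|."""
    x = torch.ones_like(patterns[0])                  # initial estimate
    for _ in range(K):                                # fixed computational cost
        est = torch.zeros_like(x)
        for d, yi in zip(patterns, y):
            z = torch.fft.fft2(d * x)
            z = yi * z / (z.abs() + 1e-8)             # impose measured magnitudes
            est = est + d.conj() * torch.fft.ifft2(z) # back-project (|d| = 1)
        x = est / len(patterns)                       # averaged update
    return x

theta = torch.randn(4, 32, 32, requires_grad=True)    # learnable phase-mask angles
opt = torch.optim.Adam([theta], lr=1e-2)
train_images = [torch.randn(32, 32, dtype=torch.cfloat) for _ in range(8)]
for x_true in train_images:                           # small training set suffices
    patterns = torch.exp(1j * theta)                  # unit-modulus masks
    y = [torch.fft.fft2(d * x_true).abs() for d in patterns]
    loss = (unrolled_recover(y, patterns) - x_true).abs().pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the solver has a fixed number of layers, its cost is known in advance, and once trained the masks act as a prior that speeds up reconstruction at test time.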