5 research outputs found

    Learning Privacy Preserving Encodings through Adversarial Training

    We present a framework to learn privacy-preserving encodings of images that inhibit inference of chosen private attributes, while allowing recovery of other desirable information. Rather than simply inhibiting a given fixed pre-trained estimator, our goal is that an estimator be unable to learn to accurately predict the private attributes even with knowledge of the encoding function. We use a natural adversarial optimization-based formulation for this---training the encoding function against a classifier for the private attribute, with both modeled as deep neural networks. The key contribution of our work is a stable and convergent optimization approach that is successful at learning an encoder with our desired properties---maintaining utility while inhibiting inference of private attributes, not just within the adversarial optimization, but also by classifiers that are trained after the encoder is fixed. We adopt a rigorous experimental protocol for verification wherein classifiers are trained exhaustively till saturation on the fixed encoders. We evaluate our approach on tasks of real-world complexity---learning high-dimensional encodings that inhibit detection of different scene categories---and find that it yields encoders that are resilient at maintaining privacy. Comment: To appear in WACV 201
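    The alternating optimization this abstract describes can be illustrated with a toy sketch: a linear encoder trained against a logistic adversary for a private attribute while a utility classifier keeps a useful attribute recoverable. The toy data, linear models, step counts, and learning rate below are illustrative assumptions, not the authors' actual architecture or training recipe.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # Toy data: feature 0 carries the "private" attribute, feature 1 the "useful" one.
    X = rng.normal(size=(512, 2))
    priv = (X[:, 0] > 0).astype(float)   # attribute the encoder should hide
    util = (X[:, 1] > 0).astype(float)   # attribute that should stay recoverable

    W = np.eye(2)          # linear encoder: z = x @ W
    w_adv = np.zeros(2)    # adversary: logistic classifier for the private attribute
    w_util = np.zeros(2)   # utility classifier for the useful attribute
    lr = 0.1

    for step in range(300):
        Z = X @ W
        # 1) The adversary takes several steps against the current encoder.
        for _ in range(5):
            s = sigmoid(Z @ w_adv)
            w_adv -= lr * (Z.T @ (s - priv)) / len(X)
        # 2) The utility classifier keeps up with the changing encoder.
        s_u = sigmoid(Z @ w_util)
        w_util -= lr * (Z.T @ (s_u - util)) / len(X)
        # 3) The encoder descends the utility loss and ascends the adversary loss.
        s = sigmoid(Z @ w_adv)
        dZ = np.outer(s_u - util, w_util) - np.outer(s - priv, w_adv)
        W -= lr * (X.T @ dZ) / len(X)
    ```

    Under the abstract's verification protocol, one would then freeze `W`, retrain a fresh private-attribute classifier to saturation on the fixed encodings, and check that its accuracy stays near chance while the utility classifier's accuracy remains high.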

    Face detection hindering


    Achieving Anonymity Against Major Face Recognition Algorithms

    An ever-increasing number of personal photos is stored online. This trend can be problematic, because face recognition software can undermine user privacy in unexpected ways. Face de-identification aims to prevent automatic recognition of faces, thus improving user privacy, but previous work alters images in a way that makes them unrecognisable to both computers and humans, which prevents widespread use. We propose a method for de-identification of images that effectively prevents face recognition software (using the most popular and effective algorithms) from identifying people, but still allows human recognition. We evaluate our method experimentally by adapting the CSU framework and using the FERET database. We show that we are able to achieve strong de-identification while maintaining reasonable image quality.

    Facial Privacy Protection in Airborne Recreational Videography

    Cameras mounted on Micro Aerial Vehicles (MAVs) are increasingly used for recreational photography and videography. However, aerial photographs and videographs of public places often contain faces of bystanders, leading to a perceived or actual violation of privacy. To address this issue, this thesis presents a novel privacy filter that adaptively blurs sensitive image regions and is robust against different privacy attacks. In particular, the thesis aims to impede face recognition from airborne cameras and explores the design space to determine when a face in an airborne image is inherently protected, that is, when an individual is not recognisable. When individuals are recognisable by facial recognition algorithms, an adaptive filtering mechanism is proposed to lower the face resolution in order to preserve privacy while ensuring a minimum reduction of the fidelity of the image. Moreover, the filter's parameters are pseudo-randomly changed to make the applied protection robust against different privacy attacks. In the case of videography, the filter is updated with motion-dependent temporal smoothing to minimise flicker introduced by the pseudo-random switching of the filter's parameters, without compromising its robustness against different privacy attacks. To evaluate the efficiency of the proposed filter, the thesis uses a state-of-the-art face recognition algorithm and synthetically generated face data with 3D geometric image transformations that mimic faces captured from an MAV at different heights and pitch angles. For the videography scenario, a small video face data set is first captured, and then the proposed filter is evaluated against different privacy attacks; the quality of the resulting video is assessed using both objective measures and a subjective test. This work was supported in part by the research initiative Intelligent Vision Austria with funding from the Austrian Federal Ministry of Science, Research and Economy and the Austrian Institute of Technology.
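    The adaptive, pseudo-randomly switched filter described above can be sketched as a resolution-lowering pixelation whose block size scales with the apparent face size, is jittered pseudo-randomly, and is temporally smoothed across frames. The function names, the ~10-pixels-across-the-face target, and the smoothing factor below are illustrative assumptions, not the thesis's actual parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def pixelate(region, block):
        """Lower the resolution of `region` by averaging block x block tiles."""
        h, w = region.shape
        # Pad so the region divides evenly into blocks.
        H, W = -(-h // block) * block, -(-w // block) * block
        padded = np.pad(region, ((0, H - h), (0, W - w)), mode="edge")
        tiles = padded.reshape(H // block, block, W // block, block).mean(axis=(1, 3))
        return np.repeat(np.repeat(tiles, block, 0), block, 1)[:h, :w]

    def protect_face(frame, box, face_px, min_px=10, prev_block=None, alpha=0.7):
        """Pixelate the face region so roughly `min_px` effective pixels span the face.

        face_px: apparent face height in pixels (depends on MAV height/pitch).
        box: (y0, y1, x0, x1) face region; prev_block: block size of the previous frame.
        """
        base = max(1, face_px // min_px)             # block size for the target resolution
        jitter = int(rng.integers(0, max(2, base // 2)))  # pseudo-random parameter switching
        block = base + jitter
        if prev_block is not None:
            # Temporal smoothing to limit flicker from the pseudo-random switching.
            block = int(round(alpha * prev_block + (1 - alpha) * block))
        y0, y1, x0, x1 = box
        out = frame.copy()
        out[y0:y1, x0:x1] = pixelate(frame[y0:y1, x0:x1], max(1, block))
        return out, block
    ```

    In a video loop one would feed each frame's returned `block` back in as `prev_block`, so the effective face resolution changes smoothly while still varying unpredictably to an attacker.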