InfoScrub: Towards Attribute Privacy by Targeted Obfuscation
Personal photos of individuals, when shared online, apart from exhibiting a
myriad of memorable details, also reveal a wide range of private information
and potentially entail privacy risks (e.g., online harassment, tracking). To
mitigate such risks, it is crucial to study techniques that allow individuals
to limit the private information leaked in visual data. We tackle this problem
in a novel image obfuscation framework: maximizing entropy on inferences over
targeted privacy attributes while retaining image fidelity. We approach the
problem with an encoder-decoder style architecture and two key novelties:
(a) introducing a discriminator to perform bi-directional translation
simultaneously from multiple unpaired domains; (b) predicting an image
interpolation which maximizes uncertainty over a target set of attributes. We
find our approach generates obfuscated images faithful to the original input
images, and additionally increases uncertainty by 6.2× (or up to 0.85
bits) over the non-obfuscated counterparts.
Comment: 20 pages, 7 figures
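The entropy-maximization objective described above can be illustrated numerically. The sketch below is not the paper's model; it only computes the uncertainty measure (Shannon entropy in bits) over a hypothetical attribute classifier's posterior, before and after obfuscation:

```python
import numpy as np

def entropy_bits(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = np.asarray(probs, dtype=float)
    p = p[p > 0]  # treat 0 * log(0) as 0
    return float(-np.sum(p * np.log2(p)))

# Hypothetical attribute posterior from a confident classifier...
before = [0.90, 0.05, 0.05]
# ...versus a near-uniform posterior after obfuscation.
after = [0.34, 0.33, 0.33]

gain = entropy_bits(after) - entropy_bits(before)  # uncertainty gained, in bits
```

A uniform posterior over k attribute values attains the maximum of log2(k) bits, which is what the obfuscation objective pushes toward.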
Polarimetric Thermal to Visible Face Verification via Self-Attention Guided Synthesis
Polarimetric thermal to visible face verification entails matching two images
that contain significant domain differences. Several recent approaches have
attempted to synthesize visible faces from thermal images for cross-modal
matching. In this paper, we take a different approach in which rather than
focusing only on synthesizing visible faces from thermal faces, we also propose
to synthesize thermal faces from visible faces. Our intuition is based on the
fact that thermal images also contain some discriminative information about the
person for verification. Deep features from a pre-trained Convolutional Neural
Network (CNN) are extracted from the original as well as the synthesized
images. These features are then fused to generate a template which is then used
for verification. The proposed synthesis network is based on the self-attention
generative adversarial network (SAGAN) which essentially allows efficient
attention-guided image synthesis. Extensive experiments on the ARL polarimetric
thermal face dataset demonstrate that the proposed method achieves
state-of-the-art performance.
Comment: This work is accepted at the 12th IAPR International Conference on
Biometrics (ICB 2019)
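The template-fusion step described above can be sketched as follows. The abstract does not pin down the fusion rule, so the sum-and-normalize fusion and cosine scoring below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

def fuse_template(feat_original, feat_synthesized):
    """Fuse CNN features extracted from the original image and from the
    synthesized cross-modal image into one verification template
    (sum-and-normalize fusion is an assumption here)."""
    return l2_normalize(l2_normalize(feat_original) + l2_normalize(feat_synthesized))

def verification_score(template_a, template_b):
    """Cosine similarity between two unit-norm templates."""
    return float(np.dot(template_a, template_b))

# Toy 128-D features standing in for pre-trained CNN activations.
rng = np.random.default_rng(42)
probe = fuse_template(rng.normal(size=128), rng.normal(size=128))
```

Verification then thresholds the cosine score between a probe template and a gallery template.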
Inverting the generator of a generative adversarial network
Generative adversarial networks (GANs) learn a deep generative model that is able to synthesize novel, high-dimensional data samples. New data samples are synthesized by passing latent samples, drawn from a chosen prior distribution, through the generative model. Once trained, the latent space exhibits interesting properties that may be useful for downstream tasks such as classification or retrieval. Unfortunately, GANs do not offer an ``inverse model,'' a mapping from data space back to latent space, making it difficult to infer a latent representation for a given data sample. In this paper, we introduce a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN. Using our proposed inversion technique, we are able to identify which attributes of a data set a trained GAN is able to model, and to quantify GAN performance based on a reconstruction loss. We demonstrate how our proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets. We provide code for all of our experiments at https://github.com/ToniCreswell/InvertingGAN
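The core inversion idea, gradient descent on a reconstruction loss with respect to the latent code, can be shown on a toy stand-in generator. The linear "generator" below is purely illustrative (the paper inverts a pretrained GAN, not a linear map), but the optimization loop has the same shape:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained generator: a fixed linear map
# from a 3-D latent space to an 8-D "data" space.
W = rng.normal(size=(8, 3))
def generator(z):
    return W @ z

# A target sample produced by a known latent; inversion should recover it.
z_true = rng.normal(size=3)
x_target = generator(z_true)

# Invert by gradient descent on the reconstruction loss ||G(z) - x||^2.
z = np.zeros(3)
learning_rate = 0.02
for _ in range(3000):
    grad = 2.0 * W.T @ (generator(z) - x_target)  # exact gradient for the linear toy
    z -= learning_rate * grad

reconstruction_error = float(np.linalg.norm(generator(z) - x_target))
```

With a real GAN the gradient comes from backpropagating through the frozen generator; the final reconstruction loss is what the paper uses to quantify how well the GAN models a given sample.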
Diversified Texture Synthesis with Feed-forward Networks
Recent progress on deep discriminative and generative modeling has shown
promising results on texture synthesis. However, existing feed-forward
methods trade off generality for efficiency and suffer from several issues:
lack of generality (i.e., one network per texture), lack of
diversity (i.e., always producing visually identical outputs), and suboptimal
quality (i.e., less satisfying visual effects). In this work, we focus on
solving these issues for improved texture synthesis. We propose a deep
generative feed-forward network which enables efficient synthesis of multiple
textures within one single network and meaningful interpolation between them.
Meanwhile, a suite of important techniques is introduced to achieve better
convergence and diversity. With extensive experiments, we demonstrate the
effectiveness of the proposed model and techniques for synthesizing a large
number of textures, and show its application to stylization.
Comment: accepted by CVPR2017
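The "interpolation between textures" idea can be sketched with one-hot selection codes. The selection-code mechanism below is a simplified stand-in for the conditioning the actual network uses; it only illustrates how a convex combination of two texture identities yields an intermediate conditioning signal:

```python
import numpy as np

def texture_code(index, num_textures):
    """One-hot code selecting one of the textures the single network has learned."""
    code = np.zeros(num_textures)
    code[index] = 1.0
    return code

def interpolate_codes(code_a, code_b, alpha):
    """Convex combination of two selection codes; feeding such a mixed code
    to a multi-texture network would produce a blend of the two textures."""
    return (1.0 - alpha) * code_a + alpha * code_b

# Blend texture 0 and texture 2 at a 75/25 ratio.
mix = interpolate_codes(texture_code(0, 4), texture_code(2, 4), 0.25)
```

The mixed code remains a valid distribution over texture identities, which is what makes the interpolation "meaningful" rather than an arbitrary weight blend.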