13 research outputs found

    Generating Private Data Surrogates for Vision Related Tasks

    With the widespread application of deep networks in industry, membership inference attacks, i.e. the ability to discern training data from a model, become more and more problematic for data privacy. Recent work suggests that generative networks may be robust against membership attacks. In this work, we build on this observation, offering a general-purpose solution to the membership privacy problem. As the primary contribution, we demonstrate how to construct surrogate datasets, using images from GAN generators, labelled with a classifier trained on the private dataset. Next, we show this surrogate data can further be used for a variety of downstream tasks (here classification and regression), while being resistant to membership attacks. We study a variety of different GANs proposed in the literature, concluding that higher-quality GANs result in better surrogate data with respect to the task at hand.
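
    As a rough illustration of the pipeline described above, the Python sketch below builds a surrogate dataset by sampling images from a pretrained GAN generator and labelling them with a classifier trained on the private data. The generator G, the classifier f_private, and all hyperparameters are placeholders, not the paper's actual models or settings.

    import torch

    @torch.no_grad()
    def build_surrogate_dataset(G, f_private, n_samples=10000, latent_dim=128,
                                batch_size=256, device="cpu"):
        """Sample images from G and label them with f_private's predictions."""
        G, f_private = G.to(device).eval(), f_private.to(device).eval()
        images, labels = [], []
        remaining = n_samples
        while remaining > 0:
            b = min(batch_size, remaining)
            z = torch.randn(b, latent_dim, device=device)  # GAN latent codes
            x = G(z)                                       # synthetic images
            y = f_private(x).argmax(dim=1)                 # pseudo-labels
            images.append(x.cpu())
            labels.append(y.cpu())
            remaining -= b
        return torch.cat(images), torch.cat(labels)

    # The resulting (images, labels) pairs can then be used to train a
    # downstream classifier or regressor without touching the private data.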

    How does Lipschitz regularization influence GAN training?

    Despite the success of Lipschitz regularization in stabilizing GAN training, the exact reason for its effectiveness remains poorly understood. The direct effect of K-Lipschitz regularization is to restrict the L2-norm of the neural network gradient to be smaller than a threshold K (e.g., K = 1), such that ‖∇f‖ ≤ K. In this work, we uncover an even more important effect of Lipschitz regularization by examining its impact on the loss function: it degenerates GAN loss functions to almost linear ones by restricting their domain and interval of attainable gradient values. Our analysis shows that loss functions are only successful if they are degenerated to almost linear ones. We also show that loss functions perform poorly if they are not degenerated, and that a wide range of functions can be used as loss functions as long as they are sufficiently degenerated by regularization. Basically, Lipschitz regularization ensures that all loss functions effectively work in the same way. Empirically, we verify our proposition on the MNIST, CIFAR10 and CelebA datasets.
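
    Since only the abstract is reproduced here, the Python sketch below shows just one common way to impose an approximate K-Lipschitz constraint on a discriminator f: a gradient penalty that pushes the input-gradient norm towards K. It is a generic illustration, not necessarily the regularizer configuration studied in the paper.

    import torch

    def gradient_penalty(f, real, fake, K=1.0):
        """Penalize deviations of ||grad f(x)||_2 from the target K."""
        # Interpolate between real and generated samples.
        alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)),
                           device=real.device)
        x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
        grads = torch.autograd.grad(outputs=f(x_hat).sum(), inputs=x_hat,
                                    create_graph=True)[0]
        grad_norm = grads.flatten(1).norm(2, dim=1)   # ||grad f(x_hat)||_2
        return ((grad_norm - K) ** 2).mean()          # add to the discriminator loss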

    Feature Likelihood Score: Evaluating Generalization of Generative Models Using Samples

    The past few years have seen impressive progress in the development of deep generative models capable of producing high-dimensional, complex, and photo-realistic data. However, current methods for evaluating such models remain incomplete: standard likelihood-based metrics do not always apply and rarely correlate with perceptual fidelity, while sample-based metrics, such as FID, are insensitive to overfitting, i.e., the inability to generalize beyond the training set. To address these limitations, we propose a new metric called the Feature Likelihood Score (FLS), a parametric sample-based score that uses density estimation to provide a comprehensive trichotomic evaluation accounting for novelty (i.e., difference from the training samples), fidelity, and diversity of generated samples. We empirically demonstrate the ability of FLS to identify specific overfitting problem cases where previously proposed metrics fail. We also extensively evaluate FLS on various image datasets and model classes, demonstrating its ability to match intuitions of previous metrics like FID while offering a more comprehensive evaluation of generative models.
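
    Because only the abstract is given here, the Python sketch below illustrates the general idea of a feature-space, density-based score: fit a density model on generated-sample features and evaluate the likelihood it assigns to held-out test features. It is not the paper's actual FLS formula; the kernel density estimator, the bandwidth, and the choice of feature extractor are placeholder assumptions.

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def feature_likelihood(gen_features: np.ndarray,
                           test_features: np.ndarray,
                           bandwidth: float = 0.5) -> float:
        """Mean log-likelihood of held-out test features under a density
        model fitted on generated-sample features (higher is better)."""
        kde = KernelDensity(kernel="gaussian", bandwidth=bandwidth)
        kde.fit(gen_features)            # model of the generator's density in feature space
        return float(kde.score_samples(test_features).mean())

    # Scoring held-out test features (rather than the training set) is what
    # makes this style of metric sensitive to overfitting and memorization.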

    Stable Rank Normalization for Improved Generalization in Neural Networks and GANs

    Exciting new work on the generalization bounds for neural networks (NNs) given by Neyshabur et al. and Bartlett et al. closely depends on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of the rank operator). This leads to an interesting question of whether controlling these quantities might improve the generalization behaviour of NNs. To this end, we propose stable rank normalization (SRN), a novel, optimal, and computationally efficient weight-normalization scheme which minimizes the stable rank of a linear operator. Surprisingly, we find that SRN, in spite of being a non-convex problem, can be shown to have a unique optimal solution. Moreover, we show that SRN allows control of the data-dependent empirical Lipschitz constant, which, in contrast to the Lipschitz upper bound, reflects the true behaviour of a model on a given dataset. We provide thorough analyses to show that SRN, when applied to the linear layers of a NN for classification, provides striking improvements of 11.3% on the generalization gap compared to the standard NN, along with a significant reduction in memorization. When applied to the discriminator of GANs (called SRN-GAN), it improves Inception, FID, and Neural divergence scores on the CIFAR 10/100 and CelebA datasets, while learning mappings with low empirical Lipschitz constants.
    Comment: Accepted at the International Conference on Learning Representations, 2020, Addis Ababa, Ethiopia
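
    For reference, the stable rank mentioned above is commonly defined as the squared Frobenius norm of a weight matrix divided by its squared spectral norm. The Python snippet below only computes this quantity as an illustration; it is not the paper's SRN scheme itself.

    import numpy as np

    def stable_rank(W: np.ndarray) -> float:
        """Stable rank of W: ||W||_F^2 / ||W||_2^2 (always <= rank(W))."""
        fro_sq = np.sum(W ** 2)          # squared Frobenius norm
        spec = np.linalg.norm(W, 2)      # spectral norm (largest singular value)
        return float(fro_sq / spec ** 2)

    W = np.random.randn(256, 512)
    print(stable_rank(W))                # lies between 1 and min(256, 512)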