
    Distributed Task Encoding

    The rate region of the task-encoding problem for two correlated sources is characterized using a novel parametric family of dependence measures. The converse uses a new expression for the $\rho$-th moment of the list size, which is derived using the relative $\alpha$-entropy.
    Comment: 5 pages; accepted at ISIT 201
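
    For background (not part of the abstract): the $\alpha$-measures invoked here generalize the Rényi entropy of order $\alpha$, which for a probability mass function $P$ is

        $H_\alpha(P) = \frac{1}{1-\alpha} \log \sum_x P(x)^\alpha, \qquad \alpha > 0,\ \alpha \neq 1,$

    and which recovers the Shannon entropy in the limit $\alpha \to 1$.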

    Distributed Storage for Data Security

    We study the secrecy of a distributed storage system for passwords. The encoder, Alice, observes a length-n password and describes it using two hints, which she then stores in different locations. The legitimate receiver, Bob, observes both hints. The eavesdropper, Eve, sees only one of the hints; Alice cannot control which. We characterize the largest normalized (by n) exponent that we can guarantee for the number of guesses it takes Eve to guess the password, subject to the constraint that either the number of guesses it takes Bob to guess the password or the size of the list that Bob must form to guarantee that it contains the password approaches 1 as n tends to infinity.
    Comment: 5 pages, submitted to ITW 201
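
    For context (a classical result, not this paper's contribution): guessing exponents of this kind are governed by Arikan's bounds, which tie the $\rho$-th moment of the number of guesses to the Rényi entropy of order $1/(1+\rho)$. Writing $G(X)$ for the position of $X$ in a guessing order and minimizing over orders,

        $\bigl(1 + \ln|\mathcal{X}|\bigr)^{-\rho}\, 2^{\rho H_{1/(1+\rho)}(X)} \;\le\; \min_G \mathbb{E}\bigl[G(X)^\rho\bigr] \;\le\; 2^{\rho H_{1/(1+\rho)}(X)},$

    so for an i.i.d. length-n source the optimal normalized guessing exponent is $\rho H_{1/(1+\rho)}(X)$.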

    Context-Aware Generative Adversarial Privacy

    Preserving the utility of published datasets while simultaneously providing provable privacy guarantees is a well-known challenge. On the one hand, context-free privacy solutions, such as differential privacy, provide strong privacy guarantees, but often lead to a significant reduction in utility. On the other hand, context-aware privacy solutions, such as information theoretic privacy, achieve an improved privacy-utility tradeoff, but assume that the data holder has access to dataset statistics. We circumvent these limitations by introducing a novel context-aware privacy framework called generative adversarial privacy (GAP). GAP leverages recent advancements in generative adversarial networks (GANs) to allow the data holder to learn privatization schemes from the dataset itself. Under GAP, learning the privacy mechanism is formulated as a constrained minimax game between two players: a privatizer that sanitizes the dataset in a way that limits the risk of inference attacks on the individuals' private variables, and an adversary that tries to infer the private variables from the sanitized dataset. To evaluate GAP's performance, we investigate two simple (yet canonical) statistical dataset models: (a) the binary data model, and (b) the binary Gaussian mixture model. For both models, we derive game-theoretically optimal minimax privacy mechanisms, and show that the privacy mechanisms learned from data (in a generative adversarial fashion) match the theoretically optimal ones. This demonstrates that our framework can be easily applied in practice, even in the absence of dataset statistics.
    Comment: Improved version of a paper accepted by Entropy Journal, Special Issue on Information Theory in Machine Learning and Data Science
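
    As a rough illustration of such a constrained minimax game (a minimal PyTorch sketch, not the authors' implementation; the toy binary Gaussian mixture data, the network sizes, and the penalty weight lam are all illustrative assumptions):

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def sample_batch(n=256):
        # Toy binary Gaussian mixture: private bit s, observation x = (2s - 1) + noise.
        s = torch.randint(0, 2, (n, 1)).float()
        x = (2.0 * s - 1.0) + torch.randn(n, 1)
        return x, s

    # The privatizer maps (data, seed noise) to a sanitized release; the
    # adversary tries to infer the private bit from the release.
    privatizer = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    adversary = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_p = torch.optim.Adam(privatizer.parameters(), lr=1e-3)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()
    lam = 1.0  # assumed weight enforcing the distortion (utility) constraint

    for step in range(2000):
        x, s = sample_batch()
        x_hat = privatizer(torch.cat([x, torch.randn_like(x)], dim=1))

        # Adversary step: minimize inference loss on the sanitized data.
        adv_loss = bce(adversary(x_hat.detach()), s)
        opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

        # Privatizer step: maximize the adversary's loss while keeping the
        # release close to the data (penalty form of the constraint).
        priv_loss = -bce(adversary(x_hat), s) + lam * ((x_hat - x) ** 2).mean()
        opt_p.zero_grad(); priv_loss.backward(); opt_p.step()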

    Wyner VAE: Joint and Conditional Generation with Succinct Common Representation Learning

    A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks. The proposed Wyner VAE model is based on two information theoretic problems, distributed simulation and channel synthesis, in which Wyner's common information arises as the fundamental limit of the succinctness of the common representation. The Wyner VAE decomposes a pair of correlated data variables into their common representation (e.g., a shared concept) and local representations that capture the remaining randomness (e.g., texture and style) in respective data variables by imposing the mutual information between the data variables and the common representation as a regularization term. The utility of the proposed approach is demonstrated through experiments for joint and conditional generation with and without style control using synthetic data and real images. Experimental results show that learning a succinct common representation achieves better generative performance and that the proposed model outperforms existing VAE variants and the variational information bottleneck method.
    Comment: 24 pages, 18 figures
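
    For reference (the standard definition, not specific to this paper): Wyner's common information of a pair $(X, Y)$ is

        $C(X;Y) = \min_{P_{W|XY}\,:\,X - W - Y} I(X,Y;W),$

    where the minimum is over all auxiliary variables $W$ such that $X$ and $Y$ are conditionally independent given $W$ (i.e., $X - W - Y$ forms a Markov chain); this is the quantity that limits how succinct the common representation can be in distributed simulation and channel synthesis.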