
    Deterministic compression with uncertain priors


    The Price of Uncertain Priors in Source Coding

    We consider the problem of one-way communication when the recipient does not know the distribution that the messages are drawn from exactly, but instead has a "prior" distribution known to be close to the source distribution, a problem first considered by Juba et al. We ask how much longer the messages need to be, compared with the standard source coding problem, in order to cope with the uncertainty about the receiver's prior and about the source distribution, respectively. We consider two variants of this uncertain priors problem: the original setting of Juba et al., in which the receiver is required to correctly recover the message with probability 1, and a setting introduced by Haramaty and Sudan, in which the receiver is permitted to fail with some probability ε. In both settings, we obtain lower bounds that are tight up to logarithmically smaller terms. In the latter setting, we furthermore present a variant of the coding scheme of Juba et al. with an overhead of log α + log 1/ε + 1 bits, thus also establishing the nearly tight upper bound. Comment: To appear in IEEE Transactions on Information Theory.
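    As a rough illustration of the overhead bound quoted in this abstract (not taken from the paper), the snippet below simply evaluates log α + log 1/ε + 1 for a hypothetical discrepancy factor α between the receiver's prior and the source, and a failure probability ε; the numbers are made up.

```python
import math

def uncertain_prior_overhead(alpha: float, epsilon: float) -> float:
    """Extra bits beyond standard source coding, per the stated bound
    log(alpha) + log(1/epsilon) + 1 (logs base 2); alpha bounds the
    multiplicative gap between the receiver's prior and the true source."""
    return math.log2(alpha) + math.log2(1.0 / epsilon) + 1.0

# Hypothetical numbers: a prior within a factor alpha = 4 of the source and
# a 1% allowed failure probability cost about 2 + 6.64 + 1 = 9.64 extra bits.
print(uncertain_prior_overhead(alpha=4.0, epsilon=0.01))
```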

    Bayesian Compression for Deep Learning

    Compression and computational efficiency in deep learning have become problems of great significance. In this work, we argue that the most principled and effective way to attack this problem is to adopt a Bayesian point of view, in which sparsity-inducing priors are used to prune large parts of the network. We introduce two novelties in this paper: 1) we use hierarchical priors to prune nodes instead of individual weights, and 2) we use the posterior uncertainties to determine the optimal fixed-point precision for encoding the weights. Both factors contribute significantly to achieving state-of-the-art compression rates, while still staying competitive with methods designed to optimize for speed or energy efficiency. Comment: Published as a conference paper at NIPS 2017.
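    The NumPy sketch below illustrates, under toy assumptions, the two ideas summarized in this abstract: scoring whole units by a signal-to-noise ratio computed from a factorized Gaussian posterior and pruning low-scoring units, then tying the quantization step (and hence the bit width) of each remaining weight to its posterior standard deviation. The posterior statistics, the threshold, and the quantization rule are all illustrative choices, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factorized-Gaussian posterior over a 16x8 weight matrix (means and
# standard deviations would come from variational training; here they are
# random placeholders).
post_mean = rng.normal(scale=0.5, size=(16, 8))
post_std = np.abs(rng.normal(scale=0.2, size=(16, 8))) + 1e-3

# 1) Node-level pruning: score each output unit (column) by its
#    signal-to-noise ratio and drop units below a threshold, mirroring the
#    "prune nodes instead of individual weights" idea.
snr = np.linalg.norm(post_mean, axis=0) / np.linalg.norm(post_std, axis=0)
keep = snr > 1.0                      # threshold is an arbitrary choice here
pruned_mean = post_mean[:, keep]
pruned_std = post_std[:, keep]

# 2) Precision from posterior uncertainty: quantize each kept weight with a
#    step size equal to its posterior standard deviation, so more uncertain
#    weights receive coarser (cheaper) encodings.
step = pruned_std
quantized = np.round(pruned_mean / step) * step

# Rough per-weight bit count: levels needed to span the largest magnitude
# at each weight's step size, plus one sign bit.
bits_per_weight = np.ceil(np.log2(np.abs(pruned_mean).max() / step + 1)) + 1

print(f"kept {keep.sum()} of {post_mean.shape[1]} units, "
      f"mean bits/weight ~ {bits_per_weight.mean():.1f}")
```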

    Wyner VAE: Joint and Conditional Generation with Succinct Common Representation Learning

    A new variational autoencoder (VAE) model is proposed that learns a succinct common representation of two correlated data variables for conditional and joint generation tasks. The proposed Wyner VAE model is based on two information-theoretic problems, distributed simulation and channel synthesis, in which Wyner's common information arises as the fundamental limit on the succinctness of the common representation. The Wyner VAE decomposes a pair of correlated data variables into a common representation (e.g., a shared concept) and local representations that capture the remaining randomness (e.g., texture and style) in the respective variables, by imposing the mutual information between the data variables and the common representation as a regularization term. The utility of the proposed approach is demonstrated through experiments on joint and conditional generation, with and without style control, using synthetic data and real images. Experimental results show that learning a succinct common representation achieves better generative performance and that the proposed model outperforms existing VAE variants and the variational information bottleneck method. Comment: 24 pages, 18 figures.
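    As a loose sketch of the decomposition this abstract describes (not the paper's architecture), the PyTorch module below uses one Gaussian encoder for a shared code z drawn from both variables, one local encoder per variable, reconstructs each variable from z plus its local code, and adds a weighted KL penalty on z as a stand-in for the mutual-information regularizer. All layer sizes, distributions, names, and weightings are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyWynerStyleVAE(nn.Module):
    """Deliberately small sketch: a common code z shared by x and y, plus
    local codes u_x and u_y for the leftover variation in each variable."""

    def __init__(self, dim_x=20, dim_y=20, dim_z=4, dim_local=4):
        super().__init__()
        self.enc_z = nn.Linear(dim_x + dim_y, 2 * dim_z)   # q(z | x, y)
        self.enc_ux = nn.Linear(dim_x, 2 * dim_local)       # q(u_x | x)
        self.enc_uy = nn.Linear(dim_y, 2 * dim_local)       # q(u_y | y)
        self.dec_x = nn.Linear(dim_z + dim_local, dim_x)    # p(x | z, u_x)
        self.dec_y = nn.Linear(dim_z + dim_local, dim_y)    # p(y | z, u_y)

    @staticmethod
    def reparam(stats):
        # Split encoder output into mean / log-variance, sample with the
        # reparameterization trick, and return the KL to a standard normal.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(-1).mean()
        return z, kl

    def loss(self, x, y, beta=1.0):
        z, kl_z = self.reparam(self.enc_z(torch.cat([x, y], -1)))
        ux, kl_ux = self.reparam(self.enc_ux(x))
        uy, kl_uy = self.reparam(self.enc_uy(y))
        rec = F.mse_loss(self.dec_x(torch.cat([z, ux], -1)), x) \
            + F.mse_loss(self.dec_y(torch.cat([z, uy], -1)), y)
        # kl_z stands in for the penalty on the common code; beta trades
        # succinctness of z against reconstruction quality.
        return rec + beta * kl_z + kl_ux + kl_uy

model = TinyWynerStyleVAE()
x, y = torch.randn(8, 20), torch.randn(8, 20)
print(model.loss(x, y).item())
```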