
    Efficient simulation scheme for a class of quantum optics experiments with non-negative Wigner representation

    We provide a scheme for efficient simulation of a broad class of quantum optics experiments. Our efficient simulation extends the continuous-variable Gottesman-Knill theorem to a large class of non-Gaussian mixed states, thereby identifying that these non-Gaussian states are not an enabling resource for exponential quantum speed-up. Our results also provide an operationally motivated interpretation of negativity as non-classicality. We apply our scheme to the case of noisy single-photon-added thermal states to show that this class admits states with positive Wigner function but negative P-function that are not useful resource states for quantum computation. Comment: 14 pages, 1 figure.
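    The key intuition is that a non-negative Wigner function is itself a probability density over phase space, so an experiment whose states, operations, and measurements all have non-negative Wigner representations can be simulated by classical sampling. Below is a minimal Python sketch of that sampling step, using a thermal state as a stand-in; the example, variable names, and parameters are our own illustrative assumptions, not the paper's construction.

        import numpy as np

        # Illustrative sketch (not the paper's algorithm): a non-negative
        # Wigner function can be treated as an ordinary probability density,
        # so expectation values become classical Monte Carlo averages of the
        # observable's Weyl symbol over phase-space samples. A thermal state
        # makes this concrete: its Wigner function is a zero-mean Gaussian.
        rng = np.random.default_rng(0)

        n_bar = 2.0                    # assumed mean photon number
        sigma2 = (2 * n_bar + 1) / 2   # quadrature variance, hbar = 1 units

        xp = rng.normal(0.0, np.sqrt(sigma2), size=(100_000, 2))
        x, p = xp[:, 0], xp[:, 1]

        # Weyl symbol of the number operator: n_W(x, p) = (x^2 + p^2 - 1) / 2
        n_est = np.mean((x**2 + p**2 - 1) / 2)
        print(f"estimated <n> = {n_est:.3f}  (exact: {n_bar})")

    The same recipe fails as soon as the Wigner function goes negative, since it can then no longer be sampled as a probability distribution; that is the operational sense in which negativity marks non-classicality here.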

    Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach

    Modern neural networks are highly overparameterized, with capacity to substantially overfit the training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on their compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state-of-the-art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that the compressibility of models that tend to overfit is limited: we establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results showing that an increase in overfitting implies an increase in the number of bits required to describe a trained network. Comment: 16 pages, 1 figure. Accepted at ICLR 2019.
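    To make the "shorter code, better guarantee" logic concrete, here is a hedged sketch of the classic Occam/finite-hypothesis-class bound, which has the same shape as the paper's PAC-Bayes compression bound; the paper's actual theorem is tighter, and the function name and numbers below are illustrative assumptions, not values from the paper.

        import math

        # Occam-style bound: with probability >= 1 - delta over the training
        # sample, every hypothesis describable in `code_bits` bits satisfies
        #   test_error <= train_error
        #                 + sqrt((code_bits*ln 2 + ln(1/delta)) / (2*n_train)).
        # This follows from Hoeffding's inequality plus a union bound over the
        # at most 2^code_bits describable hypotheses.
        def occam_bound(train_error, code_bits, n_train, delta=0.05):
            slack = math.sqrt(
                (code_bits * math.log(2) + math.log(1 / delta)) / (2 * n_train)
            )
            return train_error + slack

        # Made-up numbers: a network compressed to 100 KB, trained on the
        # ~1.28M ImageNet training examples. The bound is loose but
        # non-vacuous, i.e. well below the ~0.999 error of random guessing
        # over 1000 classes.
        bits = 100 * 8 * 1024
        print(f"test error <= {occam_bound(0.10, bits, 1_281_167):.3f}")

    The dependence on the code length is what ties compression to generalization: every bit shaved off the compressed network shrinks the slack term, which is why strong off-the-shelf compressors can turn a huge model into a non-vacuous guarantee.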