
    Sub-computable Bounded Randomness

    This paper defines a new notion of bounded computable randomness for certain classes of sub-computable functions which lack a universal machine. In particular, we define such versions of randomness for primitive recursive functions and for PSPACE functions. These new notions are robust in that there are equivalent formulations in terms of (1) Martin-Löf tests, (2) Kolmogorov complexity, and (3) martingales. We show these notions can be equivalently defined with prefix-free Kolmogorov complexity. We prove that one direction of van Lambalgen's theorem holds for relative computability, but the other direction fails. We discuss statistical properties of these notions of randomness.
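    As a point of reference, the classical (unbounded) characterizations that these equivalent formulations restrict can be sketched as follows; the paper's primitive recursive and PSPACE versions bound the resources available to the test, the compressor, and the martingale, so the precise definitions differ. A martingale is a function d : 2^{<\omega} \to [0,\infty) satisfying the fairness condition
        d(\sigma) = \tfrac{1}{2}\bigl(d(\sigma 0) + d(\sigma 1)\bigr),
    it succeeds on a sequence X if \limsup_n d(X \upharpoonright n) = \infty, and X is random for a class of martingales when no martingale in the class succeeds on it. The Levin-Schnorr theorem gives the prefix-free complexity form: X is Martin-Löf random if and only if there is a constant c with K(X \upharpoonright n) \ge n - c for all n.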

    On Resource-bounded versions of the van Lambalgen theorem

    The van Lambalgen theorem is a surprising result in algorithmic information theory concerning the symmetry of relative randomness. It establishes that for any pair of infinite sequences A and B, B is Martin-Löf random and A is Martin-Löf random relative to B if and only if the interleaved sequence A ⊎ B is Martin-Löf random. This implies that A is random relative to B if and only if B is random relative to A [vanLambalgen, Nies09, HirschfeldtBook]. This paper studies the validity of this phenomenon for different notions of time-bounded relative randomness. We prove the classical van Lambalgen theorem using martingales and Kolmogorov compressibility. We establish the failure of the symmetry of relative randomness in these settings, for both time-bounded martingales and time-bounded Kolmogorov complexity. We adapt our classical proofs to the time-bounded setting where applicable, and construct counterexamples where they fail. The mode of failure of the theorem may depend on the notion of time-bounded randomness.
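    For concreteness, the interleaving in the statement above is the standard join, assuming the usual convention: A occupies the even positions and B the odd positions of A ⊎ B,
        (A \uplus B)(2n) = A(n), \qquad (A \uplus B)(2n+1) = B(n),
    and relativization means that the tests, martingales, or compressors certifying the randomness of A may query B as an oracle.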

    Non-Vacuous Generalization Bounds at the ImageNet Scale: A PAC-Bayesian Compression Approach

    Modern neural networks are highly overparameterized, with the capacity to substantially overfit the training data. Nevertheless, these networks often generalize well in practice. It has also been observed that trained networks can often be "compressed" to much smaller representations. The purpose of this paper is to connect these two empirical observations. Our main technical result is a generalization bound for compressed networks based on their compressed size. Combined with off-the-shelf compression algorithms, the bound leads to state-of-the-art generalization guarantees; in particular, we provide the first non-vacuous generalization guarantees for realistic architectures applied to the ImageNet classification problem. As additional evidence connecting compression and generalization, we show that the compressibility of models that tend to overfit is limited: we establish an absolute limit on expected compressibility as a function of expected generalization error, where the expectations are over the random choice of training examples. The bounds are complemented by empirical results showing that an increase in overfitting implies an increase in the number of bits required to describe a trained network.
    Comment: 16 pages, 1 figure. Accepted at ICLR 2019
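    To make the compression-to-generalization connection concrete, a textbook Occam-style bound in the same spirit (not the paper's PAC-Bayesian theorem, which additionally handles stochastic compressed networks) reads as follows. If every hypothesis h is described by |h| bits under a prefix-free code and the loss is bounded in [0,1], then with probability at least 1 - \delta over m i.i.d. training examples, every h satisfies
        R(h) \le \hat{R}(h) + \sqrt{\frac{|h|\ln 2 + \ln(1/\delta)}{2m}},
    where R is the true risk and \hat{R} the empirical risk, so a network that compresses to few bits inherits a correspondingly tighter guarantee.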