
    PAC-learning gains of Turing machines over circuits and neural networks

    A caveat to many applications of the current Deep Learning approach is the need for large-scale data. One improvement suggested by Kolmogorov Complexity results is to apply the minimum description length (MDL) principle with computationally universal models. We study the potential gains in sample efficiency that this approach can bring in principle. We use polynomial-time Turing machines to represent computationally universal models and Boolean circuits to represent Artificial Neural Networks (ANNs) acting on finite-precision digits. Our analysis reveals direct links between our question and Computational Complexity results. We provide lower and upper bounds on the potential gains in sample efficiency of applying the MDL principle with Turing machines rather than with ANNs. Our bounds depend on the bit-size of the input of the Boolean function to be learned. Furthermore, we highlight close relationships between classical open problems in Circuit Complexity and the tightness of these bounds.
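    For context, the two-part MDL principle invoked above is standard and can be sketched as follows (the symbols h, D, and L here are generic illustration, not notation taken from the paper):

    \[
      h^{*} \;=\; \arg\min_{h \in \mathcal{H}} \big[\, L(h) + L(D \mid h) \,\big],
    \]

    where L(h) is the number of bits needed to describe the hypothesis (a Turing-machine program, or an encoding of a circuit) and L(D | h) is the number of bits needed to describe the data with the hypothesis's help. A computationally universal model class can make L(h) much shorter for structured targets, which is where the potential sample-efficiency gains originate.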

    Reviewing Bounds on the Circuit Size of the Hardest Functions

    Electronic Colloquium on Computational Complexity, Report No. 32 (2005)

    In this paper we review the known bounds for L(n), the circuit size complexity of the hardest Boolean function on n input bits. The best known bounds appear to be

    \[
      \frac{2^n}{n}\left(1 + \frac{\log n}{n} - O\!\left(\frac{1}{n}\right)\right) \;\le\; L(n) \;\le\; \frac{2^n}{n}\left(1 + \frac{3\log n}{n} + O\!\left(\frac{1}{n}\right)\right).
    \]
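    As a rough orientation for why 2^n/n is the right order of magnitude, the standard counting argument (a sketch, not quoted from the report) runs as follows: a circuit with s binary gates can be described in O(s log s) bits, so

    \[
      \#\{\text{circuits with at most } s \text{ gates}\} \;\le\; 2^{O(s \log s)},
      \qquad
      \#\{\text{Boolean functions on } n \text{ bits}\} \;=\; 2^{2^n},
    \]

    and realizing every function forces O(s log s) >= 2^n, i.e. s = \Omega(2^n / n); Lupanov's construction gives a matching upper bound up to the lower-order terms displayed above.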