
    Exact Classification with Two-Layer Neural Nets

    This paper considers the classification properties of two-layer networks of McCulloch–Pitts units from a theoretical point of view. In particular we consider their ability to realise exactly, as opposed to approximate, bounded decision regions in ℝ². The main result shows that a two-layer network can realise exactly any finite union of bounded polyhedra in ℝ² whose bounding lines lie in general position, except for some well-characterised exceptions. The exceptions are those unions whose boundaries contain a line which is “inconsistent,” as described in the text. Some of the results are valid for ℝⁿ, n ⩾ 2, and the problem of generalising the main result to higher-dimensional situations is discussed.
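    The construction behind such results can be illustrated for a single bounded polyhedron: each hidden McCulloch–Pitts unit tests membership in one half-plane, and the output unit fires exactly when all of them fire, i.e. it computes an AND. The sketch below realises a triangle in ℝ² this way; the particular half-planes, weights, and thresholds are illustrative choices, not taken from the paper.

```python
import numpy as np

def threshold(x):
    # McCulloch–Pitts unit: fires iff its net input is non-negative
    return (x >= 0).astype(float)

# Hidden layer: three half-planes whose intersection is the triangle
# with vertices (0,0), (1,0), (0,1) (illustrative choice)
W = np.array([[ 0.0,  1.0],   # y >= 0
              [ 1.0,  0.0],   # x >= 0
              [-1.0, -1.0]])  # x + y <= 1
b = np.array([0.0, 0.0, 1.0])

# Output unit: fires iff all three hidden units fire (an AND gate);
# the bias -2.5 separates "3 active units" from "2 or fewer"
v = np.array([1.0, 1.0, 1.0])
c = -2.5

def classify(p):
    h = threshold(W @ p + b)   # half-plane membership indicators
    return threshold(v @ h + c)

print(classify(np.array([0.2, 0.2])))  # 1.0: inside the triangle
print(classify(np.array([0.8, 0.8])))  # 0.0: outside
```

    A finite union of such polyhedra can then be handled by raising the output unit's weights so it fires when any one polyhedron's hidden units are all active, which is where the "inconsistent" boundary lines discussed in the abstract become the obstruction.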

    Deep supervised learning using local errors

    Error backpropagation is a highly effective mechanism for learning high-quality hierarchical features in deep networks. Updating the features or weights in one layer, however, requires waiting for the propagation of error signals from higher layers. Learning with delayed and non-local errors makes it hard to reconcile backpropagation with the learning mechanisms observed in biological neural networks, as it requires neurons to maintain a memory of the input until the higher-layer errors arrive. In this paper, we propose an alternative learning mechanism in which errors are generated locally in each layer using fixed, random auxiliary classifiers. Lower layers can thus be trained independently of higher layers, and training can proceed either layer by layer or simultaneously in all layers using local error information. We address biological plausibility concerns such as weight symmetry requirements and show that the proposed learning mechanism, based on fixed, broad, and random tuning of each neuron to the classification categories, outperforms the biologically motivated feedback alignment technique on the MNIST, CIFAR10, and SVHN datasets, approaching the performance of standard backpropagation. Our approach highlights a potential biological mechanism for the supervised, or task-dependent, learning of feature hierarchies. In addition, we show that it is well suited for learning deep networks in custom hardware, where it can drastically reduce memory traffic and data communication overheads.
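    A rough sketch of the mechanism the abstract describes: each layer pairs its trainable weights with a fixed, random auxiliary classifier that yields a purely local cross-entropy error, and detaching activations between layers keeps higher-layer gradients from flowing down. The PyTorch framing, layer sizes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalLayer(nn.Module):
    def __init__(self, d_in, d_out, n_classes):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)        # trainable features
        self.aux = nn.Linear(d_out, n_classes)  # fixed random classifier
        for p in self.aux.parameters():
            p.requires_grad_(False)             # never updated

    def forward(self, x, y):
        h = F.relu(self.fc(x))
        loss = F.cross_entropy(self.aux(h), y)  # local error signal
        # detach() blocks the gradient path, so higher layers never
        # send errors back; each layer learns from its own loss only
        return h.detach(), loss

layers = [LocalLayer(784, 256, 10), LocalLayer(256, 256, 10)]
opts = [torch.optim.SGD((p for p in l.parameters() if p.requires_grad),
                        lr=0.1) for l in layers]

x = torch.randn(32, 784)              # stand-in for an MNIST batch
y = torch.randint(0, 10, (32,))

for layer, opt in zip(layers, opts):  # every layer trains independently
    x, loss = layer(x, y)
    opt.zero_grad()
    loss.backward()                   # gradient stays inside this layer
    opt.step()
```

    Because no layer waits for errors from above, the per-layer updates can run in parallel, which is the property the abstract exploits to cut memory traffic and data communication in custom hardware.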