3,530 research outputs found

    Training a perceptron by a bit sequence: Storage capacity

    A perceptron is trained by a random bit sequence. In comparison to the corresponding classification problem, the storage capacity decreases to α_c = 1.70 ± 0.02 due to correlations between the input and output bits. The numerical results are supported by a signal-to-noise analysis of the Hebbian weights. Comment: LaTeX, 13 pages incl. 4 figures and 1 table
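
    As a rough illustration of this setup (a sketch, not the authors' code), the following Python snippet stores overlapping windows of a random ±1 bit sequence in a perceptron via the Hebb rule and measures the fraction of correctly reproduced output bits together with a crude signal-to-noise ratio of the Hebbian fields; the sizes N and alpha are arbitrary choices, not values from the paper.

        import numpy as np

        # Minimal sketch, not the paper's code: store overlapping windows of a
        # random +/-1 bit sequence in a perceptron with Hebbian weights and check
        # how many window -> next-bit pairs are reproduced. N, alpha illustrative.
        rng = np.random.default_rng(0)
        N = 100                     # window (input) size
        alpha = 1.5                 # load alpha = P / N
        P = int(alpha * N)          # number of stored input/output pairs

        bits = rng.choice([-1, 1], size=P + N)
        patterns = np.array([bits[mu:mu + N] for mu in range(P)])  # overlapping windows
        targets = bits[N:N + P]                                    # bit following each window

        # Hebb rule: w_i = (1 / sqrt(N)) * sum_mu targets[mu] * patterns[mu, i]
        w = (targets @ patterns) / np.sqrt(N)

        # Aligned local fields ("stabilities"); positive means the pair is stored.
        stabilities = targets * (patterns @ w) / np.sqrt(N)
        print("fraction stored:", np.mean(stabilities > 0))
        print("signal-to-noise estimate:", stabilities.mean() / stabilities.std())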

    Analytical and Numerical Study of Internal Representations in Multilayer Neural Networks with Binary Weights

    We study the weight space structure of the parity machine with binary weights by deriving the distribution of volumes associated with the internal representations of the learning examples. The learning behaviour and the symmetry-breaking transition are analyzed, and the results are found to be in very good agreement with extended numerical simulations. Comment: revtex, 20 pages + 9 figures, to appear in Phys. Rev.
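
    A minimal sketch of the architecture in question, assuming a tree parity machine with K hidden units and binary (±1) couplings (the sizes below are illustrative, not taken from the paper):

        import numpy as np

        # Minimal sketch of a tree parity machine with binary (+/-1) couplings:
        # each of the K hidden units sees its own block of N inputs, and the
        # output is the parity (product) of the hidden-unit signs.
        rng = np.random.default_rng(1)
        K, N = 3, 20
        w = rng.choice([-1, 1], size=(K, N))       # binary couplings

        def parity_machine(x, w):
            """x has shape (K, N); returns the +/-1 output and the hidden signs."""
            hidden = np.sign((w * x).sum(axis=1))
            hidden[hidden == 0] = 1                # break ties deterministically
            return int(np.prod(hidden)), hidden.astype(int)

        x = rng.choice([-1, 1], size=(K, N))
        output, internal_rep = parity_machine(x, w)
        print("output:", output)
        # The internal representation is the vector of hidden signs; the volume
        # distribution in the abstract counts how much weight space maps to each.
        print("internal representation:", internal_rep)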

    Multifractal Analysis of the Coupling Space of Feed-Forward Neural Networks

    Random input patterns induce a partition of the coupling space of feed-forward neural networks into different cells according to the generated output sequence. For the perceptron, this partition forms a random multifractal whose spectrum f(α) can be calculated analytically using the replica trick. Phase transitions in the multifractal spectrum correspond to the crossover from percolating to non-percolating cell sizes. Instabilities of the negative moments are related to the VC-dimension. Comment: 10 pages, Latex, submitted to PR
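
    As a rough numerical illustration of this partition (a Monte Carlo sketch, not the paper's replica calculation), one can sample perceptron coupling vectors at random and group them by the output sequence they generate on a fixed set of patterns; N, P, and the sample count below are arbitrary choices.

        import numpy as np
        from collections import Counter

        # Monte Carlo sketch: random patterns partition coupling space into cells
        # labelled by the generated output sequence; cell sizes are estimated by
        # sampling random couplings. N, P, and samples are illustrative choices.
        rng = np.random.default_rng(2)
        N, P = 10, 15
        samples = 200_000

        patterns = rng.choice([-1, 1], size=(P, N))
        J = rng.normal(size=(samples, N))            # random coupling vectors

        # Each coupling J falls into the cell labelled by sign(patterns @ J).
        labels = (patterns @ J.T > 0).T              # shape (samples, P)
        cells = Counter(tuple(row) for row in labels)

        sizes = np.array(sorted(cells.values(), reverse=True)) / samples
        print("occupied cells:", len(sizes), "of at most", 2 ** P)
        print("largest cell fractions:", sizes[:5])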

    The VC-Dimension versus the Statistical Capacity of Multilayer Networks

    A general relationship is developed between the VC-dimension and the statistical lower epsilon-capacity, which shows that the VC-dimension can be lower bounded (in order) by the statistical lower epsilon-capacity of a network trained with random samples. This relationship explains quantitatively how generalization takes place after memorization, and relates the concept of generalization (consistency) with the capacity of the optimal classifier over a class of classifiers with the same structure and the capacity of the Bayesian classifier. Furthermore, it provides a general methodology to evaluate a lower bound for the VC-dimension of feedforward multilayer neural networks. This general methodology is applied to two types of networks that are important for hardware implementations: two-layer (N - 2L - 1) networks with binary weights, integer thresholds for the hidden units, and zero threshold for the output unit, and a single neuron ((N - 1) networks) with binary weights and a zero threshold. Specifically, we obtain O(W/ln L) ≤ d_2 ≤ O(W), and d_1 ~ O(N). Here W is the total number of weights of the (N - 2L - 1) networks. d_1 and d_2 represent the VC-dimensions for the (N - 1) and (N - 2L - 1) networks, respectively.
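
    A minimal sketch of the memorization experiment underlying the statistical capacity, restricted to the single-neuron case with binary weights and zero threshold; the exhaustive search over the 2^N weight vectors only works for small N, and all sizes and trial counts below are illustrative assumptions, not the paper's parameters.

        import numpy as np
        from itertools import product

        # Minimal sketch: for each load P, draw random +/-1 patterns and labels and
        # check, by exhaustive search over all 2^N binary weight vectors, whether
        # some zero-threshold neuron memorizes every pair. N and trials illustrative.
        rng = np.random.default_rng(3)
        N = 10
        all_w = np.array(list(product([-1, 1], repeat=N)))   # every binary weight vector

        def memorizable(patterns, labels):
            """True if some binary weight vector classifies every pattern correctly."""
            outputs = np.sign(all_w @ patterns.T)            # shape (2^N, P)
            outputs[outputs == 0] = 1
            return bool(np.any(np.all(outputs == labels, axis=1)))

        for P in range(2, 3 * N, 2):
            trials = 50
            ok = sum(
                memorizable(rng.choice([-1, 1], size=(P, N)),
                            rng.choice([-1, 1], size=P))
                for _ in range(trials)
            )
            print(f"P = {P:2d}  fraction memorized = {ok / trials:.2f}")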