
    Fast learning of biased patterns in neural networks.

    Standard gradient-descent training algorithms for neural networks require training times of the order of the number of neurons N when the patterns are biased. In this paper, modified algorithms are presented which require training times equal to those of the unbiased case, which are of order 1. Exact convergence proofs are given. Gain parameters that produce minimal learning times in large networks are computed by replica methods. It is demonstrated how these modified algorithms are applied to produce four types of solutions to the learning problem: (1) a solution with all internal fields equal to the desired output, (2) the Adaline (or pseudo-inverse) solution, (3) the perceptron of optimal stability without threshold, and (4) the perceptron of optimal stability with threshold.
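
    As an illustration of solution type (2) above, the following is a minimal sketch of the pseudo-inverse (Adaline) construction for biased ±1 patterns. It is not the paper's modified fast algorithm; the network size N, pattern count P, bias m, and the use of NumPy's pseudo-inverse are illustrative assumptions only.

    ```python
    import numpy as np

    # Hedged sketch of the pseudo-inverse (Adaline) solution for biased patterns.
    # N, P, and the bias m are assumed values, not taken from the paper.
    rng = np.random.default_rng(0)
    N, P, m = 200, 50, 0.6   # neurons, patterns, pattern bias

    # Biased random patterns: each component is +1 with probability (1 + m) / 2.
    xi = np.where(rng.random((P, N)) < (1 + m) / 2, 1.0, -1.0)
    sigma = np.where(rng.random(P) < 0.5, 1.0, -1.0)   # desired outputs

    # Pseudo-inverse weights: choose w so the internal fields h_mu = w . xi_mu
    # equal the desired outputs sigma_mu exactly (patterns assumed independent).
    w = np.linalg.pinv(xi) @ sigma

    fields = xi @ w
    print(np.allclose(fields, sigma))         # internal fields match the targets
    print(np.all(np.sign(fields) == sigma))   # every pattern is classified correctly
    ```

    In this construction the internal fields coincide with the desired outputs, which also covers solution type (1); the paper's contribution is reaching such solutions with iterative training times that do not grow with N even for biased patterns.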