
    Improving neural net convergence

    This paper shows an equivalence between the 3-layer feed-forward back-propagation neural net of Rumelhart et al. and the committee net of Nilsson, and uses it to improve the performance of the former. It is found that (a) the number of epochs required is reduced by a factor of between 6 and 10, (b) the time taken is reduced by a factor of about 20, and (c) the net converges under conditions in which the back-propagation algorithm becomes trapped in local minima and fails to converge.
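
    The abstract does not describe the construction itself, but the structural correspondence it relies on can be illustrated: Nilsson's committee machine is a layer of hard-threshold units whose outputs are combined by a majority vote, while the 3-layer net of Rumelhart et al. replaces the hard thresholds with sigmoids so the whole net can be trained by back-propagation. The sketch below (plain NumPy, with illustrative weights that are not taken from the paper) contrasts the two forward passes; it is an assumption-laden illustration, not the paper's method.

    import numpy as np

    def committee_forward(x, W, b):
        # Nilsson-style committee machine: hard-threshold hidden units,
        # output is the sign of the majority vote (equal output weights).
        votes = np.sign(W @ x + b)      # each committee member votes -1 or +1
        return np.sign(votes.sum())     # majority decision

    def mlp_forward(x, W1, b1, w2, b2):
        # 3-layer feed-forward net of Rumelhart et al.: the hard thresholds
        # are replaced by sigmoids, making the net differentiable and hence
        # trainable by back-propagation.
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        h = sigmoid(W1 @ x + b1)        # hidden layer: soft "votes"
        return sigmoid(w2 @ h + b2)     # output unit combines the votes

    # Illustrative weights (hypothetical, not from the paper): 2 inputs, 3 hidden units.
    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(3, 2)); b1 = rng.normal(size=3)
    w2 = rng.normal(size=3);      b2 = rng.normal()

    x = np.array([1.0, -1.0])
    print(committee_forward(x, W1, b1))    # hard majority vote: -1 or +1
    print(mlp_forward(x, W1, b1, w2, b2))  # smooth analogue, a value in (0, 1)

    The point of the comparison is only that the two architectures share the same wiring, which is the equivalence the abstract says is exploited to speed up convergence of the back-propagation net.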