2 research outputs found

    A novel strategy for speeding up training of the back propagation algorithm via dynamic adaptive weight training in artificial neural networks

    The drawbacks of the back propagation (BP) algorithm are slow training, easy convergence to local minima, and training saturation. To overcome these problems, we created a new dynamic function for each of the training rate and the momentum term. In this study, we present the BPDRM algorithm, which trains with a dynamic training rate and a dynamic momentum term. We also propose a new strategy, consisting of multiple steps, to avoid inflation in the gross weight when the training rate and the momentum term are each added as a dynamic function. In this strategy, the fit is achieved by establishing a relationship between the dynamic training rate and the dynamic momentum: an implicit dynamic momentum term is placed inside the dynamic training rate, with α_dmic = f(1/η_dmic). This procedure keeps the weights as moderate as possible (neither too small nor too large). The 2-dimensional XOR problem and the BUPA dataset were used as benchmarks for testing the effects of the new strategy. All experiments were performed in MATLAB (R2012a). The experimental results show that the dynamic BPDRM algorithm achieves superior training performance and trains faster than the BP algorithm at the same error limit.
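
    As a rough illustration of the update rule the abstract describes, here is a minimal NumPy sketch of back propagation on the 2-dimensional XOR problem with a dynamic training rate η_dmic and a momentum term coupled as α_dmic = f(1/η_dmic). The abstract does not give the paper's actual dynamic functions, so the specific forms of eta and alpha below (an error-driven rate and a reciprocal coupling) are placeholder assumptions, not the authors' method.

    ```python
    import numpy as np

    # Sketch only: BP on XOR with a dynamic training rate (eta) and an
    # implicit dynamic momentum alpha = f(1/eta). The concrete dynamic
    # functions are assumptions; the paper's own forms are not given here.

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 2-2-1 network
    W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros((1, 2))
    W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros((1, 1))
    dW1 = np.zeros_like(W1); dW2 = np.zeros_like(W2)  # previous updates (momentum)

    for epoch in range(10000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        err = out - y
        mse = float(np.mean(err ** 2))
        if mse < 1e-3:  # fixed error limit, same for all runs
            break

        # backward pass (standard BP gradients for sigmoid units)
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        g2 = h.T @ d_out
        g1 = X.T @ d_h

        # assumed dynamic training rate: grows as the error shrinks
        eta = 0.5 + 1.0 / (1.0 + mse)
        # implicit dynamic momentum coupled to the rate, alpha = f(1/eta);
        # the coupling bounds eta * alpha so the weights stay moderate
        alpha = 1.0 / (1.0 + eta)

        dW2 = -eta * g2 + alpha * dW2
        dW1 = -eta * g1 + alpha * dW1
        W2 += dW2; W1 += dW1
        b2 -= eta * d_out.sum(axis=0, keepdims=True)
        b1 -= eta * d_h.sum(axis=0, keepdims=True)
    ```

    The design point the sketch tries to capture is that the rate and the momentum are not tuned independently: because alpha falls as eta rises, their combined contribution to each weight update remains bounded, which is the stated motivation for avoiding inflation in the gross weight.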

    Multi-layer neural networks with improved learning algorithms

    No full text