    Performance Analysis of a Pipelined Backpropagation Parallel Algorithm

    The supervised training of feedforward neural networks is often based on the error backpropagation algorithm. Our main purpose is to treat the successive layers of a feedforward neural network as the stages of a pipeline, which is used to improve the efficiency of the parallel algorithm. A simple placement rule is presented that exploits the simultaneous execution of the calculations on each layer of the network. The analytic expressions show that the parallelization is efficient. Moreover, they indicate that the performance of this implementation is almost independent of the neural network architecture. Their simplicity allows easy prediction of learning performance on a parallel machine for any neural network architecture. The experimental results are in agreement with the analytical estimates.
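    The pipelining idea above can be illustrated with a minimal scheduling sketch: with L layer stages, a training sample entering stage 0 at step b reaches stage i at step b + i, so once the pipeline is full, all layers compute simultaneously on different samples. This is an illustrative sketch only, not the paper's implementation; all names are assumptions.

    ```python
    # Illustrative layer-pipeline schedule: stage i processes sample b at
    # time step b + i. Once the pipeline fills, all stages run in parallel.

    def pipeline_schedule(num_layers, num_samples):
        """Return, for each time step, the active (layer, sample) pairs."""
        steps = []
        for t in range(num_samples + num_layers - 1):
            active = [(layer, t - layer)
                      for layer in range(num_layers)
                      if 0 <= t - layer < num_samples]
            steps.append(active)
        return steps

    schedule = pipeline_schedule(num_layers=3, num_samples=4)
    # At step 2 the pipeline is full: all 3 stages work on distinct samples.
    print(schedule[2])  # → [(0, 2), (1, 1), (2, 0)]
    ```

    The schedule makes the efficiency claim concrete: the fill and drain phases cost only num_layers - 1 extra steps, which becomes negligible as the number of samples grows, largely independent of how the layers are sized.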