
    General Mapping of Feed-Forward Neural Networks onto an MIMD Computer

    This paper describes a scheme for mapping the back-propagation algorithm onto an MIMD computer with a 2D-torus network. We propose a new strategy that allows arbitrary assignment of processors to the multiple degrees of back-propagation parallelism (training-set parallelism, pipelining, and node parallelism). The method therefore yields a flexible mapping that suits a variety of neural network applications. Moreover, we consider the effect of the weight update interval on the number of iterations required for convergence. Results from implementations on a Fujitsu AP1000 show that it can be beneficial to choose a mapping that involves contention in the communication network. Further, even though parallel implementations need more iterations to converge than a serial program, parallel processing can still be a means of achieving considerable speedup.

    1 Introduction

    Parallel processing is mandatory to reduce the long training time of neural network learning algorithms. I..
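    The paper's implementation is not reproduced here, but the effect of the weight update interval it studies can be illustrated with a minimal sketch: gradients are accumulated over a configurable number of training examples before the weights are applied, so `update_interval=1` corresponds to per-example (online) updates and `update_interval=len(data)` to full-batch updates. The toy logistic unit, the AND-gate data, and all function names below are illustrative assumptions, not the paper's network or API.

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    def train(data, update_interval, epochs=200, lr=0.5):
        """Train a single logistic unit, applying accumulated gradients
        only every `update_interval` examples (the weight update
        interval); a toy illustration, not the paper's implementation."""
        w = [0.0, 0.0]
        b = 0.0
        gw = [0.0, 0.0]
        gb = 0.0
        seen = 0  # examples accumulated since the last weight update
        for _ in range(epochs):
            for x, y in data:
                p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
                err = p - y  # gradient of cross-entropy loss w.r.t. pre-activation
                gw[0] += err * x[0]
                gw[1] += err * x[1]
                gb += err
                seen += 1
                if seen == update_interval:
                    # Apply the averaged accumulated gradient, then reset.
                    w[0] -= lr * gw[0] / seen
                    w[1] -= lr * gw[1] / seen
                    b -= lr * gb / seen
                    gw = [0.0, 0.0]
                    gb = 0.0
                    seen = 0
        return w, b

    def loss(data, w, b):
        """Mean cross-entropy of the trained unit on `data`."""
        total = 0.0
        for x, y in data:
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            total -= y * math.log(p) + (1 - y) * math.log(1 - p)
        return total / len(data)

    # Toy AND-gate training set (assumed for illustration).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w_online, b_online = train(data, update_interval=1)
    w_batch, b_batch = train(data, update_interval=len(data))
    ```

    Comparing the two runs mirrors the trade-off the abstract describes: a longer update interval reduces how often weights must be synchronized (cheaper in a parallel setting) but can change how many iterations convergence takes.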