    Parallel perceptron learning on a single-channel broadcast communication model

    Abstract: A parallel perceptron learning algorithm based on a single-channel broadcast communication model is proposed. Because it processes training instances in parallel, rather than one by one as in the conventional algorithm, a large speedup can be expected. Theoretical analysis shows that with n processors, the average speedup ranges from O(log n) to O(n) under a variety of assumptions (where n is the number of training instances). Experimental results further show that the actual average speedup is approximately O(n^0.91 / log n). Extensions to a bounded number of processors and to backpropagation learning are also discussed.
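
    The sketch below illustrates one plausible reading of the scheme described in the abstract: each of n "processors" holds a single training instance, all processors test the shared weights in parallel, and one misclassified processor broadcasts its instance over the single channel so that every processor applies the same update. The function name parallel_perceptron_sketch, the +1/-1 label convention, the contention-resolution rule, and the learning-rate parameter are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np

    def parallel_perceptron_sketch(X, y, max_rounds=1000, lr=1.0):
        """Simulated parallel perceptron on a single-channel broadcast model.

        Each of the n 'processors' holds one training instance (X[i], y[i]).
        Per round, every processor tests its instance against the shared
        weights in parallel (vectorized here); one misclassified processor
        wins the channel and broadcasts its instance, and all processors
        apply the identical update. Labels y are +1/-1.
        """
        n, d = X.shape
        w = np.zeros(d)
        b = 0.0
        for _ in range(max_rounds):
            # Parallel test phase: every processor checks its own instance.
            margins = y * (X @ w + b)
            misclassified = np.flatnonzero(margins <= 0)
            if misclassified.size == 0:
                return w, b  # all processors report success; training done
            # Contention resolution: one misclassified processor gets the channel.
            i = misclassified[0]
            # Broadcast phase: all processors apply the same perceptron update.
            w += lr * y[i] * X[i]
            b += lr * y[i]
        return w, b

    # Usage on a small linearly separable toy set.
    X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    y = np.array([1, 1, -1, -1])
    w, b = parallel_perceptron_sketch(X, y)
    print(w, b, np.sign(X @ w + b))
    ```

    Since only one update is broadcast per round, the work per round is the parallel test plus a single shared update, which is where a speedup over the sequential one-instance-at-a-time perceptron would come from.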