Abstract — Artificial neural networks are an attempt to emulate the learning behaviour of the human nervous system. However, a vital difference persists: the nervous system is highly parallel, while computer processing units remain largely sequential. Here an attempt is made to bridge that gap with the help of Graphics Processing Units (GPUs), which are designed to be highly parallel. In particular, back-propagation networks, which use supervised learning, are considered. The back-propagation algorithm, having no data dependencies, is embarrassingly parallel, and hence only a fully parallel system can exploit it completely. However, it has also been observed that GPUs underperform when significant computational overhead is incurred or when the algorithm is not sufficiently parallel. Index Terms — feed-forward neural networks, graphics processing units, back-propagation networks, data-parallelism
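As an illustrative sketch only (not the paper's implementation), the data-parallel core of back-propagation can be expressed as batched matrix operations; all sizes and names below are hypothetical. Each element of these matrix products is computed independently, which is precisely the kind of work a GPU executes concurrently. The sketch uses NumPy on the CPU to show the structure of one gradient step for a two-layer network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: a batch of 8 samples, 4 inputs, 5 hidden units, 3 outputs.
X = rng.standard_normal((8, 4))          # input batch
T = rng.standard_normal((8, 3))          # target outputs
W1 = rng.standard_normal((4, 5)) * 0.1   # input-to-hidden weights
W2 = rng.standard_normal((5, 3)) * 0.1   # hidden-to-output weights
lr = 0.1                                 # learning rate

# Forward pass: every sample in the batch is processed at once as a
# matrix product; each output element is an independent dot product.
H = sigmoid(X @ W1)
Y = H @ W2

# Backward pass: propagate the output error back through the layers.
dY = (Y - T) / len(X)                    # gradient of mean squared error
dW2 = H.T @ dY
dH = (dY @ W2.T) * H * (1 - H)           # chain rule through the sigmoid
dW1 = X.T @ dH

# Gradient-descent update of both weight matrices.
W1 -= lr * dW1
W2 -= lr * dW2
```

On a GPU the same structure applies, but the per-element multiplies and adds inside each matrix product are spread across thousands of threads, which is why the batch dimension makes the workload embarrassingly parallel.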