
Using On-Line Arithmetic and Reconfiguration for Neuroprocessor Implementation

By Jean-Luc Beuchat and Eduardo Sanchez


Abstract. Artificial neural networks can solve complex problems such as time series prediction, handwritten pattern recognition, or speech processing. Although software simulations are essential when studying a new algorithm, they cannot always meet the real-time constraints of practical applications, so hardware implementations are crucially important. Fast reconfigurable FPGA circuits open new paths for the design of neuroprocessors: a learning algorithm is divided into steps, each associated with a specific FPGA configuration, and the training process then alternates computing and reconfiguration stages. This method leads to an optimal use of hardware resources. The paradigm is applied to the design of a neuroprocessor implementing multilayer perceptrons with on-chip training and pruning; all arithmetic operations are carried out by on-line operators. We also describe the principles of the hardware architecture, focusing in particular on the pruning mechanisms.
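The staging paradigm described in the abstract — one FPGA configuration per phase of the learning algorithm, with training alternating computation and reconfiguration — can be mimicked in software as a toy sketch. Everything below (the `StagedNeuron` class, the `stage_*` method names, the magnitude-based pruning threshold) is an illustrative assumption of this editor, not the authors' actual design; in particular, the paper's specific pruning mechanism and its on-line (digit-serial) arithmetic operators are not reproduced here.

```python
import math
import random

class StagedNeuron:
    """Toy sigmoid neuron whose training step is split into stages,
    each stage standing in for one FPGA configuration in the abstract's
    compute/reconfigure paradigm. Names are illustrative, not the paper's."""

    def __init__(self, n_inputs, lr=0.5, seed=42):
        rng = random.Random(seed)
        self.w = [rng.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.b = 0.0
        self.lr = lr

    # Stage 1: forward pass ("forward configuration" loaded on the FPGA)
    def stage_forward(self, x):
        self.x = x
        z = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        self.y = 1.0 / (1.0 + math.exp(-z))
        return self.y

    # Stage 2: gradient computation ("backward configuration")
    def stage_backward(self, target):
        delta = (self.y - target) * self.y * (1.0 - self.y)
        self.grad_w = [delta * xi for xi in self.x]
        self.grad_b = delta

    # Stage 3: weight update ("update configuration")
    def stage_update(self):
        self.w = [wi - self.lr * g for wi, g in zip(self.w, self.grad_w)]
        self.b -= self.lr * self.grad_b

    # Stage 4: pruning — here, simple magnitude pruning that zeroes
    # connections whose weight stays near zero (an assumed mechanism)
    def stage_prune(self, threshold=0.05):
        self.w = [0.0 if abs(wi) < threshold else wi for wi in self.w]

# Toy task: learn logical AND of the first two inputs; the third input
# carries no information, so its connection is a pruning candidate.
data = [((0, 0, 0), 0), ((0, 1, 1), 0), ((1, 0, 1), 0), ((1, 1, 0), 1)]
neuron = StagedNeuron(3)
for epoch in range(2000):
    for x, t in data:
        neuron.stage_forward(x)   # "load" forward configuration, compute
        neuron.stage_backward(t)  # reconfigure for gradient computation
        neuron.stage_update()     # reconfigure for weight update
neuron.stage_prune()              # final pruning configuration
```

In the hardware version described by the abstract, each `stage_*` call would correspond to loading a distinct FPGA configuration, so only the circuitry for the current phase occupies the chip at any time — the source of the claimed optimal resource use.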

Year: 2013
OAI identifier: oai:CiteSeerX.psu:
Provided by: CiteSeerX
Download PDF: the full text is not available from this repository; it may be found at the following location(s):
  • http://www.cipher.risk.tsukuba... (external link)
