2 research outputs found

    An FPGA Implementation of Kak's Instantaneously-Trained, Fast-Classification Neural Networks

    Motivated by a biologically plausible short-term memory sketchpad, Kak's Fast Classification (FC) neural networks are instantaneously trained using a prescriptive training scheme: both the weights and the topology of an FC network are specified with only two presentations of the training samples. Compared with iterative learning algorithms such as backpropagation (which may require many thousands of presentations of the training data), training of FC networks is extremely fast, and learning convergence is always guaranteed. FC networks are therefore well suited to applications that need real-time classification and adaptive filtering. In this paper we show that FC networks are "hardware friendly" for implementation on FPGAs: their unique prescriptive learning scheme can be integrated with the hardware design of the FC network through parameterization and compile-time constant folding.
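The prescriptive scheme the abstract describes can be sketched in a few lines. The following is a minimal Python illustration of CC4-style corner-classification training (an assumption about which of Kak's prescriptive schemes is meant; function names and the XOR demo are ours, not the paper's): one hidden neuron is written down per training sample, with no iterative weight updates.

```python
# Sketch of a CC4-style prescriptive training rule (illustrative; not the
# paper's exact FPGA mapping): one hidden neuron per training sample, whose
# weights are read directly off the sample bits -- no iteration at all.

def train_cc4(samples, labels, r=0):
    """Prescribe hidden-layer weights from binary training samples.

    Each hidden neuron gets weight +1 where its sample bit is 1 and -1 where
    it is 0, plus a bias of r - s + 1 (s = number of 1 bits), so the neuron
    fires exactly when the input lies within Hamming distance r of its sample.
    """
    hidden = []
    for x, y in zip(samples, labels):
        s = sum(x)
        w = [1 if b else -1 for b in x]
        hidden.append((w, r - s + 1, y))
    return hidden

def classify_cc4(hidden, x):
    """Binary-step hidden layer; output weights are +1/-1 by class label."""
    score = 0
    for w, bias, y in hidden:
        fired = sum(wi * xi for wi, xi in zip(w, x)) + bias > 0
        if fired:
            score += 1 if y == 1 else -1
    return 1 if score > 0 else 0

# "Training" is a single pass over the data: the full XOR truth table, r = 0.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]
net = train_cc4(samples, labels)
print([classify_cc4(net, x) for x in samples])  # [0, 1, 1, 0]
```

Because every weight is a small signed constant fixed at training time, the whole network can be baked into hardware as constants, which is what makes compile-time constant folding applicable on an FPGA.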

    Implementation of Block-based Neural Networks on Reconfigurable Computing Platforms

    Block-based Neural Networks (BbNNs) provide a flexible, modular architecture that supports adaptive applications in dynamic environments. Reconfigurable computing (RC) platforms combine computational efficiency with flexibility, making RC an ideal match for evolvable BbNN applications. BbNNs are very convenient to build once a library of neural network blocks exists; this library-based approach automates the implementation of BbNNs and the evaluation of their performance on RC platforms. This matters because a given application may have hundreds to thousands of candidate BbNN implementations, and evaluating each of them for accuracy and performance through software simulation would take far too long for adaptive environments. This thesis focuses on the development and characterization of a library of parameterized VHDL models of neural network blocks, which may be used to build any BbNN. The use of these models is demonstrated on the XOR pattern classification problem and a mobile robot navigation problem. Once the weights and architecture of a BbNN are decided for a given application, one may wish to fabricate an ASIC; pointers to ASIC implementation of BbNNs, with initial results, are also included in this thesis.
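To make the library-based idea concrete, here is a minimal Python sketch (not VHDL, and not the thesis's actual models) of a parameterized block: each block is assumed to be a weighted-sum unit with a hard threshold, and blocks are wired together for the XOR demonstration mentioned above. The `Block` class, its parameters, and the specific wiring are our illustrative assumptions.

```python
# Toy model of a parameterized neural-network "block": the parameters
# (weights and bias) play the role of the generics a parameterized VHDL
# model would expose; composition plays the role of wiring blocks together.

class Block:
    """A parameterized block: weights, a bias, and a hard-threshold output."""

    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def __call__(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if total > 0 else 0

# Wire three blocks into the classic two-layer XOR network:
# h_or computes x1 OR x2, h_and computes x1 AND x2,
# and the output block computes OR AND (NOT AND).
h_or = Block([1.0, 1.0], -0.5)
h_and = Block([1.0, 1.0], -1.5)
out = Block([1.0, -1.0], -0.5)

def bbnn_xor(x1, x2):
    return out([h_or([x1, x2]), h_and([x1, x2])])

print([bbnn_xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```

In the thesis's setting, each candidate BbNN would correspond to a different choice of block parameters and wiring, which is why a reusable parameterized library makes it practical to generate and evaluate many candidates on an RC platform.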