A Reconfigurable Mixed-signal Implementation of a Neuromorphic ADC
We present a neuromorphic Analogue-to-Digital Converter (ADC), which uses
integrate-and-fire (I&F) neurons as the encoders of the analogue signal, with
modulated inhibition to decohere the neuronal spike trains. The architecture
consists of an analogue chip and a control module. The analogue chip comprises
two scan chains and a two-dimensional integrate-and-fire neuronal array.
Individual neurons are accessed via the chains one by one, without any encoder,
decoder, or arbiter. The control module is implemented on an FPGA (Field
Programmable Gate Array), which sends scan enable signals to the scan chains
and controls the inhibition for individual neurons. Since the control module is
implemented on an FPGA, it can be easily reconfigured. Additionally, we propose
a pulse width modulation methodology for the lateral inhibition, which makes
use of different pulse widths indicating different strengths of inhibition for
each individual neuron to decohere neuronal spikes. Software simulations in
this paper test the robustness of the proposed ADC architecture to fixed
random noise, and a circuit simulation of ten neurons demonstrates the
performance and feasibility of the architecture.
Comment: BioCAS-201
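The encoding scheme above can be sketched in software. The following is a minimal, illustrative model only — not the authors' analogue circuit or FPGA control module — in which each integrate-and-fire neuron accumulates the input and, after spiking, is held inhibited for a neuron-specific number of steps, modelling the pulse-width-modulated inhibition that staggers (decoheres) the spike trains. The inhibition widths are hypothetical.

```python
import numpy as np

def encode(signal, n_neurons=10, threshold=1.0, inhib_widths=None):
    """Integrate-and-fire encoding of an analogue signal.

    Each neuron integrates the input; on crossing `threshold` it spikes and
    resets. `inhib_widths[i]` (in time steps) models PWM inhibition: after a
    spike, neuron i stays inhibited for that many steps, so neurons with
    different widths drift out of lockstep.
    """
    if inhib_widths is None:
        inhib_widths = np.arange(1, n_neurons + 1)  # hypothetical widths
    v = np.zeros(n_neurons)                  # membrane potentials
    inhibited = np.zeros(n_neurons, dtype=int)
    spikes = np.zeros((len(signal), n_neurons), dtype=int)
    for t, x in enumerate(signal):
        inhibited = np.maximum(inhibited - 1, 0)   # inhibition decays
        active = inhibited == 0
        v[active] += x                       # integrate the input
        fired = active & (v >= threshold)
        spikes[t, fired] = 1
        v[fired] = 0.0                       # reset on spike
        inhibited[fired] = inhib_widths[fired]     # PWM-style inhibition
    return spikes
```

Driving this with a constant input shows neurons with different inhibition widths producing spike trains of different periods, i.e. decohered outputs.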
NengoFPGA: an FPGA Backend for the Nengo Neural Simulator
Low-power, high-speed neural networks are critical for providing deployable embedded AI
applications at the edge. We describe a Xilinx FPGA implementation of Neural Engineering
Framework (NEF) networks with online learning that outperforms mobile Nvidia GPU
implementations by an order of magnitude or more. Specifically, we provide an embedded
Python-capable PYNQ FPGA implementation supported with a Xilinx Vivado High-Level
Synthesis (HLS) workflow that allows sub-millisecond implementation of adaptive neural
networks with low-latency, direct I/O access to the physical world. The outcome of this
work is NengoFPGA, a seamless and user-friendly extension to the neural compiler Python
package Nengo. To reduce memory requirements and improve performance we tune the
precision of the different intermediate variables in the code to achieve competitive absolute
accuracy against slower and larger floating-point reference designs. The online learning
component of the neural network exploits immediate feedback to adjust the network weights
to best support a given arithmetic precision. As the space of possible design configurations
of such quantized networks is vast and is subject to a target accuracy constraint, we use
the Hyperopt hyper-parameter tuning tool instead of manual search to find Pareto optimal
designs. Specifically, we are able to generate the optimized designs in under 500 short
iterations of Vivado HLS C synthesis before running the complete Vivado place-and-route
phase on that subset, a much longer process not conducive to rapid exploration. For neural
network populations of 64–4096 neurons and 1–8 representational dimensions our optimized
FPGA implementation generated by Hyperopt has a speedup of 10–484× over a competing
cuBLAS implementation on the Jetson TX1 GPU while using 2.4–9.5× less power. Our
speedups are a result of HLS-specific reformulation (15× improvement), precision adaptation
(3× improvement), and low-latency direct I/O access (1000× improvement).
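The precision-tuning idea — shrinking fixed-point word lengths while checking absolute accuracy against a floating-point reference — can be illustrated with a small sketch. This is illustrative only: the paper's actual flow searches Vivado HLS design configurations with Hyperopt, and the weights and sizes below are hypothetical.

```python
import numpy as np

def quantize(x, frac_bits):
    """Round values onto a fixed-point grid with `frac_bits` fractional bits."""
    scale = 2.0 ** frac_bits
    return np.round(x * scale) / scale

def rmse_vs_float(weights, inputs, frac_bits):
    """RMSE of a quantized matrix-vector product against the float reference."""
    ref = weights @ inputs
    quant = quantize(weights, frac_bits) @ quantize(inputs, frac_bits)
    return float(np.sqrt(np.mean((ref - quant) ** 2)))

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))   # hypothetical decoder weight matrix
x = rng.standard_normal(8)         # hypothetical input vector
# Sweep the fractional precision: fewer bits cost accuracy, more bits
# cost memory and FPGA resources, which is the trade-off being tuned.
errs = {bits: rmse_vs_float(W, x, bits) for bits in (4, 8, 12)}
```

Each additional fractional bit halves the quantization step, so the error against the float reference shrinks as precision grows — the Pareto frontier the paper explores trades this accuracy against hardware cost.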
A compact neural core for digital implementation of the Neural Engineering Framework
The Neural Engineering Framework (NEF) is a tool that is capable of synthesising large-scale cognitive systems from subnetworks, and it has been used to construct SPAUN, the first brain model capable of performing cognitive tasks. It has been implemented on computers using high-level programming languages. However, the software model runs much slower than real time, and is therefore not suitable for applications that need real-time control, such as interactive robotic systems. Here we present a compact neural core for digital implementation of the NEF on Field Programmable Gate Arrays (FPGAs) in real time. The proposed digital neural core consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. As the NEF intrinsically uses a spike rate-encoding paradigm, rather than implementing spiking neurons and then measuring their firing rates, we chose to implement the NEF with neurons that compute their firing rate directly. The neuron is efficiently implemented using a 9-bit fixed-point multiplier and requires no memory, memory bandwidth being the bottleneck of the time-multiplexing approach. The neural core uses only a fraction of the hardware resources of a commercial off-the-shelf FPGA (even an entry-level one) and can be easily programmed for different mathematical computations. Multiple cores can easily be combined to build real-time, large-scale cognitive neural networks using the Neural Engineering Framework.