48 research outputs found

    Optimizing the energy consumption of spiking neural networks for neuromorphic applications

    In the last few years, spiking neural networks (SNNs) have been demonstrated to perform on par with regular convolutional neural networks (CNNs). Several works have proposed methods to convert a pre-trained CNN to a spiking CNN without a significant sacrifice of performance. We first demonstrate that quantization-aware training of CNNs leads to better accuracy in the resulting SNNs. One of the benefits of converting CNNs to spiking CNNs is to leverage the sparse computation of SNNs and consequently to perform equivalent computation at lower energy consumption. Here we propose an efficient optimization strategy to train spiking networks at lower energy consumption while maintaining similar accuracy levels. We demonstrate results on the MNIST-DVS and CIFAR-10 datasets.
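    As a rough sketch of the kind of objective such a strategy suggests (the abstract does not give the actual loss; the penalty form, the name energy_aware_loss, and the weighting are assumptions), a spike-activity penalty can be added to the task loss so that accuracy is traded off against the number of synaptic events, a proxy for energy:

        # Hedged sketch, not the authors' published method: task loss plus an
        # L1 penalty on total spike activity (a proxy for energy per inference).
        import torch

        def energy_aware_loss(logits, targets, spike_counts, energy_weight=1e-4):
            task_loss = torch.nn.functional.cross_entropy(logits, targets)
            # Penalizing the summed spike count drives the network toward
            # sparser activity and thus fewer synaptic operations.
            return task_loss + energy_weight * spike_counts.sum()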

    Hardware-efficient on-line learning through pipelined truncated-error backpropagation in binary-state networks

    Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined backpropagation. Learning is performed in parallel with inference in the forward pass, removing the need for an explicit backward pass and requiring no extra weight lookup. By using binary state variables in the feedforward network and ternary errors in truncated-error backpropagation, the need for any multiplications in the forward and backward passes is removed, and the memory requirements for the pipelining are drastically reduced. A further reduction in addition operations, owing to the sparsity in the forward neural and backpropagating error signal paths, contributes to a highly efficient hardware implementation. For proof-of-concept validation, we demonstrate on-line learning of MNIST handwritten digit classification on a Spartan-6 FPGA interfacing with an external 1 Gb DDR2 DRAM, which shows a small degradation in test error performance compared to an equivalently sized binary ANN trained off-line using standard backpropagation and exact errors. Our results highlight an attractive synergy between pipelined backpropagation and binary-state networks in substantially reducing computation and memory requirements, making pipelined on-line learning practical in deep networks. Comment: Now also considers 0/1 binary activations; memory access statistics reported.
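    The multiplication-free updates can be sketched as follows, assuming binary activations in {0, 1} and ternary errors in {-1, 0, +1} as the abstract describes; the function names, the truncation threshold, and the power-of-two learning rate are illustrative assumptions:

        # Hedged sketch of a multiplication-free weight update. In hardware the
        # outer product of a {0,1} vector and a {-1,0,+1} vector reduces to
        # add / subtract / skip per synapse; numpy multiplies only for brevity.
        import numpy as np

        def ternarize(error, threshold=0.05):
            # Truncate a real-valued error signal to {-1, 0, +1}.
            return np.sign(error) * (np.abs(error) > threshold)

        def weight_update(weights, pre_binary, err_ternary, lr_shift=6):
            delta = np.outer(pre_binary, err_ternary)
            # A power-of-two learning rate becomes an arithmetic shift in hardware.
            return weights + delta / (1 << lr_shift)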

    Event-driven spiking convolutional neural network

    The invention relates to an event-driven spiking convolutional neural network comprising a plurality of layers, wherein each layer comprises:
    - a kernel module configured to store and to process, in an event-driven fashion, the kernel values of at least one convolution kernel;
    - a neuron module configured to store and to process, in an event-driven fashion, the neuron states of the neurons of the network, and in particular to output the spike events generated by the updated neurons;
    - a memory mapper configured to determine the neurons to which an incoming spike event from a source layer projects by means of a convolution with the at least one convolution kernel, the neuron states of said determined neurons being updated with the applicable kernel values of that kernel, wherein the memory mapper is configured to process incoming spike events in an event-driven fashion.
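    As a hedged illustration of the memory-mapper idea (stride 1, no padding; route_spike, the array layout, and all names are assumptions, not the patent's implementation), one incoming spike can be routed to every output neuron whose receptive field covers it:

        # Hedged sketch: event-driven routing of a single input spike.
        import numpy as np

        def route_spike(state, kernel, x, y, c):
            # state:  (out_ch, H, W) membrane potentials of the target layer
            # kernel: (out_ch, in_ch, kh, kw) convolution weights
            out_ch, in_ch, kh, kw = kernel.shape
            _, H, W = state.shape
            for o in range(out_ch):
                for dy in range(kh):
                    for dx in range(kw):
                        oy, ox = y - dy, x - dx   # output position fed by this tap
                        if 0 <= oy < H and 0 <= ox < W:
                            # Add the matching kernel value to that neuron's state.
                            state[o, oy, ox] += kernel[o, c, dy, dx]
            return state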

    Event-driven integrated circuit having interface system

    The present invention relates to an event-driven integrated circuit comprising a sensor, an interface system and a processor, by means of which address-events are asynchronously generated and processed. The interface system comprises a replication module, a fusion module, a secondary sampling module, a region-of-interest module, an event routing module, etc., which constitute a programmable daisy chain. The sensor, the interface system and the processor are coupled on a single chip by means of an adapter board, so that the different bare dies can be manufactured using different processes. This solution eliminates the signal loss and noise interference of the prior art and achieves high-speed signal processing, a smaller chip footprint and reduced manufacturing costs, thereby solving the prior-art problems of large chip area and low signal-processing capability. In addition, the smart design of the interface system enriches its functions and configurability, providing advantages in terms of power consumption, functionality and speed in subsequent processing.
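    A toy sketch of the programmable daisy chain idea (behavior and names are illustrative guesses based only on the module names in the abstract): each stage consumes address-events and passes zero or more events to the next stage.

        # Hedged sketch: address-events as (x, y, t) tuples flowing through a
        # configurable chain of stages, as the abstract's module names suggest.
        def roi_filter(events, x0, y0, x1, y1):
            # Region-of-interest module: keep only events inside the window.
            return [e for e in events if x0 <= e[0] < x1 and y0 <= e[1] < y1]

        def subsample(events, factor):
            # Secondary sampling module: coarsen event addresses.
            return [(x // factor, y // factor, t) for (x, y, t) in events]

        def daisy_chain(events, stages):
            for stage in stages:
                events = stage(events)
            return events

        # Example: crop to a 32x32 region of interest, then downsample by 2.
        out = daisy_chain([(10, 12, 0), (40, 41, 1)],
                          [lambda ev: roi_filter(ev, 0, 0, 32, 32),
                           lambda ev: subsample(ev, 2)])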

    Adversarial attacks on spiking convolutional neural networks for event-based vision

    Event-based dynamic vision sensors provide very sparse output in the form of spikes, which makes them suitable for low-power applications. Convolutional spiking neural networks model such event-based data and develop their full energy-saving potential when deployed on asynchronous neuromorphic hardware. Event-based vision being a nascent field, the sensitivity of spiking neural networks to potentially malicious adversarial attacks has received little attention so far. We show how white-box adversarial attack algorithms can be adapted to the discrete and sparse nature of event-based visual data, and demonstrate smaller perturbation magnitudes at higher success rates than the current state-of-the-art algorithms. For the first time, we also verify the effectiveness of these perturbations directly on neuromorphic hardware. Finally, we discuss the properties of the resulting perturbations, the effect of adversarial training as a defense strategy, and future directions.
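    One way such an adaptation can look (a hedged sketch, not the paper's algorithm; the budget k and the name flip_k_events are assumptions) is to treat the event stream as a binary tensor and flip the entries whose gradient most increases the loss:

        # Hedged sketch of a discrete white-box perturbation under an event budget.
        import torch

        def flip_k_events(events, grad, k=10):
            # events: binary tensor of spikes; grad: d(loss)/d(events).
            # Inserting a spike (0 -> 1) helps when grad > 0; deleting one
            # (1 -> 0) helps when grad < 0.
            gain = torch.where(events > 0, -grad, grad)
            idx = torch.topk(gain.flatten(), k).indices
            adv = events.clone().flatten()
            adv[idx] = 1.0 - adv[idx]   # flip the k most damaging entries
            return adv.view_as(events)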

    Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity

    Spike-timing-dependent plasticity (STDP) incurs both causal and acausal synaptic weight updates, for negative and positive time differences between pre-synaptic and post-synaptic spike events. For realizing such updates in neuromorphic hardware, current implementations either require forward and reverse lookup access to the synaptic connectivity table, or rely on memory-intensive architectures such as crossbar arrays. We present a novel method for realizing both causal and acausal weight updates using only forward lookup access to the synaptic connectivity table, permitting a memory-efficient implementation. A simplified FPGA implementation, using a single timer variable for each neuron, closely approximates exact STDP cumulative weight updates for neuron refractory periods greater than 10 ms, and reduces to exact STDP for refractory periods greater than the STDP time window. Compared to a conventional crossbar implementation, the forward table-based implementation leads to substantial memory savings for sparsely connected networks, supporting scalable neuromorphic systems with fully reconfigurable synaptic connectivity and plasticity. Comment: Submitted to BioCAS 201
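    A toy sketch of the forward-only idea (illustrative names and a plain exponential pair rule; the paper's single-timer approximation differs): each presynaptic spike triggers one forward lookup that serves both the deferred causal update for the previous presynaptic spike and the acausal update for the current one. Here last_post is assumed to be maintained by the postsynaptic spike handler.

        # Hedged sketch: causal and acausal STDP from forward lookups only.
        import math

        def on_pre_spike(pre, t, fanout, weights, last_pre, last_post,
                         a_plus=0.01, a_minus=0.012, tau=20.0):
            for post in fanout[pre]:                  # forward lookup only
                if last_post[post] > last_pre[pre]:
                    # Post fired after the previous pre spike: deferred
                    # causal (pre-before-post) potentiation, applied now.
                    dt = last_post[post] - last_pre[pre]
                    weights[(pre, post)] += a_plus * math.exp(-dt / tau)
                # Post fired before the current pre spike: acausal depression.
                dt = t - last_post[post]
                weights[(pre, post)] -= a_minus * math.exp(-dt / tau)
            last_pre[pre] = t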