
    Analog Printed Spiking Neuromorphic Circuit

    Biologically inspired Spiking Neural Networks have emerged as a promising avenue for energy-efficient, high-performance neuromorphic computing. Driven by the demand for highly customized and cost-effective solutions in emerging application domains such as soft robotics, wearables, and IoT devices, Printed Electronics offers an alternative to traditional silicon technologies by leveraging soft materials and flexible substrates. In this paper, we propose an energy-efficient analog printed spiking neuromorphic circuit and a corresponding learning algorithm. Simulations on 13 benchmark datasets show an average power improvement of 3.86× at similar classification accuracy compared to previous works.
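
    The circuit and its learning algorithm are analog and printed, but the spiking dynamics they implement can be illustrated in software. Below is a minimal leaky integrate-and-fire neuron sketch; the function name and all parameter values are illustrative assumptions, not the paper's circuit parameters.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; returns spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Leaky integration of the membrane potential toward rest, plus input drive.
        v += dt / tau * (v_rest - v + i_in)
        if v >= v_thresh:              # threshold crossing: emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset the membrane after the spike
    return spike_times

# A constant supra-threshold drive produces a regular spike train.
print(lif_neuron(np.full(200, 1.5))[:5])
```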

    Training multi-layer spiking neural networks with plastic synaptic weights and delays

    Spiking neural networks are usually considered the third generation of neural networks; they hold the potential for ultra-low power consumption on dedicated hardware platforms and are well suited to temporal information processing. However, how to train spiking neural networks efficiently remains an open question, and most existing learning methods consider only the plasticity of synaptic weights. In this paper, we propose a new supervised learning algorithm for multi-layer spiking neural networks based on the typical SpikeProp method. In the proposed method, both the synaptic weights and the delays are treated as adjustable parameters to improve both the biological plausibility and the learning performance. In addition, the proposed method inherits the advantages of SpikeProp, making full use of the temporal information of spikes. Various experiments are conducted to verify the performance of the proposed method, and the results demonstrate that it achieves competitive learning performance compared with existing related works. Finally, the differences between the proposed method and existing mainstream multi-layer training algorithms are discussed.
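
    To make the idea of jointly plastic weights and delays concrete, the sketch below fits the weights and delays of a single spike-response-model neuron by gradient descent so that its potential reaches threshold at a target time. It is a toy illustration of the principle under assumed kernel and parameter values, not the paper's SpikeProp-based algorithm.

```python
import numpy as np

TAU = 5.0  # kernel time constant (arbitrary units)

def srm_kernel(s):
    """Spike-response kernel eps(s) = (s/TAU) * exp(1 - s/TAU) for s > 0."""
    return np.where(s > 0, (s / TAU) * np.exp(1.0 - s / TAU), 0.0)

def kernel_deriv(s):
    """d eps / d s, needed for the delay gradient."""
    return np.where(s > 0, np.exp(1.0 - s / TAU) * (1.0 - s / TAU) / TAU, 0.0)

# Toy setup: one output neuron, three presynaptic spikes, trainable w and d.
t_in = np.array([1.0, 2.0, 4.0])      # presynaptic spike times
w = np.array([0.5, 0.5, 0.5])         # synaptic weights
d = np.array([1.0, 1.0, 1.0])         # synaptic delays
t_target, theta, lr = 8.0, 1.0, 0.1   # desired firing time, threshold, step size

for _ in range(200):
    s = t_target - t_in - d                  # kernel arguments after delays
    v = np.sum(w * srm_kernel(s))            # membrane potential at the target time
    err = v - theta
    grad_w = err * srm_kernel(s)             # dE/dw_i = err * eps(s_i)
    grad_d = err * (-w * kernel_deriv(s))    # dE/dd_i = -err * w_i * eps'(s_i)
    w -= lr * grad_w
    d -= lr * grad_d

print(np.sum(w * srm_kernel(t_target - t_in - d)))  # ~= theta after training
```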

    Efficient Structure Slimming for Spiking Neural Networks

    Spiking neural networks (SNNs) are deeply inspired by biological neural information systems. Compared to convolutional neural networks (CNNs), SNNs consume less power because of their spike-based information processing mechanism. However, most current SNN structures are fully connected or converted from deep CNNs, which introduces redundant connections, whereas the structure and topology of the human brain are sparse and efficient. This paper aims to exploit the sparse structure and low power consumption found in the human brain and proposes efficient structure-slimming methods. Inspired by the development of biological neural network structures, it designs several structure-slimming techniques, including neuron pruning and channel pruning. Beyond pruning, the paper also accounts for the growth and development of the nervous system. Through iterative application of the proposed neuron pruning and rewiring algorithms, experimental evaluations on the CIFAR-10, CIFAR-100, and DVS-Gesture datasets demonstrate the effectiveness of the structure-slimming methods: when the parameter count is reduced to only about 10% of the original, performance drops by less than 1%.
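
    As a rough illustration of the pruning-and-rewiring cycle described above, the sketch below applies magnitude-based pruning to a weight matrix and then randomly regrows a small fraction of the removed connections. The criterion, fractions, and function name are assumptions for illustration, not the paper's method.

```python
import numpy as np

def prune_and_rewire(weights, prune_frac=0.9, rewire_frac=0.05, rng=None):
    """Magnitude-based pruning of a weight matrix followed by random regrowth
    ('rewiring') of a small fraction of the removed connections."""
    rng = rng if rng is not None else np.random.default_rng(0)
    mask = np.ones(weights.shape, dtype=bool)

    # Prune: zero out the smallest-magnitude connections.
    n_prune = int(prune_frac * weights.size)
    smallest = np.argsort(np.abs(weights), axis=None)[:n_prune]
    mask.flat[smallest] = False

    # Rewire: re-enable a few pruned connections with small random weights,
    # loosely mimicking growth in the developing nervous system.
    pruned = np.flatnonzero(~mask)
    regrow = rng.choice(pruned, size=int(rewire_frac * pruned.size), replace=False)
    mask.flat[regrow] = True

    weights = weights * mask
    weights.flat[regrow] = rng.normal(0.0, 0.01, size=regrow.size)
    return weights, mask

w, m = prune_and_rewire(np.random.default_rng(1).normal(size=(64, 64)))
print(f"remaining connections: {m.mean():.1%}")
```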

    Towards Neuromorphic Gradient Descent: Exact Gradients and Low-Variance Online Estimates for Spiking Neural Networks

    Spiking Neural Networks (SNNs) are biologically plausible models that can run on low-power non-von Neumann neuromorphic hardware, positioning them as promising alternatives to conventional Deep Neural Networks (DNNs) for energy-efficient edge computing and robotics. Over the past few years, the Gradient Descent (GD) and Error Backpropagation (BP) algorithms used in DNNs have inspired various training methods for SNNs. However, the non-local and reverse nature of BP, combined with the inherent non-differentiability of spikes, represents a fundamental obstacle to computing gradients with SNNs directly on neuromorphic hardware. Therefore, novel approaches are required to overcome the limitations of GD and BP and enable online gradient computation on neuromorphic hardware. In this thesis, I address the limitations of GD and BP with SNNs by proposing three algorithms. First, I extend a recent method that computes exact gradients with temporally coded SNNs by relaxing the firing constraint of temporal coding and allowing multiple spikes per neuron. My proposed method generalizes the computation of exact gradients with SNNs and improves the tradeoffs between performance and various other aspects of spiking neurons. Next, I introduce a novel alternative to BP that computes low-variance gradient estimates in a local and online manner. Compared to other alternatives to BP, the proposed method demonstrates an improved convergence rate and increased performance with DNNs. Finally, I combine these two methods and propose an algorithm that estimates gradients with SNNs in a manner that is compatible with the constraints of neuromorphic hardware. My empirical results demonstrate the effectiveness of the resulting algorithm in training SNNs without performing BP.

    FireFly: A High-Throughput and Reconfigurable Hardware Accelerator for Spiking Neural Networks

    Spiking neural networks (SNNs) have been widely used due to their strong biological interpretability and high energy efficiency. With the introduction of the backpropagation algorithm and surrogate gradients, the structure of spiking neural networks has become more complex, and the performance gap with artificial neural networks has gradually decreased. However, most SNN hardware implementations for field-programmable gate arrays (FPGAs) cannot meet arithmetic or memory efficiency requirements, which significantly restricts the development of SNNs. They either do not delve into the arithmetic operations between binary spikes and synaptic weights, or they assume unlimited on-chip RAM resources by using overly expensive devices on small tasks. To improve arithmetic efficiency, we analyze the neural dynamics of spiking neurons, generalize the SNN arithmetic operation to the multiplex-accumulate operation, and propose a high-performance implementation of this operation by utilizing the DSP48E2 hard block in Xilinx Ultrascale FPGAs. To improve memory efficiency, we design a memory system that enables efficient synaptic weight and membrane voltage memory access with reasonable on-chip RAM consumption. Combining these two improvements, we propose an FPGA accelerator that can process spikes generated by firing neurons on the fly (FireFly). FireFly is implemented on several FPGA edge devices with limited resources yet still achieves a peak performance of 5.53 TSOP/s at 300 MHz. As a lightweight accelerator, FireFly achieves the highest computational density efficiency compared with existing research using large FPGA devices.
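
    The arithmetic observation behind the multiplex-accumulate operation is that, with binary spikes, a multiply-accumulate degenerates into selecting and summing weights, so no multiplier is needed. The sketch below illustrates this equivalence in software; it is a conceptual illustration, not the DSP48E2 mapping itself.

```python
import numpy as np

rng = np.random.default_rng(0)
spikes = rng.integers(0, 2, size=128)          # binary spike vector
weights = rng.integers(-128, 128, size=128)    # int8-range synaptic weights

# Conventional multiply-accumulate.
mac = int(np.dot(spikes, weights))

# Multiplex-accumulate: each spike simply selects (multiplexes) its weight
# into the accumulator, so no hardware multiplier is required.
mux_acc = int(weights[spikes == 1].sum())

assert mac == mux_acc
print(mac, mux_acc)
```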

    On the path integration system of insects: there and back again

    Navigation is an essential capability of animate organisms and robots. Among animate organisms, insects are of particular interest because they are capable of a variety of navigation competencies, solving challenging problems with limited resources and thereby providing inspiration for robot navigation. Ants, bees and other insects are able to return to their nest using a navigation strategy known as path integration. During path integration, the animal maintains a running estimate of the distance and direction to its nest as it travels. This estimate, known as the 'home vector', enables the animal to return to its nest. Path integration was also the technique sea navigators used to cross the open seas in the past. To perform path integration, both sailors and insects need access to two pieces of information: their direction and their speed of motion over time. Neurons encoding heading and speed have been found to converge on a highly conserved region of the insect brain, the central complex, which is therefore believed to be key to the computations underlying path integration. However, several questions remain about the exact structure of the neuronal circuit that tracks the animal's heading, how it differs between insect species, and how speed and direction are integrated into a home vector and maintained in memory. In this thesis, I have combined behavioural, anatomical, and physiological data with computational modelling and agent simulations to tackle these questions.

    Analysis of the internal compass circuit of two insect species with highly divergent ecologies, the fruit fly Drosophila melanogaster and the desert locust Schistocerca gregaria, revealed that despite 400 million years of evolutionary divergence, both species share a fundamentally common internal compass circuit that keeps track of the animal's heading. However, subtle differences in neuronal morphologies result in distinct circuit dynamics adapted to the ecology of each species, providing insights into how neural circuits evolved to accommodate species-specific behaviours.

    Fast-moving insects need to update their home vector memory continuously as they move, yet they can remember it for several hours. This conjunction of fast updating and long persistence does not map directly onto current short-, mid-, and long-term memory accounts. An extensive literature review revealed a lack of available memory models that could support the requirements of the home vector memory. A comparison of existing behavioural data with the homing behaviour of simulated robot agents illustrated that the prevalent hypothesis, which posits that the neural substrate of the path integration memory is a bump attractor network, is contradicted by behavioural evidence. An investigation of the type of memory utilised during path integration revealed that cold-induced anaesthesia disrupts the ability of ants to return to their nest but does not eliminate their ability to move in the correct homing direction. Using computational modelling and simulated agents, I argue that the best explanation for this phenomenon is not two separate memories differently affected by temperature, but a shared memory that encodes both the direction and the distance.

    The results presented in this thesis shed more light on the labyrinth that researchers of animal navigation have been exploring in their attempts to unravel a few more rounds of Ariadne's thread back to its origin. The findings provide valuable insights into the path integration system of insects, as well as inspiration for future memory research, for advancing path integration techniques in robotics, and for developing novel neuromorphic solutions to computational problems.
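
    The home-vector computation described above reduces to a running vector sum of the outbound displacement from heading and speed. The sketch below is a simplified geometric illustration under assumed names and units, not the neural circuit model analysed in the thesis.

```python
import numpy as np

def home_vector(headings, speeds, dt=1.0):
    """Integrate heading (radians) and speed samples into a home vector:
    the negative of the accumulated outbound displacement."""
    dx = np.sum(speeds * np.cos(headings)) * dt
    dy = np.sum(speeds * np.sin(headings)) * dt
    return np.array([-dx, -dy])  # points from the agent back to the nest

# Outbound leg: 10 steps east, then 10 steps north; home lies to the south-west.
headings = np.array([0.0] * 10 + [np.pi / 2] * 10)
speeds = np.ones(20)
print(home_vector(headings, speeds))   # ~= [-10, -10]
```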

    Spike timing reshapes robustness against attacks in spiking neural networks

    The success of deep learning in the past decade is partially shrouded in the shadow of adversarial attacks. In contrast, the brain is far more robust at complex cognitive tasks. Exploiting the fact that neurons in the brain communicate via spikes, spiking neural networks (SNNs) are emerging as a new type of neural network model, advancing both theoretical investigation and empirical application of artificial neural networks and deep learning. Neuroscience research suggests that the precise timing of neural spikes plays an important role in the information coding and sensory processing of the biological brain. However, the role of spike timing in SNNs has received little attention and is far from understood. Here we systematically explore the timing mechanism of spike coding in SNNs, focusing on the robustness of the system against various types of attacks. We find that SNNs achieve greater improvements in robustness when the coding principle of precise spike timing is used in neural encoding and decoding, facilitated by different learning rules. Our results suggest that spike-timing coding in SNNs can improve robustness against attacks, providing a new approach to reliable coding principles for developing next-generation brain-inspired deep learning.
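
    One widely used precise-timing scheme is time-to-first-spike (latency) coding, in which stronger inputs fire earlier. The sketch below shows such an encode/decode round trip as a generic illustration; the coding window and function names are assumptions, not the specific scheme studied in the paper.

```python
import numpy as np

T_MAX = 100.0  # coding window (ms)

def latency_encode(x):
    """Map intensities in [0, 1] to first-spike times: stronger inputs spike earlier."""
    return T_MAX * (1.0 - np.clip(x, 0.0, 1.0))

def latency_decode(spike_times):
    """Invert the code: recover intensity from the first-spike time."""
    return 1.0 - spike_times / T_MAX

pixels = np.array([0.9, 0.5, 0.1])
times = latency_encode(pixels)
print(times)                    # ~= [10, 50, 90]; bright pixels spike first
print(latency_decode(times))    # recovers the original intensities
```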

    Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient

    Spiking Neural Networks (SNNs) are recognized as a candidate for next-generation neural networks due to their bio-plausibility and energy efficiency. Recently, researchers have demonstrated that SNNs are able to achieve nearly state-of-the-art performance in image recognition tasks using surrogate gradient training. However, several essential questions about SNNs remain little studied: Do SNNs trained with surrogate gradients learn different representations from traditional Artificial Neural Networks (ANNs)? Does the time dimension in SNNs provide unique representation power? In this paper, we aim to answer these questions by conducting a representation similarity analysis between SNNs and ANNs using Centered Kernel Alignment (CKA). We start by analyzing the spatial dimension of the networks, including both the width and the depth. Furthermore, our analysis of residual connections shows that SNNs learn a periodic pattern, which rectifies the representations in SNNs to be ANN-like. We additionally investigate the effect of the time dimension on SNN representation, finding that deeper layers encourage more dynamics along the time dimension. We also investigate the impact of input data such as event-stream data and adversarial attacks. Our work uncovers a host of new findings about representations in SNNs. We hope this work will inspire future research to fully comprehend the representation power of SNNs. Code is released at https://github.com/Intelligent-Computing-Lab-Yale/SNNCKA. (Published in Transactions on Machine Learning Research, TMLR.)
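
    Centered Kernel Alignment has a compact linear form that makes the comparison concrete: representations are centered and compared via Frobenius norms of their cross-products. Below is a minimal linear-CKA sketch on synthetic data; the paper's analysis pipeline over SNN and ANN layers is more involved.

```python
import numpy as np

def linear_cka(x, y):
    """Linear CKA between two representations of shape (n_examples, n_features)."""
    x = x - x.mean(axis=0)                       # center each feature over examples
    y = y - y.mean(axis=0)
    hsic = np.linalg.norm(y.T @ x, 'fro') ** 2
    return hsic / (np.linalg.norm(x.T @ x, 'fro') * np.linalg.norm(y.T @ y, 'fro'))

rng = np.random.default_rng(0)
a = rng.normal(size=(256, 64))                   # activations of one layer
q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # a random orthogonal transform
b = a @ q                                        # same representation, rotated
c = rng.normal(size=(256, 64))                   # an unrelated representation

print(linear_cka(a, a), linear_cka(a, b), linear_cka(a, c))  # ~1.0, ~1.0, much lower
```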

    Heterogeneous Integration of In-Memory Analog Computing Architectures with Tensor Processing Units

    Tensor processing units (TPUs), specialized hardware accelerators for machine learning tasks, have shown significant performance improvements when executing convolutional layers in convolutional neural networks (CNNs). However, they struggle to maintain the same efficiency in fully connected (FC) layers, leading to suboptimal hardware utilization. In-memory analog computing (IMAC) architectures, on the other hand, have demonstrated notable speedup in executing FC layers. This paper introduces a novel, heterogeneous, mixed-signal, and mixed-precision architecture that integrates an IMAC unit with an edge TPU to enhance mobile CNN performance. To leverage the strengths of TPUs for convolutional layers and IMAC circuits for dense layers, we propose a unified learning algorithm that incorporates mixed-precision training techniques to mitigate potential accuracy drops when deploying models on the TPU-IMAC architecture. The simulations demonstrate that the TPU-IMAC configuration achieves up to 2.59× performance improvements and 88% memory reductions compared to conventional TPU architectures for various CNN models while maintaining comparable accuracy. The TPU-IMAC architecture shows potential for various applications where energy efficiency and high performance are essential, such as edge computing and real-time processing in mobile devices. The unified training algorithm and the integration of IMAC and TPU architectures contribute to the potential impact of this research on the broader machine learning landscape.
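
    The partitioning idea, convolutions on the digital accelerator and low-precision dense layers on the analog array, can be mimicked in software by quantizing only the fully connected weights. The sketch below uses an assumed 4-bit uniform quantizer as a stand-in for analog conductance levels; it is not the paper's training recipe or hardware model.

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform symmetric weight quantization, standing in for analog conductance levels."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
features = rng.normal(size=(1, 256))        # conv-backbone output (kept on the TPU)
w_fc = rng.normal(size=(256, 10)) * 0.1     # dense classifier weights

full = features @ w_fc                      # full-precision reference result
imac = features @ quantize(w_fc, bits=4)    # dense layer mapped to the analog array

print(np.abs(full - imac).max())            # small deviation introduced by 4-bit weights
```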