OR Residual Connection Achieving Comparable Accuracy to ADD Residual Connection in Deep Residual Spiking Neural Networks
Spiking Neural Networks (SNNs) have garnered substantial attention in
brain-like computing for their biological fidelity and the capacity to execute
energy-efficient spike-driven operations. As demand for higher performance
in SNNs grows, training deeper networks becomes necessary, and residual
learning is a pivotal method for training deep neural networks. In our
investigation, we identified that SEW-ResNet,
a prominent representative of deep residual spiking neural networks,
incorporates non-event-driven operations. To rectify this, we introduce the OR
Residual connection (ORRC) to the architecture. Additionally, we propose the
Synergistic Attention (SynA) module, an amalgamation of the Inhibitory
Attention (IA) module and the Multi-dimensional Attention (MA) module, to
offset energy loss stemming from high quantization. When integrating SynA into
the network, we observed the phenomenon of "natural pruning", where after
training, some or all of the shortcuts in the network naturally drop out
without affecting the model's classification accuracy. This significantly
reduces computational overhead and makes it more suitable for deployment on
edge devices. Experimental results on various public datasets confirmed that
the SynA-enhanced OR-Spiking ResNet achieved single-sample classification with
as little as 0.8 spikes per neuron. Moreover, when compared to other spike
residual models, it exhibited higher accuracy and lower power consumption.
Code is available at https://github.com/Ym-Shan/ORRC-SynA-natural-pruning.
Comment: 16 pages, 8 figures and 11 tables
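The ADD vs. OR residual distinction can be made concrete with a minimal NumPy sketch on binary spike trains (the function names and toy inputs are illustrative assumptions, not the released code):

```python
import numpy as np

def add_residual(spikes, shortcut):
    # SEW-style ADD residual: the sum can exceed 1, so the output is no
    # longer a binary spike train and downstream operations stop being
    # purely event-driven.
    return spikes + shortcut

def or_residual(spikes, shortcut):
    # OR residual connection (ORRC): element-wise logical OR keeps the
    # output binary, so every value remains a valid spike event.
    return np.logical_or(spikes, shortcut).astype(spikes.dtype)

s = np.array([0, 1, 1, 0])  # block output spikes
r = np.array([1, 1, 0, 0])  # shortcut spikes
print(add_residual(s, r))  # [1 2 1 0] -> the 2 is not a valid spike
print(or_residual(s, r))   # [1 1 1 0] -> stays binary
```

The OR output can always be interpreted as "a spike arrived on at least one path", which is what keeps the connection event-driven.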
Multi-Level Firing with Spiking DS-ResNet: Enabling Better and Deeper Directly-Trained Spiking Neural Networks
Spiking neural networks (SNNs) are bio-inspired neural networks with
asynchronous, discrete, and sparse characteristics, and have increasingly
demonstrated their advantage in low energy consumption. Recent research is
devoted to utilizing spatio-temporal information to directly train SNNs by
backpropagation. However, the binary and non-differentiable properties of spike
activities force directly trained SNNs to suffer from serious gradient
vanishing and network degradation, which greatly limits the performance of
directly trained SNNs and prevents them from going deeper. In this paper, we
propose a multi-level firing (MLF) method based on the existing spatio-temporal
back propagation (STBP) method, and spiking dormant-suppressed residual network
(spiking DS-ResNet). MLF enables more efficient gradient propagation and the
incremental expression ability of the neurons. Spiking DS-ResNet can
efficiently perform identity mapping of discrete spikes, as well as provide a
more suitable connection for gradient propagation in deep SNNs. With the
proposed method, our model achieves superior performance on a non-neuromorphic
dataset and two neuromorphic datasets with much fewer trainable parameters and
demonstrates the great ability to combat the gradient vanishing and degradation
problem in deep SNNs.
Comment: Accepted by the Thirty-First International Joint Conference on
Artificial Intelligence (IJCAI-22)
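The multi-level firing idea, in a very simplified form, replaces a single firing threshold with several levels so that the neuron's expressive power grows while each level still emits binary spikes. A minimal NumPy sketch (the threshold values and function name are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def mlf_fire(v, thresholds=(0.5, 1.0, 1.5)):
    # Multi-level firing (sketch): compare the membrane potential v
    # against several threshold levels; each level emits its own binary
    # spike, so gradients can flow through whichever level is near its
    # threshold instead of through a single hard cutoff.
    return np.stack([(v >= th).astype(np.float32) for th in thresholds])

v = np.array([0.2, 0.7, 1.2, 2.0])
print(mlf_fire(v))  # one row of binary spikes per threshold level
```

Stacking the levels keeps every individual output a spike while giving the surrogate gradient more chances to be non-zero, which is the intuition behind combating gradient vanishing in deeper SNNs.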
Going Deeper in Spiking Neural Networks: VGG and Residual Architectures
Over the past few years, Spiking Neural Networks (SNNs) have become popular
as a possible pathway to enable low-power event-driven neuromorphic hardware.
However, their application in machine learning has largely been limited to
very shallow neural network architectures for simple problems. In this paper,
we propose a novel algorithmic technique for generating an SNN with a deep
architecture, and demonstrate its effectiveness on complex visual recognition
problems such as CIFAR-10 and ImageNet. Our technique applies to both VGG and
Residual network architectures, with significantly better accuracy than the
state-of-the-art. Finally, we present analysis of the sparse event-driven
computations to demonstrate reduced hardware overhead when operating in the
spiking domain.
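A common threshold-balancing step in ANN-to-SNN conversion pipelines of this kind can be sketched as follows (the function name and calibration data are hypothetical, chosen only to illustrate the idea):

```python
import numpy as np

def balance_thresholds(layer_activations):
    # Threshold balancing (sketch): set each spiking layer's firing
    # threshold to the maximum activation its ANN counterpart produced on
    # calibration data, so spike rates stay proportional to the ANN's
    # activations without saturating at the firing ceiling.
    return [float(np.max(a)) for a in layer_activations]

# Hypothetical recorded activations for a two-layer network.
acts = [np.array([0.1, 0.4]), np.array([2.0, 0.5])]
print(balance_thresholds(acts))  # [0.4, 2.0]
```

With thresholds set this way, no unit is asked to fire more than once per timestep at its peak input, which is what makes the rate-coded conversion behave like the original ANN.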
Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection
Over the past decade, deep neural networks (DNNs) have demonstrated
remarkable performance in a variety of applications. As we try to solve more
advanced problems, increasing demands for computing and power resources have
become inevitable. Spiking neural networks (SNNs) have attracted widespread
interest as the third generation of neural networks due to their event-driven
and low-power nature. SNNs, however, are difficult to train, mainly owing to
their complex dynamics of neurons and non-differentiable spike operations.
Furthermore, their applications have been limited to relatively simple tasks
such as image classification. In this study, we investigate the performance
degradation of SNNs in a more challenging regression problem (i.e., object
detection). Through our in-depth analysis, we introduce two novel methods:
channel-wise normalization and signed neuron with imbalanced threshold, both of
which provide fast and accurate information transmission for deep SNNs.
Consequently, we present the first spike-based object detection model, called
Spiking-YOLO. Our experiments show that Spiking-YOLO achieves remarkable
results that are comparable (up to 98%) to those of Tiny YOLO on non-trivial
datasets, PASCAL VOC and MS COCO. Furthermore, Spiking-YOLO on a neuromorphic
chip consumes approximately 280 times less energy than Tiny YOLO and converges
2.3 to 4 times faster than previous SNN conversion methods.
Comment: Accepted to AAAI 2020
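Channel-wise normalization can be sketched in NumPy as follows; the function name, tensor shapes, and the epsilon floor are illustrative assumptions, and the actual method also rescales the following layer to compensate:

```python
import numpy as np

def channel_wise_norm(weights, activations):
    # Channel-wise normalization (sketch): scale each output channel's
    # weights by that channel's maximum observed activation instead of a
    # single layer-wide maximum, so weakly activated channels still
    # transmit information quickly in the converted SNN.
    # weights: (out_ch, in_ch, kh, kw); activations: (N, out_ch, H, W)
    max_per_ch = np.maximum(activations.max(axis=(0, 2, 3)), 1e-8)
    return weights / max_per_ch[:, None, None, None]

w = np.ones((2, 1, 1, 1))       # toy conv weights, 2 output channels
a = np.zeros((1, 2, 2, 2))      # toy recorded activations
a[:, 0], a[:, 1] = 4.0, 0.5     # channel 0 strong, channel 1 weak
print(channel_wise_norm(w, a).ravel())  # channel 1 is boosted, not starved
```

Normalizing per channel rather than per layer is what avoids the under-activation that a single large outlier channel would otherwise impose on every other channel.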
MSS-DepthNet: Depth Prediction with Multi-Step Spiking Neural Network
Event cameras are considered to have great potential for computer vision and
robotics applications because of their high temporal resolution and low power
consumption characteristics. However, the event stream output from event
cameras has asynchronous, sparse characteristics that existing computer vision
algorithms cannot handle. Spiking neural networks are a novel event-based
computational paradigm considered well suited to processing event-camera
tasks. However, direct training of deep SNNs suffers from degradation
problems. This work addresses these problems by proposing a spiking neural
network architecture that combines a novel residual block with
multi-dimensional attention modules, focusing on depth prediction.
prediction. In addition, a novel event stream representation method is
explicitly proposed for SNNs. This model outperforms previous ANNs of the
same size on the MVSEC dataset and shows great computational efficiency.