Empirical study on the efficiency of Spiking Neural Networks with axonal delays, and algorithm-hardware benchmarking
The role of axonal synaptic delays in the efficacy and performance of
artificial neural networks has been largely unexplored. In step-based
analog-valued neural network models (ANNs), the concept is almost absent. In
their spiking neuroscience-inspired counterparts, there is hardly a systematic
account of their effects on model performance in terms of accuracy and number
of synaptic operations. This paper proposes a methodology for accounting for
axonal delays in the training loop of deep Spiking Neural Networks (SNNs),
intending to efficiently solve machine learning tasks on data with rich
temporal dependencies. We then conduct an empirical study of the effects of
axonal delays on model performance during inference for the Adding task, a
benchmark for sequential regression, and for the Spiking Heidelberg Digits
dataset (SHD), commonly used for evaluating event-driven models. Quantitative
results on the SHD show that SNNs incorporating axonal delays instead of
explicit recurrent synapses achieve state-of-the-art performance, with over
90% test accuracy, while needing fewer than half the trainable synapses.
Additionally, we estimate the
required memory in terms of total parameters and energy consumption of
accommodating such delay-trained models on a modern neuromorphic accelerator.
These estimations are based on the number of synaptic operations and the
reference GF-22nm FDX CMOS technology. As a result, we demonstrate that a
reduced parameterization, which incorporates axonal delays, leads to
approximately 90% energy and memory reduction in digital hardware
implementations at similar performance on the aforementioned task.
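For readers who want the gist of the training-loop mechanism, the sketch below shows one minimal way to realize per-axon integer delays in a discrete-time SNN layer via a circular buffer of past spike frames. All names (AxonalDelayLayer, max_delay) and the random, frozen delay assignment are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class AxonalDelayLayer(nn.Module):
    """Fully connected layer whose presynaptic axons deliver spikes
    after fixed integer delays (in time steps)."""

    def __init__(self, n_in, n_out, max_delay=64):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(n_out, n_in))
        # One delay per axon; random and frozen here, though delays can
        # also be optimized during training as the paper proposes.
        self.register_buffer("delays", torch.randint(0, max_delay, (n_in,)))
        self.max_delay = max_delay

    def forward(self, spikes):  # spikes: (T, batch, n_in) binary tensor
        T, B, N = spikes.shape
        buf = spikes.new_zeros(self.max_delay, B, N)  # circular history
        cols = torch.arange(N, device=spikes.device)
        out = []
        for t in range(T):
            buf[t % self.max_delay] = spikes[t]
            # Axon i is read back delays[i] steps in the past; slots not
            # yet written are zero, i.e. no spikes arrive from before t=0.
            rows = (t - self.delays) % self.max_delay  # (n_in,)
            delayed = buf[rows, :, cols].t()           # (batch, n_in)
            out.append(delayed @ self.weight.t())      # input currents
        return torch.stack(out)  # (T, batch, n_out); feed into LIF neurons
```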
Open the box of digital neuromorphic processor: Towards effective algorithm-hardware co-design
Sparse and event-driven spiking neural network (SNN) algorithms are the ideal
candidate solution for energy-efficient edge computing. Yet, with the growing
complexity of SNN algorithms, it is difficult to properly benchmark and optimize
their computational cost without hardware in the loop. Although digital
neuromorphic processors have been widely adopted to benchmark SNN algorithms,
their black-box nature is problematic for algorithm-hardware co-optimization.
In this work, we open the black box of the digital neuromorphic processor for
algorithm designers by presenting the neuron processing instruction set and
detailed energy consumption of the SENeCA neuromorphic architecture. For
convenient benchmarking and optimization, we provide the energy cost of the
essential neuromorphic components in SENeCA, including neuron models and
learning rules. Moreover, we exploit SENeCA's hierarchical memory and
demonstrate its advantage over existing neuromorphic processors. We show the energy
efficiency of SNN algorithms for video processing and online learning, and
demonstrate the potential of our work for optimizing algorithm designs.
Overall, we present a practical approach to enable algorithm designers to
accurately benchmark SNN algorithms and pave the way towards effective
algorithm-hardware co-design.
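The energy accounting behind such benchmarking amounts to counting events and multiplying by per-event costs. The sketch below shows that accounting style with made-up placeholder costs; the actual per-operation figures for SENeCA are those reported in the paper, not these numbers.

```python
# Illustrative per-event costs in picojoules. These are placeholders to
# show the accounting method only; they are NOT SENeCA's published
# figures, which the paper itself provides.
ENERGY_PJ = {
    "synop": 1.0,          # one synaptic operation (weight fetch + add)
    "neuron_update": 2.0,  # one neuron state update
    "spike_out": 0.5,      # routing one output spike event
}

def estimate_energy_pj(n_synops, n_updates, n_spikes, costs=ENERGY_PJ):
    """Energy = sum over event types of (event count x per-event cost)."""
    return (n_synops * costs["synop"]
            + n_updates * costs["neuron_update"]
            + n_spikes * costs["spike_out"])

# Example: 1e6 synops, 1e5 neuron updates, 1e4 output spikes.
print(estimate_energy_pj(1_000_000, 100_000, 10_000), "pJ")
```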
Hardware-aware training of models with synaptic delays for digital event-driven neuromorphic processors
Configurable synaptic delays are a basic feature in many neuromorphic neural network hardware accelerators. However, they have been rarely used in model implementations, despite their promising impact on performance and efficiency in tasks that exhibit complex (temporal) dynamics, as it has been unclear how to optimize them. In this work, we propose a framework to train and deploy, in digital neuromorphic hardware, high-performing spiking neural network (SNN) models in which, apart from the synaptic weights, the per-synapse delays are also co-optimized. Leveraging spike-based back-propagation-through-time, the training accounts for platform constraints, such as synaptic weight precision and the total number of parameters per core, as a function of the network size. In addition, a delay pruning technique is used to reduce memory footprint at a low cost in performance. We evaluate trained models on two digital neuromorphic hardware platforms: Intel Loihi and Imec Seneca. Loihi offers synaptic delay support using the so-called Ring-Buffer hardware structure; Seneca does not provide native hardware support for synaptic delays. A second contribution of this paper is therefore a novel area- and memory-efficient hardware structure for accelerating synaptic delays, which we have integrated in Seneca. The evaluated benchmark involves several models for solving the SHD (Spiking Heidelberg Digits) classification task, where minimal accuracy degradation during the transition from software to hardware is demonstrated. To our knowledge, this is the first work showcasing how to train and deploy hardware-aware models parameterized with synaptic delays on multicore neuromorphic hardware accelerators.
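As background on the Ring-Buffer structure the abstract refers to, the sketch below illustrates the general idea in Python: a weighted spike with delay d is accumulated into a slot d steps ahead and consumed when the clock reaches it. This is an illustrative reading of the mechanism, not vendor code for Loihi or Seneca.

```python
import numpy as np

class DelayRingBuffer:
    """Schedule of future synaptic input for one core: a weighted spike
    with delay d is added to slot (t + d) mod D and consumed d steps
    later. Memory scales with max_delay, not with the number of spikes."""

    def __init__(self, n_neurons, max_delay):
        self.slots = np.zeros((max_delay, n_neurons))  # future currents
        self.D = max_delay
        self.t = 0

    def deposit(self, neuron, weight, delay):
        assert 1 <= delay < self.D, "delay must fit inside the buffer"
        self.slots[(self.t + delay) % self.D, neuron] += weight

    def tick(self):
        """Return the input arriving at this time step and free the slot."""
        arriving = self.slots[self.t % self.D].copy()
        self.slots[self.t % self.D] = 0.0
        self.t += 1
        return arriving
```

A structure like this costs memory proportional to max_delay times the number of neurons per core, which is one reason delay pruning and alternative area- and memory-efficient delay structures matter.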
Optimizing event-based neural networks on digital neuromorphic architecture: a comprehensive design space exploration
Neuromorphic processors promise low-latency and energy-efficient processing by adopting novel brain-inspired design methodologies. Yet, current neuromorphic solutions still struggle to rival conventional deep learning accelerators' performance and area efficiency in practical applications. Event-driven data-flow processing and near/in-memory computing are the two dominant design trends of neuromorphic processors. However, there remain challenges in reducing the overhead of event-driven processing and increasing the mapping efficiency of near/in-memory computing, both of which directly impact performance and area efficiency. In this work, we discuss these challenges and present our exploration of optimizing event-based neural network inference on SENECA, a scalable and flexible neuromorphic architecture. To address the overhead of event-driven processing, we perform a comprehensive design space exploration and propose spike-grouping to reduce the total energy and latency. Furthermore, we introduce the event-driven depth-first convolution to improve area efficiency and reduce latency in convolutional neural networks (CNNs) on the neuromorphic processor. We benchmarked our optimized solution on keyword spotting, sensor fusion, digit recognition and high-resolution object detection tasks. Compared with other state-of-the-art large-scale neuromorphic processors, our proposed optimizations result in a 6× to 300× improvement in energy efficiency, a 3× to 15× improvement in latency, and a 3× to 100× improvement in area efficiency. Our optimizations for event-based neural networks can potentially be generalized to a wide range of event-based neuromorphic processors.
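The spike-grouping idea can be summarized as amortizing expensive memory accesses over a batch of events. The toy sketch below is an assumption-level illustration of that scheduling idea, not the SENECA implementation: all events of a time step are accumulated in a cheap local buffer so the comparatively expensive neuron-state memory is touched once rather than once per event.

```python
import numpy as np

def process_timestep_grouped(spike_ids, weights, potentials):
    """Handle all input events of one time step as a group: weight rows
    are accumulated into a small local buffer, and the neuron-state
    memory is read and written only once instead of once per event."""
    acc = np.zeros_like(potentials)
    for pre in spike_ids:          # per-event work stays in local memory
        acc += weights[pre]
    potentials += acc              # single neuron-state update
    return potentials

# Toy usage: 3 presynaptic inputs, 2 neurons, 3 events in one step.
w = np.array([[0.2, 0.1], [0.0, 0.3], [0.5, 0.4]])
v = process_timestep_grouped([0, 2, 0], w, np.zeros(2))
```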
NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking
The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing: NeuroBench. The goals of NeuroBench are to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions, and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.
NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems
Neuromorphic computing shows promise for advancing computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we present initial performance baselines across various model architectures on the algorithm track and outline the system track benchmark tasks and guidelines. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
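As a rough illustration of what a hardware-independent algorithm track measures, the sketch below reports task accuracy alongside a simple complexity proxy (parameter footprint) for a PyTorch classifier. It is a generic harness written for this summary, not the neurobench package API; see neurobench.ai for the real tooling.

```python
import torch

def algorithm_track_report(model, dataloader):
    """Report task accuracy plus a parameter-count footprint; both are
    hardware-independent quantities."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in dataloader:
            pred = model(x).argmax(dim=-1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return {
        "accuracy": correct / total,
        "parameters": sum(p.numel() for p in model.parameters()),
    }
```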