CORtEX 2022 Invited Speaker 1: The GeNN ecosystem for GPU accelerated spiking neural network simulations
The GPU enhanced Neuronal Networks (GeNN, https://github.com/genn-team/genn) framework is a collection of software aimed at simplifying the simulation of spiking neural networks on GPU accelerators. At its core, GeNN is a meta-compiler that translates model descriptions of spiking neural networks (SNNs) into efficient code for a computational backend; currently, GeNN supports CUDA, OpenCL and single-threaded CPU backends. GeNN was designed for maximal user flexibility and so can be employed in computational neuroscience and machine learning contexts alike. In this talk, I will give an overview of the GeNN ecosystem, discuss some innovations that make important contributions to GeNN's performance, and present benchmark results from computational neuroscience and machine learning applications.
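To make the meta-compiler workflow concrete, the sketch below builds and runs a tiny model through the PyGeNN interface. This is a minimal sketch assuming a GeNN 4-style API; the built-in "LIF" model and its parameter names follow the GeNN documentation as I recall it and may differ between versions.

    from pygenn.genn_model import GeNNModel  # GeNN 4-style import (assumption)

    # Describe the model; GeNN generates and compiles backend-specific code from this
    model = GeNNModel("float", "demo")
    model.dT = 1.0  # simulation timestep in ms

    # Built-in LIF neuron model; parameter/variable names as in the GeNN docs (assumption)
    lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -70.0,
                  "Vthresh": -50.0, "Ioffset": 0.2, "TauRefrac": 5.0}
    lif_init = {"V": -65.0, "RefracTime": 0.0}
    model.add_neuron_population("Pop", 100, "LIF", lif_params, lif_init)

    model.build()  # generate CUDA/OpenCL/CPU code and compile it
    model.load()
    for _ in range(1000):
        model.step_time()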
Easy and efficient spike-based Machine Learning with mlGeNN
Intuitive and easy-to-use application programming interfaces such as Keras have played a large part in the rapid acceleration of machine learning with artificial neural networks (ANNs). Building on our recent work translating ANNs to spiking neural networks (SNNs) and directly training classifiers with e-prop, here we present the mlGeNN interface as an easy way to define, train and test SNNs on our efficient GPU-based GeNN framework. We illustrate the use of mlGeNN by investigating the performance of a number of one- and two-layer recurrent spiking neural networks trained to recognise hand gestures from the DVS gesture dataset with the e-prop learning rule. We find that not only is mlGeNN vastly more convenient to use than the lower-level PyGeNN interface, but the freedom to effortlessly and rapidly prototype different network architectures also gave us an unprecedented overview of how e-prop compares to other recently published results on the DVS gesture dataset across architectural details.
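Given the Keras-like design described above, a network definition might look like the minimal sketch below. The module, class and argument names are assumptions inferred from that description, not verbatim mlGeNN API; consult the mlGeNN documentation for the exact interface.

    from ml_genn import Connection, Network, Population   # assumed module layout
    from ml_genn.connectivity import Dense                # assumed
    from ml_genn.neurons import LeakyIntegrateFire        # assumed

    # One recurrent hidden layer feeding an output layer for the 11-class
    # DVS gesture task (input population and e-prop compilation/training
    # omitted for brevity)
    network = Network()
    with network:
        hidden = Population(LeakyIntegrateFire(), 256)
        output = Population(LeakyIntegrateFire(), 11)
        Connection(hidden, hidden, Dense())  # recurrent connectivity
        Connection(hidden, output, Dense())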
The NeuroBench framework for benchmarking neuromorphic computing algorithms and systems
In recent years, the rapid growth of artificial intelligence (AI) and machine learning (ML) has resulted in increasingly complex and large models in pursuit of higher accuracy and a greater range of use cases. The substantial growth rate of model computation exceeds the efficiency gains realized through Moore and Dennard technology scaling, indicating a looming limit to continued advancement with existing techniques. This issue is compounded by the open challenges of adapting such methods for resource-constrained edge devices (tinyML) in order to enable pervasive and decentralized intelligence through the Internet of Things (IoT). As such, the urgency of exploring new resource-efficient and scalable computing architectures has intensified.
Dataset for paper "Larger GPU-accelerated brain simulations with procedural connectivity"
Dataset for paper published in Nature Computational Science, February 2021.

The dataset contains raw spiking data from full-scale multi-area model simulations run using GeNN 4.3.3. Each tar.gz archive contains the configuration files for one simulation and, in the recording directory, binary NumPy files containing the spike trains from each population.

- Archives with filenames starting with 82d3c0816b0ad1c07ea27e61eb981f7a contain spike data from three 10.5 second simulations of the model's "ground state" (chi=1.0).
- Archives with filenames starting with b03fdaa1fd47a0e4a10483bc3901f1e5 contain spike data from three 100.5 second simulations of the model's "resting state" (chi=1.9).

Abstract: "Simulations are an important tool for investigating brain function but large models are needed to faithfully reproduce the statistics and dynamics of brain activity. Simulating large spiking neural network models has, until now, needed so much memory for storing synaptic connections that it required high performance computer systems. Here, we present an alternative simulation method we call 'procedural connectivity' where connectivity and synaptic weights are generated 'on the fly' instead of stored and retrieved from memory. This method is particularly well-suited for use on Graphical Processing Units (GPUs), which are a common fixture in many workstations. Extending our GeNN software with procedural connectivity and a second technical innovation for GPU code generation, we can simulate a recent model of the Macaque visual cortex with 4.13×10^6 neurons and 24.2×10^9 synapses on a single GPU - a significant step forward in making large-scale brain modelling accessible to more researchers."

Funding: Brains on Board grant number EP/P006094/1
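To illustrate how procedural connectivity is requested in practice, here is a minimal sketch using a GeNN 4-style PyGeNN call; the matrix type, model and initialiser names are assumptions based on the GeNN documentation rather than code from the paper.

    from pygenn.genn_model import GeNNModel, init_connectivity  # GeNN 4-style imports (assumption)

    model = GeNNModel("float", "procedural_demo")
    model.dT = 0.1

    lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -70.0,
                  "Vthresh": -50.0, "Ioffset": 0.0, "TauRefrac": 5.0}
    lif_init = {"V": -65.0, "RefracTime": 0.0}
    pre = model.add_neuron_population("Pre", 10000, "LIF", lif_params, lif_init)
    post = model.add_neuron_population("Post", 10000, "LIF", lif_params, lif_init)

    # "PROCEDURAL_PROCEDURALG" asks GeNN to regenerate connectivity and weights
    # on the fly every timestep instead of storing them in GPU memory
    model.add_synapse_population(
        "Syn", "PROCEDURAL_PROCEDURALG", 0, pre, post,
        "StaticPulse", {}, {"g": 0.01}, {}, {},
        "DeltaCurr", {}, {},
        init_connectivity("FixedProbability", {"prob": 0.1}))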
EvDownsampling dataset
This dataset is used in the publication "EvDownsampling: A Robust Method For Downsampling Event Camera Data", ECCV Workshop on Neuromorphic Vision: Advantages and Applications of Event Cameras [29/09/2024].

The dataset contains event streams of highly dynamic real-world scenes collected using two DVS cameras of different spatial resolutions - a DVXplorer (640×480 px) and a Davis346 (346×260 px). Both cameras simultaneously recorded each scene with negligible parallax error. The dataset is provided to test event-based spatio-temporal downsampling techniques by comparing downsampled higher-resolution recordings with matching lower-resolution recordings, as explained in the publication above.

There are four classes {class_folder} of scenes:
- Traffic: natural lighting. Bus and car moving across the camera's visual field with several pedestrians. 6 seconds long.
- HandGestures: fluorescent lighting. Person either waving their hand, waving their arms or doing jumping jacks. 12-15 seconds long.
- Corridor: fluorescent lighting. Moving through corridors. One corridor scene (Pevensey) has a carpet which provides texture, while the other scene (Arundel) does not. 18-24 seconds long.
- Cars: natural lighting. Car moving across the camera's visual field with few pedestrians. 3-5 seconds long.

Each dataset/{class_folder} contains two folders consisting of:
- Videos of the scene recordings captured by both DVS cameras placed side-by-side (.mp4)
- Raw event data in the form of (x, y, timestamp, polarity) in AEDAT 4 format (.aedat4)

The script dualCam_dvRead.py can be used to convert the .aedat4 files into NumPy format and to generate frame reconstructions. The command-line syntax is:

    python3 dualCam_dvRead.py --data_folder {class_folder} --input {scene_recording} --publisher_rate {publisher_rate}

where class_folder is the class of the scene recording (e.g. corridor), scene_recording is the specific recording in that class (e.g. Pevensey) and publisher_rate determines the frame rate of the published images in fps (e.g. 1000). An example invocation is shown below.

More information is available at: https://github.com/anindyaghosh/EvDownsampling. The conference website is: https://sites.google.com/view/nevi2024/home-page.
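For example, converting the Pevensey corridor recording with images published at 1000 fps (all values taken from the examples above):

    python3 dualCam_dvRead.py --data_folder corridor --input Pevensey --publisher_rate 1000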
FeNN: A RISC-V vector processor for Spiking Neural Network acceleration
No description supplied.
Dataset for paper "mlGeNN: Accelerating SNN inference using GPU-Enabled Neural Networks"
Dataset for paper accepted in IOP Neuromorphic Computing and Engineering, March 2022.

The dataset contains trained weights from TensorFlow 2.4.0 for the following models:
- vgg16_imagenet_tf_weights.h5 - VGG-16 model trained on the ImageNet ILSVRC dataset
- vgg16_tf_weights.h5 - VGG-16 model trained on the CIFAR-10 dataset
- resnet20_cifar10_tf_weights.h5 - ResNet-20 model trained on the CIFAR-10 dataset
- resnet34_imagenet_tf_weights.h5 - ResNet-34 model trained on the ImageNet ILSVRC dataset

Abstract: "In this paper we present mlGeNN - a Python library for the conversion of artificial neural networks (ANNs) specified in Keras to spiking neural networks (SNNs). SNNs are simulated using GeNN with extensions to efficiently support convolutional connectivity and batching. We evaluate converted SNNs on CIFAR-10 and ImageNet classification tasks and compare the performance to both the original ANNs and other SNN simulators. We find that performing inference using a VGG-16 model, trained on the CIFAR-10 dataset, is 2.5× faster than BindsNET and, when using a ResNet-20 model trained on CIFAR-10 with FewSpike ANN-to-SNN conversion, mlGeNN is only a little over 2× slower than TensorFlow."

Funding:
- Brains on Board grant number EP/P006094/1
- ActiveAI grant number EP/S030964/1
- Unlocking spiking neural networks for machine learning research grant number EP/V052241/1
- European Union's Horizon 2020 research and innovation programme under Grant Agreement 945539
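A quick way to inspect any of these weight files before loading them into a model is to walk the HDF5 group tree; the file name below is taken from the list above, and the layout assumed is the standard Keras HDF5 weight format:

    import h5py

    # Keras-format HDF5 weight files contain one group per layer,
    # with a dataset per weight tensor
    with h5py.File("vgg16_tf_weights.h5", "r") as f:
        f.visit(print)  # print the path of every group/dataset in the file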
Introduction to the proceedings of the CNS*2023 meeting
As the president of the Organization for Computational Neurosciences and someone who has attended the annual CNS conference for over two decades, I am particularly pleased to introduce the publication of abstracts from the CNS*2023 conference in Leipzig in the Journal of Computational Neuroscience. The Journal of Computational Neuroscience has a long-standing association with OCNS, having been co-founded by Jim Bower who also played a seminal role in establishing our annual CNS meeting. The status of the Journal of Computational Neuroscience as the official OCNS publication is reflected by reduced personal journal subscription rates for OCNS members. Like the CNS meeting, the Journal of Computational Neuroscience encourages approaches that combine theoretical, computational, and experimental work in the neurosciences, and it provides a natural home for the publication of our meeting abstracts.
Insect-inspired spatio-temporal downsampling of event-based input
As vision sensors for autonomous systems, event-based cameras provide numerous benefits over conventional cameras, including higher dynamic range and temporal resolution as well as lower bandwidth and power requirements. However, while downsampling is regularly used in standard computer vision, there are no reliable techniques for doing this with event data, resulting in a bottleneck for event-based computer vision systems. Here we extend our previous work, explain the challenges that need to be overcome by any effective event-based downsampling algorithm and present a novel biologically-inspired process that can adeptly downsample event streams by factors of up to 16 times compared to the original resolution. We show that our downsampled event streams achieve high fidelity with a hypothetical low-resolution event camera and improve classification performance on highly downsampled versions of the DVS gesture dataset. Furthermore, we demonstrate that, compared to a naïve event-based downsampling, our approach massively reduces the number of spikes that downstream neuromorphic processors have to handle.
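For contrast with the biologically-inspired process, the naïve event-based downsampling referred to above simply integer-divides event coordinates by the scale factor, so every input event survives and downstream processors receive no relief. A minimal NumPy sketch, assuming events are stored as (x, y, timestamp, polarity) rows as in the EvDownsampling dataset:

    import numpy as np

    def naive_downsample(events, factor=16):
        # events: (N, 4) integer array of (x, y, timestamp, polarity) rows (assumed layout)
        out = events.copy()
        out[:, 0] //= factor  # bin x coordinates
        out[:, 1] //= factor  # bin y coordinates
        return out  # same number of events as the input: no spike reduction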
Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks
Event-based machine learning promises more energy-efficient AI on future neuromorphic hardware. Here, we investigate how the recently discovered Eventprop algorithm for gradient descent on exact gradients in spiking neural networks can be scaled up to challenging keyword recognition benchmarks. We implemented Eventprop in the GPU enhanced Neuronal Networks (GeNN) framework and used it for training recurrent spiking neural networks on the Spiking Heidelberg Digits and Spiking Speech Commands datasets. We found that learning depended strongly on the loss function and extended Eventprop to a wider class of loss functions to enable effective training. We then tested a large number of data augmentations and regularisations, and explored different network structures as well as heterogeneous and trainable timescales. We found that, when combined with two specific augmentations, the right regularisation and a delay line input, Eventprop networks with one recurrent layer achieved state-of-the-art performance on Spiking Heidelberg Digits and good accuracy on Spiking Speech Commands. In comparison to a leading surrogate-gradient-based SNN training method, our GeNN Eventprop implementation is 3× faster and uses 4× less memory. This work is a significant step towards a low-power neuromorphic alternative to current machine learning paradigms.
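As an illustration of the delay line input mentioned above, one common construction duplicates each input channel several times, with the k-th copy firing a fixed delay k·Δ after the original. A minimal NumPy sketch; the number of copies and the delay step are illustrative assumptions, not the values used in the paper:

    import numpy as np

    def delay_line(spike_times, spike_channels, num_copies=10, delay=30.0):
        # spike_times: (N,) float array in ms; spike_channels: (N,) int array.
        # Copy k of channel c becomes channel c * num_copies + k and fires
        # k * delay ms after the original event.
        times = np.concatenate([spike_times + k * delay for k in range(num_copies)])
        chans = np.concatenate([spike_channels * num_copies + k for k in range(num_copies)])
        order = np.argsort(times)
        return times[order], chans[order]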