Easy and efficient spike-based Machine Learning with mlGeNN
Intuitive and easy-to-use application programming interfaces such as Keras have played a large part in the rapid acceleration of machine learning with artificial neural networks. Building on our recent work on translating ANNs to SNNs and directly training classifiers with e-prop, we here present the mlGeNN interface as an easy way to define, train and test spiking neural networks on our efficient GPU-based GeNN framework. We illustrate the use of mlGeNN by investigating the performance of a number of one- and two-layer recurrent spiking neural networks trained to recognise hand gestures from the DVS gesture dataset with the e-prop learning rule. We find that not only is mlGeNN vastly more convenient to use than the lower-level PyGeNN interface, but the new freedom to effortlessly and rapidly prototype different network architectures also gave us an unprecedented overview of how e-prop compares to other recently published results on the DVS gesture dataset across architectural details.
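For orientation, the sketch below shows, in plain numpy, the kind of eligibility-trace update that the e-prop rule mentioned above is built around. It is a heavily simplified, single-layer illustration: the constants, shapes and variable names are invented for the example, and none of this is mlGeNN or PyGeNN code.

```python
import numpy as np

# Heavily simplified e-prop-style update for one recurrent LIF layer.
# Shapes and constants are illustrative, not taken from mlGeNN.
rng = np.random.default_rng(0)
n_in, n_rec, n_out, T = 32, 64, 4, 100
w_in = rng.normal(0, 0.1, (n_rec, n_in))
w_rec = rng.normal(0, 0.1, (n_rec, n_rec))
w_out = rng.normal(0, 0.1, (n_out, n_rec))
B = rng.normal(0, 0.1, (n_rec, n_out))           # fixed random feedback weights
alpha, kappa, v_th, gamma, lr = 0.9, 0.8, 1.0, 0.3, 1e-3

v = np.zeros(n_rec); z = np.zeros(n_rec)
z_bar = np.zeros(n_rec)                          # low-pass filtered recurrent spikes
e_bar = np.zeros((n_rec, n_rec))                 # filtered eligibility traces
y = np.zeros(n_out)
grad = np.zeros((n_rec, n_rec))

x = rng.random((T, n_in)) < 0.05                 # random input spike trains
y_star = rng.random((T, n_out))                  # dummy targets for the example

for t in range(T):
    v = alpha * v + w_in @ x[t] + w_rec @ z - v_th * z
    z = (v > v_th).astype(float)                 # spike when over threshold
    psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))  # pseudo-derivative
    e = np.outer(psi, z_bar)                     # eligibility trace e_ji = psi_j * z_bar_i
    e_bar = kappa * e_bar + e
    z_bar = alpha * z_bar + z
    y = kappa * y + w_out @ z                    # leaky readout
    L = B @ (y - y_star[t])                      # broadcast learning signal
    grad += L[:, None] * e_bar

w_rec -= lr * grad                               # one weight update after the trial
```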
Dataset for paper "Larger GPU-accelerated brain simulations with procedural connectivity"
Dataset for paper published in Nature Computational Science, February 2021. The dataset contains raw spiking data from full-scale multi-area model simulations run using GeNN 4.3.3. Each tar.gz archive contains the configuration files for a simulation and, in the recording directory, binary numpy files containing the spike trains from each population. Archives with filenames starting with 82d3c0816b0ad1c07ea27e61eb981f7a contain spike data from three 10.5 second simulations of the model's "ground state" (chi=1.0). Archives with filenames starting with b03fdaa1fd47a0e4a10483bc3901f1e5 contain spike data from three 100.5 second simulations of the model's "resting state" (chi=1.9).
Abstract: "Simulations are an important tool for investigating brain function but large models are needed to faithfully reproduce the statistics and dynamics of brain activity. Simulating large spiking neural network models has, until now, needed so much memory for storing synaptic connections that it required high performance computer systems. Here, we present an alternative simulation method we call `procedural connectivity' where connectivity and synaptic weights are generated `on the fly' instead of stored and retrieved from memory. This method is particularly well-suited for use on Graphics Processing Units (GPUs), which are a common fixture in many workstations. Extending our GeNN software with procedural connectivity and a second technical innovation for GPU code generation, we can simulate a recent model of the macaque visual cortex with 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses on a single GPU - a significant step forward in making large-scale brain modelling accessible to more researchers."
Funding: Brains on Board grant number EP/P006094/1
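The key idea of procedural connectivity - regenerating connections deterministically instead of storing them - can be illustrated outside of GeNN in a few lines of numpy. The sketch below is a toy re-statement of the concept, not GeNN's actual implementation: the fixed-probability rule, the per-neuron seeding and all the numbers are assumptions made for the example.

```python
import numpy as np

# Toy 'procedural connectivity': rather than storing the postsynaptic targets
# of every neuron, re-generate them on demand from a deterministic per-neuron
# seed. Numbers and the connectivity rule are illustrative only.
N_PRE, N_POST, P_CONNECT, WEIGHT = 10_000, 10_000, 0.1, 0.01

def targets_of(pre_idx: int) -> np.ndarray:
    """Deterministically regenerate the postsynaptic targets of one neuron."""
    rng = np.random.default_rng(pre_idx)          # seed derived from neuron index
    n_targets = rng.binomial(N_POST, P_CONNECT)   # fixed-probability connectivity
    return rng.choice(N_POST, size=n_targets, replace=False)

def propagate(spiking_pre: np.ndarray) -> np.ndarray:
    """Accumulate input currents without ever materialising the full matrix."""
    i_post = np.zeros(N_POST)
    for pre in spiking_pre:                       # only neurons that spiked this step
        i_post[targets_of(pre)] += WEIGHT
    return i_post

# Example: 100 presynaptic neurons spike; their connectivity is rebuilt on the fly.
currents = propagate(np.arange(100))
print(currents.sum())                             # total injected current this timestep
```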
Dataset for paper "mlGeNN: Accelerating SNN inference using GPU-Enabled Neural Networks"
Dataset for paper accepted in IOP Neuromorphic Computing and Engineering, March 2022. The dataset contains trained weights from TensorFlow 2.4.0 for the following models:
- vgg16_imagenet_tf_weights.h5 - VGG-16 model trained on the ImageNet ILSVRC dataset
- vgg16_tf_weights.h5 - VGG-16 model trained on the CIFAR-10 dataset
- resnet20_cifar10_tf_weights.h5 - ResNet-20 model trained on the CIFAR-10 dataset
- resnet34_imagenet_tf_weights.h5 - ResNet-34 model trained on the ImageNet ILSVRC dataset
Abstract: "In this paper we present mlGeNN - a Python library for the conversion of artificial neural networks (ANNs) specified in Keras to spiking neural networks (SNNs). SNNs are simulated using GeNN with extensions to efficiently support convolutional connectivity and batching. We evaluate converted SNNs on CIFAR-10 and ImageNet classification tasks and compare the performance to both the original ANNs and other SNN simulators. We find that performing inference using a VGG-16 model, trained on the CIFAR-10 dataset, is 2.5x faster than BindsNet and, when using a ResNet-20 model trained on CIFAR-10 with FewSpike ANN to SNN conversion, mlGeNN is only a little over 2x slower than TensorFlow."
Funding: Brains on Board grant number EP/P006094/1; ActiveAI grant number EP/S030964/1; Unlocking spiking neural networks for machine learning research grant number EP/V052241/1; European Union's Horizon 2020 research and innovation program under Grant Agreement 945539
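The conversion pipeline the abstract describes starts from trained Keras weights like those in the files above. As a rough illustration of how rate-based ANN-to-SNN conversion typically prepares such weights - this is generic data-based normalisation, not mlGeNN's FewSpike algorithm - a minimal numpy sketch with invented layer sizes might look as follows.

```python
import numpy as np

# Generic data-based weight normalisation, a common first step in rate-based
# ANN-to-SNN conversion. Layer sizes and the calibration batch are made up;
# this is not mlGeNN's FewSpike conversion.
rng = np.random.default_rng(1)
weights = [rng.normal(0, 0.5, (100, 64)), rng.normal(0, 0.5, (64, 10))]
biases = [np.zeros(64), np.zeros(10)]
calib_x = rng.random((256, 100))                 # calibration batch of ANN inputs

prev_scale = 1.0
a = calib_x
for i, (W, b) in enumerate(zip(weights, biases)):
    a = np.maximum(a @ W + b, 0.0)               # ReLU activations of this layer
    scale = a.max()                              # largest activation seen on the batch
    weights[i] = W * prev_scale / scale          # rescale so activations stay <= 1
    biases[i] = b / scale
    prev_scale = scale

# After normalisation, ANN activations map onto firing rates bounded by the
# maximum rate of the spiking neurons.
```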
Insect-inspired spatio-temporal downsampling of event-based input
As vision sensors for autonomous systems, event-based cameras provide numerous benefits over conventional cameras, including higher dynamic range and temporal resolution as well as lower bandwidth and power requirements. However, while downsampling is regularly used in standard computer vision, there are no reliable techniques for doing this with event data, resulting in a bottleneck for event-based computer vision systems. Here we extend our previous work, explain the challenges that need to be overcome by any effective event-based downsampling algorithm and present a novel biologically-inspired process that can adeptly downsample event streams by factors of up to 16 relative to the original resolution. We show that our downsampled event streams achieve high fidelity with a hypothetical low-resolution event camera and improve classification performance on highly downsampled versions of the DVS gesture dataset. Furthermore, we demonstrate that, compared to a naïve event-based downsampling, our approach massively reduces the number of spikes that downstream neuromorphic processors have to handle.
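For readers unfamiliar with event-based data, the sketch below shows one simple way to pool an event stream into a lower resolution using an accumulation threshold. It is purely illustrative - neither the paper's biologically-inspired process nor necessarily the naïve baseline it is compared against - and the resolution, downsampling factor and threshold are made up.

```python
import numpy as np

# Toy spatio-temporal downsampling of an event stream: events are pooled into
# coarser pixels and an output event is only emitted once enough input events
# of the same polarity have accumulated there.
FACTOR, THRESHOLD = 4, 8
WIDTH, HEIGHT = 128, 128

def downsample(events):
    """events: iterable of (t, x, y, polarity) tuples; yields downsampled events."""
    acc = np.zeros((HEIGHT // FACTOR, WIDTH // FACTOR, 2), dtype=int)
    for t, x, y, p in events:
        cx, cy = x // FACTOR, y // FACTOR
        acc[cy, cx, p] += 1
        if acc[cy, cx, p] >= THRESHOLD:          # enough evidence: emit one coarse event
            acc[cy, cx, p] = 0
            yield (t, cx, cy, p)

# Example with random synthetic events
rng = np.random.default_rng(2)
raw = [(t, rng.integers(WIDTH), rng.integers(HEIGHT), rng.integers(2))
       for t in range(10_000)]
print(sum(1 for _ in downsample(raw)), "events after downsampling")
```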
Posit and floating-point based Izhikevich neuron: A comparison of arithmetic
Reduced precision number formats have become increasingly popular in various fields of computational science, as they offer the potential to enhance energy efficiency, reduce silicon area, and improve processing speed. However, this often comes at the expense of introducing arithmetic errors that can impact the accuracy of a system. The optimal balance must be struck by judiciously choosing a number format that uses as few bits as possible while minimising accuracy loss. In this study, we examine one such format, posit arithmetic, as a replacement for floating-point when conducting spiking neuron simulations, specifically using the Izhikevich neuron model. This model is capable of simulating complex neural firing behaviours, 20 of which were originally identified by Izhikevich and are used in this study. We compare the accuracy, spike count, and spike timing of the two arithmetic systems at different bit-depths against a 64-bit floating-point gold standard. Additionally, we test a rescaled set of Izhikevich equations to mitigate arithmetic errors by taking advantage of posit arithmetic's tapered accuracy. Our findings indicate that there is no difference in performance between 32-bit posit, 32-bit floating-point, and our 64-bit reference for all but one of the tested firing types. However, at 16 bits, both arithmetic systems diverge from the 64-bit reference, albeit in different ways. For example, 16-bit posit demonstrates an 18× improvement in accumulated spike timing error over a 1000 ms simulation compared to 16-bit floating-point when simulating regular (tonic) spiking. This finding holds particular importance given the prevalence of this firing type in specific regions of the brain. Furthermore, when we rescale the neuron equations, this error is eliminated altogether. Although current posit arithmetic units are no smaller than floating-point units of the same bit-width, our results demonstrate that 64-bit floating-point can be replaced with 16-bit posit, which could enable significant area savings in future systems.
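The Izhikevich model at the centre of this comparison is only two coupled equations, so the effect of reduced precision is easy to probe directly. The sketch below runs the regular-spiking parameter set in 64-bit and 16-bit IEEE floating point using numpy; posit arithmetic would require a dedicated library and is not shown, and the timestep and input current are arbitrary choices for the example.

```python
import numpy as np

# Izhikevich 'regular spiking' neuron simulated at two floating-point precisions.
# v' = 0.04*v^2 + 5*v + 140 - u + I;  u' = a*(b*v - u);  reset v = c, u += d.
def izhikevich(dtype, t_ms=1000, dt=0.5, I=10.0):
    a, b, c, d = (dtype(x) for x in (0.02, 0.2, -65.0, 8.0))   # regular spiking params
    v, u = dtype(-65.0), dtype(-65.0 * 0.2)
    dt, I = dtype(dt), dtype(I)
    spikes = []
    for step in range(int(t_ms / float(dt))):
        v = v + dt * (dtype(0.04) * v * v + dtype(5.0) * v + dtype(140.0) - u + I)
        u = u + dt * (a * (b * v - u))
        if v >= dtype(30.0):                     # spike and reset
            spikes.append(step * float(dt))
            v, u = c, u + d
    return spikes

ref = izhikevich(np.float64)
half = izhikevich(np.float16)
print(len(ref), "spikes at float64 vs", len(half), "at float16")
```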
UoS campus and Stanmer park outdoor navigational data
This dataset contains omnidirectional 1440×1440 resolution images taken using a Kodak Pixpro SP360 camera, paired with RTK GPS information obtained using a simpleRTK2B - 4G NTRIP kit and fused yaw, pitch and roll data recorded from a BNO055 IMU. The data was collected using a 4-wheel ground robot (SuperDroid IG42-SB4-T) that was manually controlled by a human operator. The robot was driven 10 times along a route on the University of Sussex campus (shown in campus.png) and 10 times in the adjacent Stanmer Park (shown in stanmer.png). The first route is a mix of urban structures (university buildings), small patches of trees and paths populated by people, and is approximately 700m long. The second route consists mostly of open fields and a narrow path through a forest, and is approximately 600m long. The recordings took place on various days and at various times starting in May 2023, with the date and time indicated by the filename. For example, 'campus_route5_2023_11_22_102925.zip' corresponds to the 5th route recorded on the Sussex campus on 22/11/2023, starting at 10:29:25 GMT. During the recordings the weather varied from clear skies and sunny days to overcast and low-light conditions. Each recording consists of the .jpg files that make up the route and a .csv file with the following columns:
- X, Y and Z coordinates (in mm) and zone representing location in UTM coordinates from GPS
- Heading, pitch and roll (in degrees) from the IMU. In some early routes the IMU failed; when this occurs these values are recorded as "NaN".
- Filename of the corresponding camera image
- Latitude (in decimal degrees north), longitude (in decimal degrees west) and altitude (in m) from GPS
- GPS quality (1=GPS, 2=DGNSS, 4=RTK Fixed and 5=RTK Float) and horizontal dilution (in mm)
- Timestamp (in ms)
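A recording might be loaded along the following lines. The CSV filename and the column-matching heuristics below are guesses based on the description above, not the dataset's actual headers, so check the files in each extracted archive before relying on them.

```python
import pandas as pd

# Hypothetical loading sketch: the CSV filename and exact column headers are
# not specified above, so treat the names here as placeholders.
df = pd.read_csv("campus_route5_2023_11_22_102925/route.csv")
df.columns = [c.strip() for c in df.columns]     # guard against padded headers

# IMU failures are recorded as "NaN"; coerce them to real NaN and drop those rows
imu_cols = [c for c in df.columns
            if any(k in c.lower() for k in ("heading", "pitch", "roll"))]
df[imu_cols] = df[imu_cols].apply(pd.to_numeric, errors="coerce")
valid = df.dropna(subset=imu_cols)
print(f"{len(valid)}/{len(df)} rows with usable IMU data")
```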
Stanmer Park outdoor navigational data
This dataset contains omnidirectional 1440×1440 resolution images taken using a Kodak Pixpro SP360 camera, paired with RTK GPS information obtained using a simpleRTK2B - 4G NTRIP kit and fused yaw, pitch and roll data recorded from a BNO055 IMU. The data was collected using a 4-wheel ground robot that was manually controlled by a human operator. The robot was driven 15 times along a route at Stanmer Park (shown in map.png). The route consists mostly of open fields and a narrow path through a forest, and is approximately 700m long. The recordings took place on various days and at various times starting in March 2021, with the date and time indicated by the filename. For example, '20210420_135721.zip' corresponds to a route driven on 20/04/2021, starting at 13:57:21 GMT. During the recordings the weather varied from clear skies and sunny days to overcast and low-light conditions. Each recording consists of an mp4 video of the camera footage for the route and a database_entries.csv file with the following columns:
- Timestamp of video frame (in ms)
- X, Y and Z coordinates (in mm) and zone representing location in UTM coordinates from GPS
- Heading, pitch and roll (in degrees) from the IMU. In some early routes the IMU failed; when this occurs these values are recorded as "NaN".
- Speed and steering angle commands being sent to the robot at that time
- GPS quality (1=GPS, 2=DGNSS, 4=RTK Fixed and 5=RTK Float)
- X, Y and Z coordinates (in mm) fitted to a degree-one polynomial to smooth out GPS noise
- Heading (in degrees) derived from the smoothed GPS coordinates
- IMU heading (in degrees) with discontinuities resulting from IMU issues fixed
For completeness, each folder also contains a database_entries_original.csv containing the data before pre-processing. The pre-processing is documented in more detail in pre_processing_notes.pdf.
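The degree-one polynomial smoothing and GPS-derived heading mentioned in the column list could be reproduced roughly as below. The sliding-window size and all variable names are assumptions made for the sketch; the dataset's own pre_processing_notes.pdf is the authoritative description.

```python
import numpy as np

# Sketch of degree-one polynomial smoothing of noisy GPS coordinates and a
# heading derived from the smoothed track. Window size is an assumption.
def smooth(coord, t, window=11):
    """Fit a line to each sliding window of samples and take its centre value."""
    half = window // 2
    out = coord.astype(float).copy()
    for i in range(half, len(coord) - half):
        sl = slice(i - half, i + half + 1)
        slope, intercept = np.polyfit(t[sl], coord[sl], deg=1)
        out[i] = slope * t[i] + intercept
    return out

def headings(x, y):
    """Heading in degrees derived from consecutive smoothed positions."""
    return np.degrees(np.arctan2(np.diff(y), np.diff(x)))

t = np.arange(100.0)
x = t * 10 + np.random.default_rng(3).normal(0, 50, 100)   # noisy easting in mm
y = t * 5 + np.random.default_rng(4).normal(0, 50, 100)    # noisy northing in mm
print(headings(smooth(x, t), smooth(y, t))[:5])
```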
Efficient SpiNNaker simulation of a heteroassociative memory using the Neural Engineering Framework
The biological brain is a highly plastic system within which the efficacy and structure of synaptic connections are constantly changing in response to internal and external stimuli. While numerous models of this plastic behaviour exist at various levels of abstraction, how these mechanisms allow the brain to learn meaningful values is unclear. The Neural Engineering Framework (NEF) is a hypothesis about how large-scale neural systems represent values using populations of spiking neurons, and transform them using functions implemented by the synaptic weights between populations. By exploiting the fact that these connection weight matrices are factorable, we have recently shown that static NEF models can be simulated very efficiently using the SpiNNaker neuromorphic architecture. In this paper, we demonstrate how this approach can be extended to efficiently support both supervised and unsupervised learning rules designed to operate on these factored matrices. We then present a heteroassociative memory architecture built using these learning rules and show that it is capable of learning a human-scale semantic network. Finally, we demonstrate a 100,000 neuron version of this architecture running on the SpiNNaker simulator with a speed-up exceeding 150× when compared to the Nengo reference simulator.
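The efficiency argument rests on the factorisation of NEF connection weights, which is easy to show in isolation: a full neuron-to-neuron weight matrix W = E · D never has to be built, because the same input can be computed with two small matrix-vector products through the low-dimensional value space. A minimal numpy sketch with arbitrary sizes:

```python
import numpy as np

# Factored NEF connection weights: applying W = E @ D directly versus applying
# the decoders and encoders separately. Population sizes are arbitrary.
rng = np.random.default_rng(5)
n_pre, n_post, dims = 1000, 1000, 3
D = rng.normal(0, 0.01, (dims, n_pre))           # decoders of the presynaptic population
E = rng.normal(0, 1.0, (n_post, dims))           # encoders of the postsynaptic population

activities = rng.random(n_pre)                   # filtered presynaptic activities

full = (E @ D) @ activities                      # n_post x n_pre weight matrix: big
factored = E @ (D @ activities)                  # two small products: cheap to store/apply
print(np.allclose(full, factored))               # identical result, far less memory
```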
Editorial: Neuroscience, computing, performance, and benchmarks: why it matters to neuroscience how fast we can compute
No description supplied.
Hoverfly (Eristalis tenax) descending neurons respond to pursuits of artificial targets
Many animals use motion vision information to control dynamic behaviors. Predatory animals, for example, show an exquisite ability to detect rapidly moving prey followed by pursuit and capture. Such target detection is not only used by predators but can also play an important role in conspecific interactions. Male hoverflies (Eristalis tenax), for example, vigorously defend their territories against conspecific intruders. Visual target detection is believed to be subserved by specialized target-tuned neurons that are found in a range of species, including vertebrates and arthropods. However, how these target-tuned neurons respond to actual pursuit trajectories is currently not well understood. To redress this, we recorded extracellularly from target selective descending neurons (TSDNs) in male Eristalis tenax hoverflies. We show that they have dorso-frontal receptive fields, with a preferred direction up and away from the visual midline, which cluster into TSDN-Left and TSDN-Right. We reconstructed visual flow-fields as experienced during pursuits of artificial targets (black beads). We recorded TSDN responses to six reconstructed pursuits and found that each neuron responded consistently at remarkably specific time points, but that these time points differed between neurons. We found that the observed spike probability was correlated with the spike probability predicted from each neuron's receptive field and size tuning. Interestingly, however, the overall response rate was low, with individual neurons responding to only a small part of each reconstructed pursuit. In contrast, the TSDN-Left and TSDN-Right populations responded to substantially larger proportions of the pursuits, but with lower probability. This large variation between neurons could be useful if different neurons control different parts of the behavioral output.