A Supervised STDP-based Training Algorithm for Living Neural Networks
Neural networks have shown great potential in many applications like speech
recognition, drug discovery, image classification, and object detection. Neural
network models are inspired by biological neural networks, but they are
optimized to perform machine learning tasks on digital computers. The proposed
work explores the possibilities of using living neural networks in vitro as
basic computational elements for machine learning applications. A new
supervised STDP-based learning algorithm is proposed in this work, which
considers neuron engineering constraints. A 74.7% accuracy is achieved on the
MNIST benchmark for handwritten digit recognition.
Comment: 5 pages, 3 figures, Accepted by ICASSP 201
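The abstract names a supervised STDP-based rule but does not spell out the update. As background, a minimal sketch of the classic pair-based STDP weight update (all parameter values here are illustrative, not taken from the paper):

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike, depress otherwise (spike times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        dw = a_plus * np.exp(-dt / tau_plus)    # pre before post: LTP
    else:
        dw = -a_minus * np.exp(dt / tau_minus)  # post before pre: LTD
    return float(np.clip(w + dw, w_min, w_max))
```

A supervised variant, as proposed in the paper, would additionally gate these updates by a teaching signal; the exponential pair-based kernel above is only the common unsupervised core.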
Mantis: Enabling Energy-Efficient Autonomous Mobile Agents with Spiking Neural Networks
Autonomous mobile agents such as unmanned aerial vehicles (UAVs) and mobile
robots have shown huge potential for improving human productivity. These mobile
agents require low power/energy consumption to have a long lifespan since they
are usually powered by batteries. These agents also need to adapt to
changing/dynamic environments, especially when deployed in far or dangerous
locations, thus requiring efficient online learning capabilities. These
requirements can be fulfilled by employing Spiking Neural Networks (SNNs) since
SNNs offer low power/energy consumption due to sparse computations and
efficient online learning due to bio-inspired learning mechanisms. However, a
methodology is still required to employ appropriate SNN models on autonomous
mobile agents. Towards this, we propose a Mantis methodology to systematically
employ SNNs on autonomous mobile agents to enable energy-efficient processing
and adaptive capabilities in dynamic environments. The key ideas of our Mantis
include the optimization of SNN operations, the employment of a bio-plausible
online learning mechanism, and the SNN model selection. The experimental
results demonstrate that our methodology maintains high accuracy with a
significantly smaller memory footprint and energy consumption (i.e., 3.32x
memory reduction and 2.9x energy saving for an SNN model with 8-bit weights)
compared to the baseline network with 32-bit weights. In this manner, our
Mantis enables the employment of SNNs for resource- and energy-constrained
mobile agents.
Comment: To appear at the 2023 International Conference on Automation, Robotics and Applications (ICARA), February 2023, Abu Dhabi, UAE. arXiv admin note: text overlap with arXiv:2206.0865
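The reported 3.32x memory reduction for 8-bit weights is somewhat below the raw 4x ratio of 32-bit to 8-bit storage, presumably because non-weight state is not quantized. A rough sketch of that accounting (the layer sizes and overhead figure are hypothetical, not from the paper):

```python
def snn_weight_memory(n_synapses, bits_per_weight, overhead_bytes=0):
    """Weight memory in bytes for an SNN with uniformly quantized weights,
    plus any fixed overhead (neuron state, indices, etc.)."""
    return n_synapses * bits_per_weight / 8 + overhead_bytes

# Hypothetical fully connected layer: 784 inputs to 400 neurons.
n_syn = 784 * 400
m32 = snn_weight_memory(n_syn, 32)
m8 = snn_weight_memory(n_syn, 8)
print(m32 / m8)  # → 4.0 for weight storage alone

# With unquantized per-neuron state included, the overall ratio drops
# below 4x, in the direction of the 3.32x figure reported for Mantis.
m32_total = snn_weight_memory(n_syn, 32, overhead_bytes=400 * 64)
m8_total = snn_weight_memory(n_syn, 8, overhead_bytes=400 * 64)
print(m32_total / m8_total)
```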
SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments
Spiking Neural Networks (SNNs) bear the potential of efficient unsupervised
and continual learning capabilities because of their biological plausibility,
but their complexity still poses a serious research challenge to enable their
energy-efficient design for resource-constrained scenarios (like embedded
systems, IoT-Edge, etc.). We propose SpikeDyn, a comprehensive framework for
energy-efficient SNNs with continual and unsupervised learning capabilities in
dynamic environments, for both the training and inference phases. It is
achieved through the following multiple diverse mechanisms: 1) reduction of
neuronal operations, by replacing the inhibitory neurons with direct lateral
inhibitions; 2) a memory- and energy-constrained SNN model search algorithm
that employs analytical models to estimate the memory footprint and energy
consumption of different candidate SNN models and selects a Pareto-optimal SNN
model; and 3) a lightweight continual and unsupervised learning algorithm that
employs adaptive learning rates, adaptive membrane threshold potential, weight
decay, and reduction of spurious updates. Our experimental results show that,
for a network with 400 excitatory neurons, our SpikeDyn reduces the energy
consumption on average by 51% for training and by 37% for inference, as
compared to the state-of-the-art. Due to the improved learning algorithm,
SpikeDyn provides on avg. 21% accuracy improvement over the state-of-the-art,
for classifying the most recently learned task, and by 8% on average for the
previously learned tasks.
Comment: To appear at the 58th IEEE/ACM Design Automation Conference (DAC), December 2021, San Francisco, CA, US
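Mechanism 2 selects a Pareto-optimal model from memory and energy estimates. The abstract does not give the selection procedure; a generic non-dominated filter over estimated (memory, energy) pairs, with made-up candidate numbers, could look like:

```python
def pareto_front(models):
    """Return names of models not dominated in (memory, energy),
    lower being better on both axes.
    models: list of (name, memory_estimate, energy_estimate) tuples."""
    front = []
    for name, mem, en in models:
        dominated = any(m2 <= mem and e2 <= en and (m2 < mem or e2 < en)
                        for _, m2, e2 in models)
        if not dominated:
            front.append(name)
    return front

# Hypothetical candidate SNN models with analytical estimates
# (normalized memory, normalized energy).
candidates = [("A", 1.2, 0.9), ("B", 0.8, 1.1), ("C", 1.3, 1.2)]
print(pareto_front(candidates))  # → ['A', 'B']  (C is dominated by A)
```

SpikeDyn would additionally weigh accuracy when picking one model off the front; this sketch only shows the dominance test.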
Neuromorphic Engineering Editors' Pick 2021
This collection showcases well-received spontaneous articles from the past couple of years, specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on its main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology, with applications to compelling problems. This collection aims to further support Frontiers' strong community by recognizing highly deserving authors.
Bio-inspired learning and hardware acceleration with emerging memories
Machine Learning has permeated many aspects of engineering, ranging from Internet of Things (IoT) applications to big data analytics. While the computing resources available to implement these algorithms have become more powerful, both in terms of the complexity of problems that can be solved and the overall computing speed, the huge energy costs involved remain a significant challenge. The human brain, which has evolved over millions of years, is widely accepted as the most efficient control and cognitive processing platform. Neuro-biological studies have established that information processing in the human brain relies on impulse-like signals emitted by neurons, called action potentials. Motivated by these facts, Spiking Neural Networks (SNNs), a bio-plausible version of neural networks, have been proposed as an alternative computing paradigm in which the timing of the spikes generated by artificial neurons is central to learning and inference. This dissertation demonstrates the computational power of SNNs using conventional CMOS and emerging nanoscale hardware platforms.
The first half of this dissertation presents an SNN architecture trained with a supervised spike-based learning algorithm for the handwritten digit classification problem. This network achieves an accuracy of 98.17% on the MNIST test data-set, with about 4X fewer parameters than state-of-the-art neural networks achieving over 99% accuracy. In addition, a scheme for parallelizing and speeding up the SNN simulation on a GPU platform is presented. The second half of this dissertation presents an optimal hardware design for accelerating SNN inference and training with SRAM (Static Random Access Memory) and nanoscale non-volatile memory (NVM) crossbar arrays. Three prominent NVM devices are studied for realizing hardware accelerators for SNNs: Phase Change Memory (PCM), Spin Transfer Torque RAM (STT-RAM), and Resistive RAM (RRAM). The analysis shows that a spike-based inference engine with crossbar arrays of STT-RAM bit-cells is 2X and 5X more efficient than PCM and RRAM memories, respectively. Furthermore, the STT-RAM design has nearly 6X higher throughput per unit Watt per unit area than an equivalent SRAM-based design. A hardware accelerator with on-chip learning on an STT-RAM memory array is also designed, requiring bits of floating-point synaptic weight precision to reach the baseline SNN algorithmic performance on the MNIST dataset. The complete design with an STT-RAM crossbar array achieves nearly 20X higher throughput per unit Watt per unit mm^2 than an equivalent design with SRAM memory.
In summary, this work demonstrates the potential of spike-based neuromorphic computing algorithms and their efficient realization in hardware based on conventional CMOS as well as emerging technologies. The schemes presented here can be further extended to design spike-based systems that can be ubiquitously deployed for energy- and memory-constrained edge computing applications.
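Since spike timing is the computational primitive throughout this dissertation, a minimal leaky integrate-and-fire (LIF) neuron, the standard building block of such SNNs, may help fix ideas; the parameter values below are illustrative, not taken from the work:

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    The membrane potential leaks toward v_rest, integrates the input
    current, and resets on crossing the firing threshold.
    Returns (new membrane potential, spike flag)."""
    v = v + (dt / tau) * (v_rest - v) + i_in
    if v >= v_thresh:
        return v_rest, True   # emit a spike and reset
    return v, False

# Drive the neuron with a constant input and record its spike train.
v, spikes = 0.0, []
for t in range(5):
    v, s = lif_step(v, i_in=0.3)
    spikes.append(s)
print(spikes)  # → [False, False, False, True, False]
```

In a crossbar-based accelerator like the ones studied here, the weighted input current `i_in` would come from a memory array performing the dot product in the analog domain.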
Using MapReduce Streaming for Distributed Life Simulation on the Cloud
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
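The abstract does not detail the strip-partitioning scheme; a plausible reading is that the grid is cut into horizontal strips, each carrying one ghost row from its neighbours so a mapper can update its strip independently. A sketch of the sequential Life step and that partitioning (the strip layout is an assumption, not the paper's exact algorithm):

```python
def life_step(grid):
    """One synchronous step of Conway's Game of Life on a 2D list of
    0/1 cells with dead borders (no wraparound)."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc) and 0 <= r + dr < rows and 0 <= c + dc < cols)
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt

def strips(grid, k):
    """Strip partitioning: split the rows into k strips, each padded
    with one ghost row from its neighbours, so each strip's interior
    can be updated by an independent mapper."""
    rows = len(grid)
    size = (rows + k - 1) // k
    return [grid[max(0, i - 1): min(rows, i + size + 1)]
            for i in range(0, rows, size)]

# A vertical blinker oscillates to a horizontal one in a single step.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 1, 0, 0],
           [0, 0, 0, 0, 0]]
print(life_step(blinker)[2])  # → [0, 1, 1, 1, 0]
```

In the MR streaming setting, each mapper would emit its updated interior rows keyed by row index, and reducers would reassemble the grid for the next generation.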