Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective
On metrics of density and power efficiency, neuromorphic technologies have
the potential to surpass mainstream computing technologies in tasks where
real-time functionality, adaptability, and autonomy are essential. While
algorithmic advances in neuromorphic computing are proceeding successfully, the
potential of memristors to improve neuromorphic computing has not yet borne
fruit, primarily because they are often used as drop-in replacements for
conventional memory. However, interdisciplinary approaches anchored in machine
learning theory suggest that multifactor plasticity rules matching neural and
synaptic dynamics to the device capabilities can take better advantage of
memristor dynamics and their stochasticity. Furthermore, such plasticity rules
generally show much higher performance than classical spike-timing-dependent
plasticity (STDP) rules. This chapter reviews recent developments in learning
with spiking neural network models and their possible implementation in
memristor-based hardware.
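For contrast with the multifactor rules discussed above, the classical pair-based STDP kernel can be sketched in a few lines (the function name and all parameter values are illustrative, not taken from the chapter):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Classical pair-based STDP weight change for one spike pair.

    delta_t = t_post - t_pre in ms. Pre-before-post (delta_t > 0)
    potentiates the synapse; post-before-pre depresses it, with an
    exponential window of time constant tau.
    """
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau)
    return -a_minus * np.exp(delta_t / tau)
```

Multifactor rules extend this two-factor kernel with additional signals (e.g. neuromodulation or device state), which is what makes them a better match for memristor dynamics.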
A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback
Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as
a candidate for a learning rule that could explain how behaviorally relevant
adaptive changes in complex networks of spiking neurons could be achieved in a
self-organizing manner through local synaptic plasticity. However, the
capabilities and limitations of this learning rule could so far only be tested
through computer simulations. This article provides tools for an analytic
treatment of reward-modulated STDP, which allows us to predict under which
conditions reward-modulated STDP will achieve a desired learning effect. These
analytical results imply that neurons can learn through reward-modulated STDP to
classify not only spatial but also temporal firing patterns of presynaptic
neurons. They also can learn to respond to specific presynaptic firing patterns
with particular spike patterns. Finally, the resulting learning theory predicts
that even difficult credit-assignment problems, where it is very hard to tell
which synaptic weights should be modified in order to increase the global reward
for the system, can be solved in a self-organizing manner through
reward-modulated STDP. This yields an explanation for a fundamental experimental
result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys
were rewarded for increasing the firing rate of a particular neuron in the
cortex and were able to solve this extremely difficult credit assignment
problem. Our model for this experiment relies on a combination of
reward-modulated STDP with variable spontaneous firing activity. Hence it also
provides a possible functional explanation for trial-to-trial variability, which
is characteristic for cortical networks of neurons but has no analogue in
currently existing artificial computing systems. In addition our model
demonstrates that reward-modulated STDP can be applied to all synapses in a
large recurrent neural network without endangering the stability of the network
dynamics.
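A three-factor update of this kind is straightforward to sketch. The following is a minimal, illustrative implementation (all names and constants are assumptions, not the authors' code): an STDP-like coincidence term accumulates in a decaying eligibility trace, and a global reward signal gates whether the trace is converted into actual weight changes.

```python
import numpy as np

def r_stdp_step(w, pre, post, elig, reward, tau_e=200.0, dt=1.0, lr=0.01):
    """One step of a reward-modulated (three-factor) STDP rule.

    Pre/post spike coincidences write a local term into the
    eligibility trace; the global reward then decides whether
    (and in which direction) the trace changes the weights.
    """
    elig = elig * np.exp(-dt / tau_e) + np.outer(post, pre)  # local trace
    w = w + lr * reward * elig                               # reward gating
    return w, elig

# Coincident spikes alone leave the weights untouched; only a
# nonzero reward converts the eligibility trace into a change.
w = np.zeros((2, 3))
elig = np.zeros((2, 3))
pre = np.array([1.0, 0.0, 0.0])   # presynaptic spike vector
post = np.array([0.0, 1.0])       # postsynaptic spike vector
w_no_r, elig = r_stdp_step(w, pre, post, elig, reward=0.0)
w_r, _ = r_stdp_step(w, pre, post, elig, reward=1.0)
```

The delay tolerance of the trace (tau_e) is what lets the rule bridge the gap between a synaptic event and a later reward, which is the crux of the credit-assignment argument above.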
PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based on Predictive Coding in Spiking Neural Networks
Deemed the third generation of neural networks, event-driven Spiking Neural
Networks (SNNs) combined with biologically plausible local learning rules are
promising building blocks for low-power neuromorphic hardware. However, because
of the non-linearity and discrete nature of spiking neurons, the training of
SNNs remains difficult and is still under discussion. Originating from gradient
descent, backpropagation has achieved stunning success in multi-layer SNNs.
Nevertheless, it is widely considered to lack biological plausibility, while
consuming relatively high computational resources. In this paper, we propose a
novel learning algorithm inspired by predictive coding theory and show that it
can perform supervised learning fully autonomously, as successfully as
backpropagation, utilizing only local Hebbian plasticity. Moreover, this method
achieves a favorable performance compared to the state-of-the-art multi-layer
SNNs: test accuracy of 99.25% for the Caltech Face/Motorbike dataset, 84.25%
for the ETH-80 dataset, 98.1% for the MNIST dataset and 98.5% for the
neuromorphic dataset N-MNIST. Furthermore, our work provides a new perspective
on how supervised learning algorithms may be implemented directly in spiking
neural circuitry, which may offer new insights into neuromorphic computation in
neuroscience.
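The core local operation in predictive-coding schemes of this kind can be sketched as a Hebbian update driven by a layer-local prediction error (a simplified rate-based illustration, not the PC-SNN algorithm itself; all names are assumptions):

```python
import numpy as np

def pc_layer_update(W, x_below, x_above, lr=0.05):
    """One local predictive-coding update (rate-based sketch).

    The layer above predicts the layer below through W; the local
    prediction error drives a purely Hebbian weight change,
    dW = lr * error (outer) presynaptic activity, with no weight
    transport from other layers.
    """
    pred = W @ x_above                    # top-down prediction
    err = x_below - pred                  # layer-local error
    W = W + lr * np.outer(err, x_above)   # local Hebbian update
    return W, err
```

For a small enough learning rate, iterating this update shrinks the local prediction error geometrically, which is what lets a stack of such layers approximate supervised learning without a global backward pass.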
Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
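The locality of such a rule can be illustrated with a simplified rate-based sketch (names and shapes are assumptions, not the paper's code): a fixed random feedback matrix projects the readout error onto each neuron, and each synapse changes in proportion to that projected error times its own filtered presynaptic input.

```python
import numpy as np

def follow_update(W, pre_filtered, error, feedback, lr=1e-3):
    """One FOLLOW-style weight update (simplified, rate-based).

    `feedback` is a fixed random matrix (carrying the negative loop
    gain in the full scheme) that projects the output error onto
    each postsynaptic neuron; the update then uses only quantities
    available locally at the synapse.
    """
    proj_err = feedback @ error                     # error at each neuron
    return W + lr * np.outer(proj_err, pre_filtered)
```

Because the feedback matrix is fixed and random, no symmetric backward weights are needed, which is what makes the scheme plausible for online learning in spiking substrates.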
Learning Autonomous Flight Controllers with Spiking Neural Networks
The ability of a robot to adapt in-mission to achieve an assigned goal is highly desirable. This thesis project places an emphasis on employing learning-based intelligent control methodologies to the development and implementation of an autonomous unmanned aerial vehicle (UAV). Flight control is carried out by evolving spiking neural networks (SNNs) with Hebbian plasticity. The proposed implementation is capable of learning and self-adaptation to model variations and uncertainties when the controller learned in simulation is deployed on a physical platform.
Controller development for small multicopters often relies on simulations as an intermediate step, providing cheap, parallelisable, observable and reproducible optimisation with no risk of damage to hardware. Although model-based approaches have been widely utilised in the process of development, loss of performance can be observed on the target platform due to simplification of system dynamics in simulation (e.g., aerodynamics, servo dynamics, sensor uncertainties). Ignorance of these effects in simulation can significantly deteriorate performance when the controller is deployed. Previous approaches often require mathematical or simulation models with a high level of accuracy which can be difficult to obtain. This thesis, on the other hand, attempts to cross the reality gap between a low-fidelity simulation and the real platform. This is done using synaptic plasticity to adapt the SNN controller evolved in simulation to the actual UAV dynamics.
The primary contribution of this work is the implementation of a procedural methodology for SNN control that integrates bio-inspired learning mechanisms with artificial evolution, with an SNN library package (i.e. eSpinn) developed by the author. Distinct from existing SNN simulators that mainly focus on large-scale neuron interactions and learning mechanisms from a neuroscience perspective, the eSpinn library pays particular attention to embedded implementations on hardware applicable to problems in the robotics domain. This C++ software package not only supports simulations in MATLAB and Python environments, allowing rapid prototyping and validation in simulation, but is also capable of a seamless transition between simulation and deployment on embedded platforms.
This work implements a modified version of the NEAT neuroevolution algorithm and leverages the power of evolutionary computation to discover functional controller compositions and optimise plasticity mechanisms for online adaptation. With the eSpinn software package, the development of spiking neurocontrollers for all degrees of freedom of the UAV is demonstrated in simulation. Plastic height control is carried out on a physical hexacopter platform. Through a set of experiments it is shown that the evolved plastic controller can maintain its functionality by self-adapting to model changes and uncertainties that take place after evolutionary training, and consequently exhibit better performance than its non-plastic counterpart.
Dimensions of Timescales in Neuromorphic Computing Systems
This article is a public deliverable of the EU project "Memory technologies
with multi-scale time constants for neuromorphic architectures" (MeMScales,
https://memscales.eu, Call ICT-06-2019 Unconventional Nanoelectronics, project
number 871371). This arXiv version is a verbatim copy of the deliverable
report, with administrative information stripped. It collects a wide and varied
assortment of phenomena, models, research themes and algorithmic techniques
that are connected with timescale phenomena in the fields of computational
neuroscience, mathematics, machine learning and computer science, with a bias
toward aspects that are relevant for neuromorphic engineering. It turns out
that this theme is very rich indeed and spreads out in many directions which
defy a unified treatment. We collected several dozens of sub-themes, each of
which has been investigated in specialized settings (in the neurosciences,
mathematics, computer science and machine learning) and has been documented in
its own body of literature. The more we dived into this diversity, the more it
became clear that our first effort to compose a survey must remain sketchy and
partial. We conclude with a list of insights distilled from this survey which
give general guidelines for the design of future neuromorphic systems.
Harnessing function from form: towards bio-inspired artificial intelligence in neuronal substrates
Despite the recent success of deep learning, the mammalian brain is still unrivaled when it comes
to interpreting complex, high-dimensional data streams like visual, auditory and somatosensory stimuli.
However, the underlying computational principles allowing the brain to deal with unreliable, high-dimensional
and often incomplete data while having a power consumption on the order of a few watts are still mostly
unknown.
In this work, we investigate how specific functionalities emerge from simple structures observed in the
mammalian cortex, and how these might be utilized in non-von Neumann devices like “neuromorphic
hardware”. Firstly, we show that an ensemble of deterministic, spiking neural networks can be shaped by
a simple, local learning rule to perform sampling-based Bayesian inference. This suggests a coding scheme
where spikes (or “action potentials”) represent samples of a posterior distribution, constrained by sensory
input, without the need for any source of stochasticity. Secondly, we introduce a top-down framework where
neuronal and synaptic dynamics are derived using a least action principle and gradient-based minimization.
Combined, neurosynaptic dynamics approximate real-time error backpropagation, mappable to mechanistic
components of cortical networks, whose dynamics can again be described within the proposed framework.
The presented models narrow the gap between well-defined, functional algorithms and their biophysical
implementation, improving our understanding of the computational principles the brain might employ.
Furthermore, such models are naturally translated to hardware mimicking the vastly parallel neural
structure of the brain, promising a strongly accelerated and energy-efficient implementation of powerful
learning and inference algorithms, which we demonstrate for the physical model system “BrainScaleS-1”.
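The spikes-as-samples coding scheme can be illustrated, independently of how the thesis obtains it in deterministic networks, with a small Gibbs sampler over binary units whose on-fraction estimates a marginal posterior (an illustrative sketch with explicit stochasticity, not the thesis model; all names are assumptions):

```python
import numpy as np

def sampling_marginals(W, b, steps=4000, seed=0):
    """Estimate Boltzmann-distribution marginals by Gibbs sampling.

    Each binary unit plays the role of a neuron: the fraction of
    sampling steps it spends 'on' (its 'firing rate') approximates
    its marginal posterior under weights W and biases b.
    """
    rng = np.random.default_rng(seed)
    z = np.zeros(len(b))
    on_fraction = np.zeros(len(b))
    for _ in range(steps):
        for i in range(len(b)):
            u = W[i] @ z - W[i, i] * z[i] + b[i]  # input from the others
            z[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-u)) else 0.0
        on_fraction += z
    return on_fraction / steps
```

Sensory evidence enters through the biases and weights; the point of the first result above is that deterministic spiking dynamics can realize an equivalent sampling process without any explicit noise source.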
Learning and Decision Making in Social Contexts: Neural and Computational Models
Social interaction is one of humanity's defining features. Through it, we develop ideas, express emotions, and form relationships. In this thesis, we explore the topic of social cognition by building biologically-plausible computational models of learning and decision making. Our goal is to develop mechanistic explanations for how the brain performs a variety of social tasks, to test those theories by simulating neural networks, and to validate our models by comparing to human and animal data.
We begin by introducing social cognition from functional and anatomical perspectives, then present the Neural Engineering Framework, which we use throughout the thesis to specify functional brain models. Over the course of four chapters, we investigate many aspects of social cognition using these models. We begin by studying fear conditioning using an anatomically accurate model of the amygdala. We validate this model by comparing the response properties of our simulated neurons with real amygdala neurons, showing that simulated behavior is consistent with animal data, and exploring how simulated fear generalization relates to normal and anxious humans. Next, we show that biologically-detailed networks may realize cognitive operations that are essential for social cognition. We validate this approach by constructing a working memory network from multi-compartment cells and conductance-based synapses, then show that its mnemonic performance is comparable to animals performing a delayed match-to-sample task. In the next chapter, we study decision making and the tradeoffs between speed and accuracy: our network gathers information from the environment and tracks the value of choice alternatives, making a decision once certain criteria are met. We apply this model to a two-choice decision task, fit model parameters to recreate the behavior of individual humans, and reproduce the speed-accuracy tradeoff evident in the human population. Finally, we combine our networks for learning, working memory, and decision making into a cognitive agent that uses reinforcement learning to play a simple social game. We compare this model with two other cognitive architectures and with human data from an experiment we ran, and show that our three cognitive agents recreate important patterns in the human data, especially those related to social value orientation and cooperative behavior. 
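The evidence-accumulation scheme described for the two-choice task is closely related to a drift-diffusion process, which can be sketched as follows (parameter values and names are illustrative, not fitted to the thesis data):

```python
import numpy as np

def drift_diffusion(drift=0.2, noise=1.0, threshold=5.0, dt=0.1,
                    max_steps=100_000, seed=0):
    """Accumulate noisy evidence until a decision bound is crossed.

    Returns (choice, reaction_time): choice 1 for the upper bound,
    0 for the lower. A higher threshold gathers more evidence
    (more accurate) at the cost of a longer reaction time.
    """
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= threshold:
            return (1 if x > 0 else 0), t
    return None, t  # no decision within the time limit
```

With the same noise realisation, raising the bound can only lengthen the reaction time, which is the speed-accuracy tradeoff the fitted network reproduces.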
Our concluding chapter summarizes our contributions to the field of social cognition and proposes directions for further research.
The main contribution of this thesis is the demonstration that a diverse set of social cognitive abilities may be explained, simulated, and validated using a functionally-descriptive, biologically-plausible theoretical framework. Our models lay a foundation for studying increasingly sophisticated forms of social cognition in future work.