An Online Unsupervised Structural Plasticity Algorithm for Spiking Neural Networks
In this article, we propose a novel Winner-Take-All (WTA) architecture
employing neurons with nonlinear dendrites and an online unsupervised
structural plasticity rule for training it. Further, to aid hardware
implementations, our network employs only binary synapses. The proposed
learning rule is inspired by spike time dependent plasticity (STDP) but differs
for each dendrite based on its activation level. It trains the WTA network
through formation and elimination of connections between inputs and synapses.
To demonstrate the performance of the proposed network and learning rule, we employ it to solve two-, four-, and six-class classification of random Poisson spike-time inputs. The results indicate that, by proper tuning of the inhibitory time constant of the WTA, a trade-off between the specificity and sensitivity of the network can be achieved. We use the inhibitory time constant to set the number of subpatterns per pattern we want to detect. We show that while the percentage of successful trials is 92%, 88%, and 82% for two-, four-, and six-class classification when no pattern subdivisions are made, it increases to 100% when each pattern is subdivided into 5 or 10 subpatterns. However, the former scenario of no pattern subdivision is more resilient to jitter than the latter.
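The abstract only outlines the rule, but its core loop can be pictured concretely. Below is a minimal NumPy sketch under assumed simplifications: binary synapses are represented as an index table of input lines per dendritic branch, the dendritic nonlinearity is a squaring, and learning swaps a silent synapse on the weakest branch for an active, unconnected input. All names and heuristics here are illustrative, not the authors' exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_branches, syn_per_branch = 100, 4, 8

# conn[b, s] holds the index of the input line wired to synapse s of branch b;
# all synaptic weights are implicitly binary (connected = 1, absent = 0).
conn = rng.choice(n_inputs, size=(n_branches, syn_per_branch), replace=False)

def branch_activations(x, conn):
    """Sum the binary synaptic input on each branch, then apply a lumped
    dendritic nonlinearity (squaring, an assumed abstraction)."""
    return x[conn].sum(axis=1) ** 2

def structural_update(x, conn, won):
    """If this neuron won the WTA competition on its target pattern,
    replace a silent synapse on the weakest branch with an active,
    currently unconnected input line (connection formation/elimination)."""
    if not won:
        return conn
    b = int(np.argmin(branch_activations(x, conn)))   # weakest branch
    s = int(np.argmin(x[conn[b]]))                    # a silent synapse on it
    candidates = np.setdiff1d(np.flatnonzero(x), conn.ravel())
    if candidates.size:
        conn[b, s] = rng.choice(candidates)           # rewire; weights stay binary
    return conn

# One toy step on a random binary activity snapshot of the input lines.
x = (rng.random(n_inputs) < 0.2).astype(float)
conn = structural_update(x, conn, won=True)
```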
Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites
This paper presents a spike-based model which employs neurons with
functionally distinct dendritic compartments for classifying high dimensional
binary patterns. The synaptic inputs arriving on each dendritic subunit are
nonlinearly processed before being linearly integrated at the soma, giving the
neuron a capacity to perform a large number of input-output mappings. The model
utilizes sparse synaptic connectivity, in which each synapse takes a binary value.
The optimal connection pattern of a neuron is learned using a simple, hardware-friendly, margin-enhancing learning algorithm inspired by the
mechanism of structural plasticity in biological neurons. The learning
algorithm groups correlated synaptic inputs on the same dendritic branch. Since
the learning results in modified connection patterns, it can be incorporated
into current event-based neuromorphic systems with little overhead. This work
also presents a branch-specific spike-based version of this structural
plasticity rule. The proposed model is evaluated on benchmark binary
classification problems and its performance is compared against that achieved
using Support Vector Machine (SVM) and Extreme Learning Machine (ELM)
techniques. Our proposed method attains comparable performance while utilizing 10% to 50% fewer computational resources than the other reported techniques.
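One way to picture the margin-enhancing structural learning described above is as a greedy accept/reject loop over candidate rewirings. The sketch below is a hedged toy version: the squared branch nonlinearity, the fixed firing threshold, and the random-swap proposal are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_branches, syn_per_branch = 64, 8, 4

def neuron_output(X, conn):
    """X: (n_samples, n_inputs) binary patterns. Each branch sums its binary
    synapses, is squared (lumped dendritic nonlinearity), and the branch
    outputs are summed at the soma."""
    return (X[:, conn].sum(axis=2) ** 2).sum(axis=1)

def margin(X, y, conn, theta):
    # Worst-case signed distance of the patterns from the firing threshold.
    return (y * (neuron_output(X, conn) - theta)).min()

def greedy_step(X, y, conn, theta):
    """Propose one random synapse replacement; keep it only if the
    classification margin improves (margin-enhancing structural change)."""
    b, s = rng.integers(n_branches), rng.integers(syn_per_branch)
    trial = conn.copy()
    trial[b, s] = rng.integers(n_inputs)
    return trial if margin(X, y, trial, theta) > margin(X, y, conn, theta) else conn

# Toy problem: 20 random binary patterns with random +/-1 labels.
X = (rng.random((20, n_inputs)) < 0.3).astype(float)
y = rng.choice([-1.0, 1.0], size=20)
conn = rng.integers(n_inputs, size=(n_branches, syn_per_branch))
theta = float(np.median(neuron_output(X, conn)))
for _ in range(200):
    conn = greedy_step(X, y, conn, theta)
```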
Liquid State Machine with Dendritically Enhanced Readout for Low-power, Neuromorphic VLSI Implementations
In this paper, we describe a new neuro-inspired, hardware-friendly readout
stage for the liquid state machine (LSM), a popular model for reservoir
computing. Compared to the parallel perceptron architecture trained by the
p-delta algorithm, which is the state of the art in terms of performance of
readout stages, our readout architecture and learning algorithm can attain better performance with significantly fewer synaptic resources, making the approach attractive for VLSI implementation. Inspired by the nonlinear properties of
dendrites in biological neurons, our readout stage incorporates neurons having
multiple dendrites with a lumped nonlinearity. The number of synaptic
connections on each branch is significantly lower than the total number of
connections from the liquid neurons and the learning algorithm tries to find
the best 'combination' of input connections on each branch to reduce the error.
Hence, the learning involves network rewiring (NRW) of the readout network
similar to structural plasticity observed in its biological counterparts. We
show that, compared to a single perceptron using analog weights, this readout architecture can attain, even with the same number of binary-valued synapses, up to 3.3 times lower error on a two-class spike-train classification problem and 2.4 times lower error on an input-rate approximation task. Even with 60 times more synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented by exploiting the address event representation (AER) protocols commonly used in current neuromorphic systems, where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations.
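The 'connectivity as memory' point can be made concrete with a small sketch. Assuming a hypothetical AER-style routing table that maps presynaptic addresses to (branch, synapse) slots, rewiring the readout reduces to editing table entries; nothing below reflects a specific neuromorphic system's format.

```python
from collections import defaultdict
import numpy as np

n_branches = 10

# Hypothetical AER routing table: presynaptic liquid-neuron address ->
# list of (branch, synapse) slots it drives on the readout neuron.
routing = defaultdict(list)
routing[3].append((0, 1))
routing[7].append((0, 2))
routing[12].append((4, 0))

def readout(spike_addresses):
    """Route a window of spike events through the table, count binary-synapse
    hits per branch, square each branch (lumped dendritic nonlinearity),
    and sum at the soma."""
    hits = np.zeros(n_branches)
    for addr in spike_addresses:
        for branch, _syn in routing[addr]:
            hits[branch] += 1.0
    return float((hits ** 2).sum())

def rewire(addr, old_slot, new_slot):
    """Network rewiring (NRW): move one binary synapse purely by editing
    the routing table; no analog weight is stored or changed."""
    routing[addr].remove(old_slot)
    routing[addr].append(new_slot)

print(readout([3, 7, 3, 12]))   # toy spike window arriving from the liquid
rewire(12, (4, 0), (0, 3))      # one structural learning step
```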
Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers
This PhD thesis is focused on the central idea that single neurons in the
brain should be regarded as temporally precise and highly complex
spatio-temporal pattern recognizers. This opposes the view, prevalent among most neuroscientists today, of biological neurons as simple and mainly spatial pattern recognizers. In this thesis, I will attempt to demonstrate that this
is an important distinction, predominantly because the above-mentioned
computational properties of single neurons have far-reaching implications both for the various brain circuits that neurons compose and for how information is encoded by neuronal activity in the brain. Namely, these
particular "low-level" details at the single neuron level have substantial
system-wide ramifications. In the introduction we will highlight the main
components that comprise a neural microcircuit that can perform useful
computations and illustrate the inter-dependence of these components from a
system perspective. In chapter 1 we discuss the great complexity of the spatio-temporal input-output relationship of cortical neurons, which results from the morphological structure and biophysical properties of the neuron. In
chapter 2 we demonstrate that single neurons can generate temporally precise
output patterns in response to specific spatio-temporal input patterns with a
very simple biologically plausible learning rule. In chapter 3, we use the
differentiable deep network analog of a realistic cortical neuron as a tool to
approximate the gradient of the output of the neuron with respect to its input
and use this capability in an attempt to teach the neuron to perform the nonlinear XOR operation. In chapter 4 we expand on chapter 3, describing the extension of our ideas to neuronal networks composed of many realistic biological spiking neurons that represent either small microcircuits or entire brain regions.
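The chapter-3 idea of using a differentiable network analog to recover gradients can be sketched in a few lines. The toy below stands in for that approach under strong assumptions: a tiny tanh network plays the role of the fitted "deep analog", and we manually backpropagate through it to obtain input sensitivities for the XOR task; none of this is the thesis's actual model.

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # stand-in for the fitted "deep analog"
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def surrogate(x):
    """Differentiable stand-in for the biophysical neuron's input-output map."""
    h = np.tanh(x @ W1 + b1)
    return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # output spike probability

def input_gradient(x, target):
    """d(loss)/d(x) backpropagated through the surrogate: the teaching
    signal that the real neuron does not expose directly."""
    h, y = surrogate(x)
    dy = y - target                      # gradient of cross-entropy at the output
    dh = (dy @ W2.T) * (1.0 - h ** 2)    # back through the tanh layer
    return dh @ W1.T

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])   # XOR targets
grads = input_gradient(X, T)             # per-pattern input sensitivities
```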
A synaptic learning rule for exploiting nonlinear dendritic computation
Information processing in the brain depends on the integration of synaptic input distributed throughout neuronal dendrites. Dendritic integration is a hierarchical process, proposed to be equivalent to integration by a multilayer network, potentially endowing single neurons with substantial computational power. However, whether neurons can learn to harness dendritic properties to realize this potential is unknown. Here, we develop a learning rule from dendritic cable theory and use it to investigate the processing capacity of a detailed pyramidal neuron model. We show that computations using spatial or temporal features of synaptic input patterns can be learned, and even synergistically combined, to solve a canonical nonlinear feature-binding problem. The voltage dependence of the learning rule drives coactive synapses to engage dendritic nonlinearities, whereas spike-timing dependence shapes the time course of subthreshold potentials. Dendritic input-output relationships can therefore be flexibly tuned through synaptic plasticity, allowing optimal implementation of nonlinear functions by single neurons.
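As a rough illustration of such a voltage-dependent rule, the sketch below potentiates coactive synapses more strongly when their local dendritic voltage sits in an NMDA-like nonlinear range. The sigmoidal voltage gate, the somatic error signal, and all constants are assumptions, not the rule derived in the paper.

```python
import numpy as np

def nmda_gate(v, v_half=-40.0, slope=5.0):
    """Assumed sigmoidal fraction of NMDA-like conductance unblocked at
    local dendritic voltage v (mV)."""
    return 1.0 / (1.0 + np.exp(-(v - v_half) / slope))

def update(w, pre, v_dend, error, eta=0.01):
    """One step of a toy voltage-dependent plasticity rule:
    w      -- synaptic weights on one dendritic branch
    pre    -- presynaptic activity (e.g. filtered spike trains)
    v_dend -- local dendritic voltage at each synapse (mV)
    error  -- somatic teaching signal (target minus actual output)"""
    return w + eta * error * pre * nmda_gate(v_dend)

w = np.full(5, 0.5)
w = update(w,
           pre=np.array([1.0, 1.0, 0.0, 1.0, 0.0]),
           v_dend=np.array([-30.0, -35.0, -60.0, -55.0, -70.0]),
           error=0.8)
# Coactive synapses at depolarized dendritic sites (the first two) move most,
# driving them toward engaging the local nonlinearity.
```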
Contributions to models of single neuron computation in striatum and cortex
A deeper understanding is required of how a single neuron utilizes its nonlinear subcellular devices to generate complex neuronal dynamics. Two compartmental models of cortex and striatum are accurately formulated and firmly grounded in the experimental reality of electrophysiology to address the questions: how striatal projection neurons implement location-dependent dendritic integration to carry out association-based computation, and how cortical pyramidal neurons strategically exploit the type and location of synaptic contacts to enrich their computational capacities. Neuronal cells transform continuous signals into discrete time series of action potentials, thereby encoding perceptions and internal states. Compartmental models of nerve cells in cortex and striatum, grounded in electrophysiology, are formulated to address specific questions: i) to what extent do striatal projections implement location-dependent dendritic integration in order to realize association-based computations? ii) to what extent do cortical cells exploit the type and location of synaptic contacts to optimize the computations they realize?
Electrical Compartmentalization in Neurons
The dendritic tree of neurons plays an important role in information processing in the brain. While it is thought that dendrites require independent subunits to perform most of their computations, it is still not understood how they compartmentalize into functional subunits. Here, we show how these subunits can be deduced from the properties of dendrites. We devised a formalism that links the dendritic arborization to an impedance-based tree graph and show how the topology of this graph reveals independent subunits. This analysis reveals that cooperativity between synapses decreases slowly with increasing electrical separation and thus that few independent subunits coexist. We nevertheless find that balanced inputs or shunting inhibition can modify this topology and increase the number and size of the subunits in a context-dependent manner. We also find that this dynamic recompartmentalization can enable branch-specific learning of stimulus features. Analysis of dendritic patch-clamp recording experiments confirmed our theoretical predictions.
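The impedance-based tree-graph construction can be caricatured with a toy graph. In the sketch below, edges carry made-up attenuation factors, and the coupling between two dendritic sites is the product of attenuations along their path; sites whose coupling falls below a threshold behave as independent subunits. This is a schematic of the idea, not the paper's formalism.

```python
import networkx as nx

# Toy dendritic tree: nodes are recording sites, edges carry a made-up
# attenuation factor for signals crossing that stretch of cable.
G = nx.Graph()
for u, v, a in [("soma", "d1", 0.8), ("d1", "d2", 0.7),
                ("d1", "d3", 0.6), ("soma", "d4", 0.5)]:
    G.add_edge(u, v, att=a)

def coupling(a, b):
    """Voltage coupling between two sites: product of attenuations along
    the unique tree path connecting them."""
    path = nx.shortest_path(G, a, b)
    c = 1.0
    for u, v in zip(path, path[1:]):
        c *= G[u][v]["att"]
    return c

# Sites above some coupling threshold cooperate; below it they behave as
# (nearly) independent subunits.
print(coupling("d2", "d3"))   # 0.7 * 0.6 = 0.42
print(coupling("d2", "d4"))   # 0.7 * 0.8 * 0.5 = 0.28 -> separate subunits
```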
Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks
Throughout history, the development of artificial intelligence, particularly artificial neural networks, has been open to, and constantly inspired by, an increasingly deep understanding of the brain; the neocognitron, the pioneering work behind convolutional neural networks, is one such inspiration. In line with the motivation of the emerging field of NeuroAI, a great deal of neuroscience knowledge can help catalyze the next generation of AI by endowing networks with more powerful capabilities. The human brain contains numerous morphologically and functionally different neurons, while artificial neural networks are almost exclusively built on a single neuron type. In the human brain, neuronal diversity is an enabling factor for all kinds of intelligent behavior. To the extent that an artificial network is a miniature of the human brain, introducing neuronal diversity should be valuable for addressing essential problems of artificial networks such as efficiency, interpretability, and memory. In this Primer, we first discuss the preliminaries of biological neuronal diversity and the characteristics of information transmission and processing in a biological neuron. Then, we review studies on designing new neurons for artificial networks. Next, we discuss the gains that neuronal diversity can bring to artificial networks, along with exemplary applications in several important fields. Lastly, we discuss the challenges and future directions of neuronal diversity in exploring the potential of NeuroAI.
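As a toy example of what "introducing neuronal diversity" can look like in practice, the sketch below mixes conventional linear units with quadratic units (one neuron type studied in this literature) inside a single layer. The 50/50 split, the quadratic form, and all names are illustrative choices, not prescriptions from the Primer.

```python
import numpy as np

rng = np.random.default_rng(3)

class DiverseLayer:
    """A layer mixing two neuron types: standard linear units and
    quadratic units with pairwise input interactions."""
    def __init__(self, n_in, n_linear, n_quadratic):
        self.W = rng.normal(scale=0.1, size=(n_in, n_linear))
        # Each quadratic unit gets its own (n_in x n_in) interaction matrix.
        self.Q = rng.normal(scale=0.1, size=(n_quadratic, n_in, n_in))
        self.Wq = rng.normal(scale=0.1, size=(n_in, n_quadratic))

    def __call__(self, x):
        linear = x @ self.W                                   # conventional units
        quad = np.einsum("bi,qij,bj->bq", x, self.Q, x) + x @ self.Wq
        return np.tanh(np.concatenate([linear, quad], axis=1))

layer = DiverseLayer(n_in=16, n_linear=8, n_quadratic=8)
y = layer(rng.normal(size=(4, 16)))   # batch of 4 -> (4, 16) mixed features
```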