An Online Unsupervised Structural Plasticity Algorithm for Spiking Neural Networks
In this article, we propose a novel Winner-Take-All (WTA) architecture
employing neurons with nonlinear dendrites and an online unsupervised
structural plasticity rule for training it. Further, to aid hardware
implementations, our network employs only binary synapses. The proposed
learning rule is inspired by spike-timing-dependent plasticity (STDP) but differs
for each dendrite based on its activation level. It trains the WTA network
through formation and elimination of connections between inputs and synapses.
To demonstrate the performance of the proposed network and learning rule, we employ it to solve two-, four-, and six-class classification of random Poisson spike-time inputs. The results indicate that, by proper tuning of the inhibitory time constant of the WTA, a trade-off between the specificity and sensitivity of the network can be achieved. We use the inhibitory time constant to set the number of subpatterns per pattern we want to detect. We show that while the percentage of successful trials is 92%, 88%, and 82% for two-, four-, and six-class classification when no pattern subdivisions are made, it increases to 100% when each pattern is subdivided into 5 or 10 subpatterns. However, the former scenario of no pattern subdivision is more resilient to jitter than the latter ones.
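A minimal Python sketch of the kind of connection formation and elimination the abstract describes, for a single neuron with nonlinear dendrites and binary synapses. The dendrite-selection and synapse-elimination criteria, all sizes, and the squaring nonlinearity are illustrative assumptions rather than the paper's exact rule, and the WTA competition between neurons is omitted.

    import numpy as np

    rng = np.random.default_rng(0)

    N_INPUTS = 100        # afferent spike-source lines
    N_DENDRITES = 4       # nonlinear dendrites of one neuron
    SYN_PER_DENDRITE = 8  # binary synapses per dendrite

    # Binary connectivity: each dendrite holds a set of input-line indices.
    connections = rng.choice(N_INPUTS, size=(N_DENDRITES, SYN_PER_DENDRITE), replace=False)

    def dendrite_activations(x):
        """Linear sum of binary synaptic input per dendrite, then a nonlinearity (square)."""
        return x[connections].sum(axis=1) ** 2

    def structural_update(x):
        """One plasticity step: on the least-activated dendrite, eliminate a synapse
        whose input line was silent and form a new synapse onto an active line.
        The 'least-activated dendrite / silent input' criterion is an assumption
        made for illustration, not the paper's rule."""
        d = int(np.argmin(dendrite_activations(x)))
        silent = np.flatnonzero(x[connections[d]] == 0)
        if silent.size == 0:
            return
        victim = rng.choice(silent)
        candidates = np.setdiff1d(np.flatnonzero(x), connections[d])
        if candidates.size:
            connections[d, victim] = rng.choice(candidates)

    # Toy usage: one binary spike-count pattern for the current time window.
    x = (rng.random(N_INPUTS) < 0.2).astype(int)
    structural_update(x)
    print(dendrite_activations(x))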
Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites
This paper presents a spike-based model which employs neurons with
functionally distinct dendritic compartments for classifying high dimensional
binary patterns. The synaptic inputs arriving on each dendritic subunit are
nonlinearly processed before being linearly integrated at the soma, giving the
neuron a capacity to perform a large number of input-output mappings. The model
utilizes sparse synaptic connectivity, where each synapse takes a binary value.
The optimal connection pattern of a neuron is learned using a simple,
hardware-friendly, margin-enhancing learning algorithm inspired by the
mechanism of structural plasticity in biological neurons. The learning
algorithm groups correlated synaptic inputs on the same dendritic branch. Since
the learning results in modified connection patterns, it can be incorporated
into current event-based neuromorphic systems with little overhead. This work
also presents a branch-specific spike-based version of this structural
plasticity rule. The proposed model is evaluated on benchmark binary
classification problems and its performance is compared against that achieved
using Support Vector Machine (SVM) and Extreme Learning Machine (ELM)
techniques. Our proposed method attains comparable performance while utilizing 10% to 50% fewer computational resources than the other reported techniques.
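The Python sketch below illustrates one way a neuron with nonlinearly processed dendritic branches and binary synapses could be trained by a greedy, margin-driven connection swap. The branch counts, the squaring nonlinearity, the threshold, and the exhaustive candidate search are assumptions for illustration, not the paper's algorithm.

    import numpy as np

    rng = np.random.default_rng(1)

    D, K, M = 10, 5, 200   # dendritic branches, binary synapses per branch, input dimension
    conn = rng.integers(0, M, size=(D, K))   # sparse binary connectivity (indices into the input)

    def soma_output(x, conn):
        """Branch sums are squared (the branch nonlinearity) and summed at the soma."""
        branch = x[conn].sum(axis=1)
        return (branch ** 2).sum()

    def margin_step(x, y, conn, theta=25.0):
        """One greedy structural move: if the classification margin is too small,
        replace a randomly chosen synapse with the candidate input that most
        improves the margin.  The greedy search is an illustrative simplification."""
        margin = (soma_output(x, conn) - theta) * y
        if margin > 1.0:
            return conn
        d, k = rng.integers(D), rng.integers(K)
        best, best_margin = conn[d, k], margin
        for cand in range(M):
            trial = conn.copy()
            trial[d, k] = cand
            m = (soma_output(x, trial) - theta) * y
            if m > best_margin:
                best, best_margin = cand, m
        conn[d, k] = best
        return conn

    x = (rng.random(M) < 0.1).astype(int)   # a sparse binary input pattern
    conn = margin_step(x, +1, conn)         # y = +1: pattern belongs to the target class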
Dynamical principles in neuroscience
Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708 and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276; and Fundación BBVA.
Generating functionals for computational intelligence: the Fisher information as an objective function for self-limiting Hebbian learning rules
Generating functionals may guide the evolution of a dynamical system and
constitute a possible route for handling the complexity of neural networks as
relevant for computational intelligence. We propose and explore a new objective function, which allows one to obtain plasticity rules for the afferent synaptic weights. The adaptation rules are Hebbian, self-limiting, and result from minimizing the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of input activities whenever one is present. A linear discrimination is performed when there are two or more principal directions; directions with bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance, and full homeostatic adaptation of the synaptic weights results as a by-product of minimizing the synaptic flux. This self-limiting behavior allows for stable online learning for arbitrary durations. The neuron acquires new information when the statistics of the input activities are changed at a certain point of the simulation, showing, however, a distinct resilience against unlearning previously acquired knowledge. Learning is fast when starting with randomly drawn synaptic weights and substantially slower when the synaptic weights are already fully adapted.
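The Fisher-information objective itself is not reproduced here. As a hedged point of comparison, the Python sketch below uses Oja's rule, a classic self-limiting Hebbian update that likewise aligns the weight vector with the principal direction of the inputs while keeping its norm bounded; all parameters are arbitrary toy values, and this is not the paper's derived rule.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy input stream with one dominant principal direction.
    u = np.array([3.0, 1.0]) / np.sqrt(10.0)        # principal direction
    v = np.array([-u[1], u[0]])                      # orthogonal direction

    w = rng.normal(size=2)                           # afferent synaptic weights
    eta = 0.01

    for _ in range(5000):
        x = rng.normal(0, 2.0) * u + rng.normal(0, 0.5) * v   # input activity
        y = w @ x                                              # linear neural activity
        # Oja's rule: Hebbian term y*x plus a self-limiting decay term -y^2*w
        # (used here only as an illustrative stand-in for a self-limiting rule).
        w += eta * y * (x - y * w)

    print(w, u)   # w converges (up to sign) toward the principal direction u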
Memory capacity in the hippocampus
Neural assemblies in hippocampus encode positions. During rest, the hippocampus replays sequences of neural activity seen during awake behavior. This replay is linked to memory consolidation and mental exploration of the environment. Recurrent networks can be used to model the replay of sequential activity. Multiple sequences can be stored in the synaptic connections. To achieve a high memory capacity, recurrent networks require a pattern separation mechanism. Such a mechanism is global remapping, observed in place cell populations. A place cell fires at a particular position of an environment and is silent elsewhere. Multiple place cells usually cover an environment with their firing fields. Small changes in the environment or context of a behavioral task can cause global remapping, i.e., profound changes in place cell firing fields. Global remapping causes some cells to cease firing, other silent cells to gain a place field, and other place cells to move their firing field and change their peak firing rate. The effect is strong enough to make global remapping a viable pattern separation mechanism.
We model two mechanisms that improve the memory capacity of recurrent networks. The effect of inhibition on replay in a recurrent network is modeled using binary neurons and binary synapses. A mean-field approximation is used to determine the optimal parameters for the inhibitory neuron population. Numerical simulations of the full model were carried out to verify the predictions of the mean-field model. A second model analyzes a hypothesized global remapping mechanism, in which grid cell firing is used as feed-forward input to place cells. Grid cells have multiple firing fields in the same environment, arranged in a hexagonal grid. Grid cells can be used in a model as feed-forward inputs to place cells to produce place fields. In these grid-to-place cell models, shifts in the grid cell firing patterns cause remapping in the place cell population. We analyze the capacity of such a system to create sets of separated patterns, i.e., how many different spatial codes can be generated. The limiting factor is the synapses connecting grid cells to place cells. To assess their capacity, we produce different place codes in place and grid cell populations by shuffling place field positions and shifting the grid fields of grid cells. Then we use Hebbian learning to increase the synaptic weights between grid and place cells for each pair of grid and place codes. The capacity limit is reached when synaptic interference makes it impossible to produce a place code with sufficient spatial acuity from grid cell firing. Additionally, it is desirable to keep the place fields compact, or sparse when seen from a coding standpoint. As more environments are stored, however, this sparseness is lost. Interestingly, place cells lose the sparseness of their firing fields much earlier than their spatial acuity.
For the sequence replay model, we are able to increase capacity in a simulated recurrent network by including an inhibitory population. We show that even in this more complicated case, capacity is improved. We observe oscillations in the average activity of both the excitatory and inhibitory neuron populations. The oscillations get stronger at the capacity limit. In addition, at the capacity limit, rather than observing a sudden failure of replay, we find sequences are replayed transiently for a couple of time steps before failing. Analyzing the remapping model, we find that, as we store more spatial codes in the synapses, first the sparseness of place fields is lost. Only later do we observe a decay in the spatial acuity of the code. We found two ways to maintain sparse place fields while achieving a high capacity: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of them in each environment. We present scaling predictions that suggest that hundreds of thousands of spatial codes can be produced by this pattern separation mechanism.
The effect of inhibition on the replay model is twofold. Capacity is increased, and the graceful transition from full replay to failure allows for higher capacities when using short sequences. Additional mechanisms not explored in this model could be at work to concatenate these short sequences, or could perform more complex operations on them. The interplay of excitatory and inhibitory populations gives rise to oscillations, which are strongest at the capacity limit. The oscillations suggest how a memory mechanism could give rise to hippocampal oscillations like those observed in experiments. In the remapping model we showed that the sparseness of place cell firing constrains the capacity of this pattern separation mechanism. Grid codes outperform place codes regarding spatial acuity, as shown in Mathis et al. (2012). Our model shows that the grid-to-place transformation does not harness the full spatial information of the grid code, in order to maintain sparse place fields. This suggests that the two codes are independent, and that communication between the areas might be mostly for synchronization. High spatial acuity seems to be a specialization of the grid code, while the place code is more suitable for memory tasks.
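A toy Python sketch of the first ingredient described above: sequence storage in binary synapses with global feedback inhibition. The network size, the synchronous update, and the activity-dependent threshold are assumptions made for illustration, not the thesis's mean-field model.

    import numpy as np

    rng = np.random.default_rng(4)

    N = 1000          # binary excitatory neurons
    P, L = 15, 10     # stored sequences, patterns per sequence
    M = 50            # active neurons per pattern

    # Store sequences in binary synapses: a synapse is switched on whenever its
    # presynaptic neuron is active in one pattern and the postsynaptic neuron in the next.
    sequences = [[rng.choice(N, M, replace=False) for _ in range(L)] for _ in range(P)]
    W = np.zeros((N, N), dtype=bool)
    for seq in sequences:
        for pre, post in zip(seq[:-1], seq[1:]):
            W[np.ix_(post, pre)] = True

    def replay(start, steps, inhibition=0.7):
        """Synchronous replay with global feedback inhibition: the firing threshold
        tracks the total excitatory activity of the previous step (an illustrative
        stand-in for an explicit inhibitory population)."""
        state = np.zeros(N, dtype=bool)
        state[start] = True
        trace = [state]
        for _ in range(steps):
            drive = (W & state).sum(axis=1)        # summed binary synaptic input
            theta = inhibition * state.sum()       # global inhibition scales with activity
            state = drive >= theta
            trace.append(state)
        return trace

    trace = replay(sequences[0][0], L - 1)
    # overlap of the replayed activity with the stored patterns of sequence 0
    print([int(state[pat].sum()) for state, pat in zip(trace, sequences[0])])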
In a detailed model of hippocampal replay we show that feedback inhibition can increase the number of sequences that can be replayed. The effect of inhibition on capacity is determined using a mean-field model, and the results are verified with numerical simulations of the full network. Transient replay is found at the capacity limit, accompanied by oscillations that resemble sharp-wave ripples in the hippocampus. In a second model, we analyze the pattern separation achieved by global remapping of place cell firing driven by grid cell inputs.
Hippocampal replay of neuronal activity is linked to memory consolidation and
mental exploration. Furthermore, replay is a potential neural correlate of episodic
memory. To model hippocampal sequence replay, recurrent neural networks are
used. The memory capacity of such networks is of great interest for determining their biological feasibility. Additionally, any mechanism that improves capacity has explanatory power. We investigate two such mechanisms.
The first mechanism to improve capacity is global, unspecific feedback inhibition in the recurrent network. In a simplified mean-field model we show that capacity is indeed improved.
The second mechanism that increases memory capacity is pattern separation. In
the spatial context of hippocampal place cell firing, global remapping is one way
to achieve pattern separation. Changes in the environment or context of a task
cause global remapping. During global remapping, place cell firing changes in unpredictable ways: cells shift their place fields or cease firing entirely, and formerly silent cells acquire place fields. Global remapping can be triggered by subtle changes in grid cells that give feed-forward inputs to hippocampal place cells. We investigate the capacity of the underlying synaptic connections, defined as the number of different environments that can be represented at a given spatial acuity. We find two essential conditions for achieving a high capacity and sparse place fields: inhibition between place cells, and partitioning the place cell population so that learning affects only a small fraction of them in each environment. We also find that the sparsity of place fields, rather than spatial acuity, is the constraining factor of the model. Since the hippocampal place code is sparse, we conclude that the hippocampus does not fully harness the spatial information available in the grid code. The two codes of space might thus serve different purposes.
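To make the grid-to-place capacity argument concrete, the Python sketch below stores several environments by Hebbian association between binary grid codes and sparse place codes, then measures recall overlap. The population sizes, grid periods, and winner-take-all readout are illustrative assumptions rather than the thesis's parameters.

    import numpy as np

    rng = np.random.default_rng(3)

    N_GRID, N_PLACE = 200, 400      # grid cells / place cells (toy sizes)
    POSITIONS = 50                  # discretised positions in an environment
    K = 20                          # active place cells per position (sparse code)
    PERIODS = 5 + (np.arange(N_GRID) % 4) * 3   # grid spacings of four grid modules

    def grid_code(phases, pos):
        """Binary grid activity at one position; phases encode the environment-specific shift."""
        return ((pos + phases) % PERIODS < 2).astype(float)

    def make_environment():
        """A new environment: shifted grid phases and a shuffled sparse place code."""
        phases = rng.integers(0, PERIODS)                    # one random phase per grid cell
        place = np.zeros((POSITIONS, N_PLACE))
        for p in range(POSITIONS):
            place[p, rng.choice(N_PLACE, K, replace=False)] = 1
        return phases, place

    W = np.zeros((N_PLACE, N_GRID))
    stored = []
    for env in range(30):                                     # store 30 environments
        phases, place = make_environment()
        stored.append((phases, place))
        for p in range(POSITIONS):
            W += np.outer(place[p], grid_code(phases, p))     # Hebbian association

    def recall_overlap(phases, place):
        """Winner-take-all readout over place cells driven by grid input, compared to the stored code."""
        hits = 0.0
        for p in range(POSITIONS):
            drive = W @ grid_code(phases, p)
            winners = np.argsort(drive)[-K:]
            hits += place[p, winners].sum() / K
        return hits / POSITIONS

    print(np.mean([recall_overlap(*s) for s in stored]))      # degrades as interference grows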
Investigation of Synapto-dendritic Kernel Adapting Neuron models and their use in spiking neuromorphic architectures
The motivation for this thesis is the idea that abstract, adaptive, hardware-efficient, inter-neuronal transfer functions (or kernels), which carry information in the form of postsynaptic membrane potentials, are the most important (and hitherto missing) element in neuromorphic implementations of Spiking Neural Networks (SNNs). In the absence of such abstract kernels, spiking neuromorphic systems must realize very large numbers of synapses and their associated connectivity. The resultant hardware and bandwidth limitations create difficult trade-offs which diminish the usefulness of such systems.
In this thesis a novel model of spiking neurons is proposed. The proposed Synapto-dendritic Kernel Adapting Neuron (SKAN) uses the adaptation of its synapto-dendritic kernels in conjunction with an adaptive threshold to perform unsupervised learning and inference on spatio-temporal spike patterns. The hardware and connectivity requirements of the neuron model are minimized through the use of simple accumulator-based kernels as well as through the use of timing information to perform a winner-take-all operation between the neurons. The learning and inference operations of SKAN are characterized and shown to be robust across a range of noise environments.
Next, the SKAN model is augmented with a simplified, hardware-efficient model of Spike-Timing-Dependent Plasticity (STDP). In biology, STDP is the mechanism that allows neurons to learn spatio-temporal spike patterns. However, when the proposed SKAN model is augmented with a simplified STDP rule, in which the synaptic kernel is used as a binary flag that enables synaptic potentiation, the result is a synaptic encoding of the afferent signal-to-noise ratio (SNR). In this combined model the neuron not only learns the target spatio-temporal spike patterns but also weights each channel independently according to its signal-to-noise ratio. Additionally, a novel approach to achieving homeostatic plasticity in digital hardware is presented, which reduces hardware cost by eliminating the need for multipliers.
Finally, the behavior and potential utility of this combined model are investigated in a range of noise conditions, and the digital hardware resource utilization of SKAN and SKAN + STDP is detailed using Field-Programmable Gate Arrays (FPGAs).
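A rough Python sketch of the flavour of such a neuron: triangular, accumulator-style kernels whose widths adapt so that their peaks line up with the output spike, together with an adaptive threshold. The specific update factors and constants are assumptions for illustration, not the SKAN equations from the thesis.

    import numpy as np

    def triangular_kernel(t_spike, slope, width, t):
        """Accumulator-style kernel: ramps up after an input spike, then ramps back down.
        Floating point is used here for clarity; in hardware this is a simple counter."""
        dt = t - t_spike
        rise = np.clip(dt, 0, width) * slope
        fall = np.clip(dt - width, 0, width) * slope
        return np.maximum(rise - fall, 0.0)

    # One SKAN-like neuron with per-synapse kernel widths and an adaptive threshold.
    widths = np.array([20.0, 20.0, 20.0])      # initial kernel widths (time steps)
    threshold = 40.0
    input_spikes = np.array([5.0, 12.0, 25.0]) # one spatio-temporal input pattern

    for trial in range(50):
        fired_at = None
        for t in range(100):
            u = triangular_kernel(input_spikes, 1.0, widths, t).sum()   # summed membrane drive
            if u >= threshold:
                fired_at = t
                break
        if fired_at is None:
            threshold *= 0.95                   # no output spike: lower the threshold (assumed rule)
            continue
        threshold *= 1.02                       # output spike: raise the threshold slightly (assumed rule)
        # pull each kernel's peak (input spike time + width) toward the output spike time
        widths += 0.1 * (fired_at - (input_spikes + widths))

    print(widths, threshold)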
Are Numerical Symbols Fundamental to Neural Computation?
Neuroclassicism is the view that cognition is computation and that core mental processes, such as perception, memory, and reasoning, are products of digital computations realized in neural tissue. Cognitive psychologist C. R. Gallistel uses this classical framework to argue that all cognitive information processing is based on symbolic operations performed over quantitative values (i.e. numbers) stored in the brain, much like a digital computer. Assuming this hypothesis, he investigates how the brain stores quantitative information (i.e. the numerical symbols involved in neural computation). He claims that it is more plausible that memories for numbers are stored within molecular mechanisms inside the neuron rather than within specific patterns of cell connectivity (the substrate for memory storage assumed by the traditional Hebbian plastic synapse model). In this paper, I dissect and critique Gallistel’s argument, which I find to be undermined by the findings of contemporary neuroscience.