51 research outputs found
An Efficient Method for Online Detection of Polychronous Patterns in Spiking Neural Network
Polychronous neural groups are effective structures for the recognition of
precise spike-timing patterns, but the detection method is an inefficient,
multi-stage brute-force process that works offline on pre-recorded simulation
data. This work presents a new model of polychronous patterns that can capture
precise sequences of spikes directly in the neural simulation. In this scheme,
each neuron is assigned a randomized code that is used to tag the post-synaptic
neurons whenever a spike is transmitted. This creates a polychronous code that
preserves the order of pre-synaptic activity and can be registered in a hash
table when the post-synaptic neuron spikes. A polychronous code is a
sub-component of a polychronous group that will occur, along with others, when
the group is active. We demonstrate the representational and pattern
recognition ability of polychronous codes on a direction selective visual task
involving moving bars that is typical of a computation performed by simple
cells in the cortex. The computational efficiency of the proposed algorithm far
exceeds that of existing polychronous group detection methods and is well suited for
online detection. Comment: 17 pages, 8 figures
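The tagging scheme described in the abstract can be sketched in a few lines of Python. This is our own toy reconstruction, not the authors' code; all class, method, and variable names are illustrative. Each neuron carries a fixed randomized code, every spike deposits the presynaptic code on its targets, and the ordered sequence of codes is registered in a hash table when a target fires:

```python
import random
from collections import defaultdict

class PolyCodeNetwork:
    """Toy sketch of polychronous-code tagging (names are illustrative)."""

    def __init__(self, n_neurons, synapses, seed=0):
        rng = random.Random(seed)
        # one distinct randomized code per neuron
        self.code = rng.sample(range(1 << 16), n_neurons)
        self.synapses = synapses             # pre -> list of post neurons
        self.inbox = defaultdict(list)       # post -> ordered presynaptic codes
        self.code_counts = defaultdict(int)  # hash table of observed codes

    def transmit(self, pre):
        """Presynaptic neuron `pre` spikes: tag all its targets."""
        for post in self.synapses.get(pre, []):
            self.inbox[post].append(self.code[pre])

    def fire(self, post):
        """Postsynaptic neuron spikes: register its polychronous code."""
        poly_code = tuple(self.inbox[post])  # order-preserving, hashable
        self.code_counts[poly_code] += 1
        self.inbox[post].clear()
        return poly_code

net = PolyCodeNetwork(3, {0: [2], 1: [2]})
net.transmit(0); net.transmit(1)   # pre 0 fires before pre 1
c_ab = net.fire(2)
net.transmit(1); net.transmit(0)   # reversed presynaptic order
c_ba = net.fire(2)
print(c_ab != c_ba)  # True: the code preserves the order of presynaptic activity
```

Because the registered code is the ordered tuple of presynaptic tags, two input sequences that differ only in timing order produce distinct hash-table entries, which is the property the abstract relies on.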
Design for a Darwinian Brain: Part 1. Philosophy and Neuroscience
Physical symbol systems are needed for open-ended cognition. A good way to
understand physical symbol systems is by comparison of thought to chemistry.
Both have systematicity, productivity and compositionality. The state of the
art in cognitive architectures for open-ended cognition is critically assessed.
I conclude that a cognitive architecture that evolves symbol structures in the
brain is a promising candidate to explain open-ended cognition. Part 2 of the
paper presents such a cognitive architecture. Comment: Darwinian Neurodynamics. Submitted as a two-part paper to Living
Machines 2013, Natural History Museum, London
Detection and storage of multivariate temporal sequences by spiking pattern reverberators
We consider networks of spiking coincidence detectors in continuous time. A single detector is a finite state machine that emits a pulsatile signal whenever the number of incoming inputs exceeds a threshold within a time window of some tolerance width. Such finite state models are well suited to hardware implementations of neural networks, such as integrated circuits (ICs) or field-programmable gate arrays (FPGAs), but they also reflect the natural capability of many neurons to act as coincidence detectors. We pay special attention to a recurrent coupling structure in which the delays are tuned to a specific pattern. Applying this pattern as an external input leads to a self-sustained reverberation of the encoded pattern if the tuning is chosen correctly. In terms of the coupling structure and the tolerance and refractory time of the individual coincidence detectors, we determine conditions for the uniqueness of the sustained activity, i.e., for the functionality of the network as an unambiguous pattern detector. We also present numerical experiments in which the functionality of the proposed pattern detector is demonstrated with the simplistic finite state models replaced by more realistic Hodgkin-Huxley neurons, and we consider the possibility of implementing several pattern detectors using a set of shared coincidence detectors. We propose that inhibitory connections may help to increase the precision of the pattern discrimination.
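The finite state detector described above is small enough to sketch directly. The toy Python version below (our own, with illustrative names and parameters) fires when at least `threshold` inputs arrive within a sliding window of width `window`, then ignores further input for a refractory period:

```python
class CoincidenceDetector:
    """Toy finite-state coincidence detector (names are illustrative):
    fires when `threshold` inputs fall within a window of the given
    tolerance width, then stays silent for `refractory` time units."""

    def __init__(self, threshold, window, refractory):
        self.threshold = threshold
        self.window = window
        self.refractory = refractory
        self.recent = []           # arrival times still inside the window
        self.blocked_until = -1.0  # end of the current refractory period

    def input(self, t):
        """Feed one input spike at time t; return True if the detector fires."""
        if t < self.blocked_until:
            return False
        # drop arrivals that fell out of the tolerance window
        self.recent = [s for s in self.recent if t - s <= self.window]
        self.recent.append(t)
        if len(self.recent) >= self.threshold:
            self.recent.clear()
            self.blocked_until = t + self.refractory
            return True
        return False

d = CoincidenceDetector(threshold=2, window=1.0, refractory=5.0)
print(d.input(0.0))   # False: only one spike in the window
print(d.input(0.5))   # True: two spikes within 1.0 time unit -> fire
print(d.input(0.6))   # False: refractory until t = 5.5
print(d.input(10.0))  # False: the window restarts after the refractory period
```

The reverberating networks in the paper would wire many such detectors together through tuned delay lines; this sketch only captures the single-detector state machine.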
The emergence of polychronous groups under varying input patterns, plasticity rules and network connectivities
Polychronous groups are unique temporal patterns of neural activity that exist implicitly within non-linear, recurrently connected networks. Through Hebbian-based learning these groups can be strengthened to give rise to larger chains of spatiotemporal activity. Compared to other structures such as synfire chains, they have demonstrated the potential for a much larger capacity for memory or computation within spiking neural networks. Polychronous groups are believed to relate to the input signals under which they emerge. Here we investigate the quantity of groups that emerge from increasing numbers of repeating input patterns, whilst also comparing the differences between two plasticity rules and two network connectivities. We find – perhaps counter-intuitively – that fewer groups are formed as the number of repeating input patterns increases. Furthermore, we find that a tri-phasic learning rule gives rise to fewer groups than the ‘classical’ double decaying exponential STDP plasticity window. It is also found that a scale-free network structure produces a similar quantity of groups, though generally smaller ones, than a randomly connected Erdős–Rényi structure.
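The two plasticity rules compared in this study can be written as window functions of the spike-time difference Δt = t_post − t_pre. The parameterizations below are illustrative sketches, not the constants used in the paper; in particular, the tri-phasic form shown is just one common Mexican-hat-like shape:

```python
import math

def stdp_classical(dt, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """'Classical' double decaying exponential STDP window.
    dt = t_post - t_pre in ms: potentiate when pre precedes post."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

def stdp_triphasic(dt, a=1.0, tau=20.0):
    """One illustrative tri-phasic window: a central potentiation bump
    flanked by depression lobes on both sides of dt = 0."""
    return a * (1 - (dt / tau) ** 2) * math.exp(-(dt ** 2) / (2 * tau ** 2))

# pre 5 ms before post: both rules potentiate
print(stdp_classical(5.0) > 0, stdp_triphasic(5.0) > 0)      # True True
# a 50 ms lag falls in the depression region of both rules
print(stdp_classical(-50.0) < 0, stdp_triphasic(-50.0) < 0)  # True True
```

The qualitative difference is that the tri-phasic window also depresses strongly timed but too-early or too-late pairings, which plausibly contributes to the smaller number of groups reported for it.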
Homogenous Chaotic Network Serving as a Rate/Population Code to Temporal Code Converter
At present, it is obvious that different sections of the nervous system use different methods of information coding. Primary afferent signals are in most cases represented as spike trains using a combination of rate coding and population coding, while there is clear evidence that temporal coding is used in various regions of the cortex. In the present paper, it is shown that conversion between these two coding schemes can be performed, under certain conditions, by a homogeneous chaotic neural network. Interestingly, this effect can be achieved without network training or synaptic plasticity.
Rigorous Neural Network Simulations: A Model Substantiation Methodology for Increasing the Correctness of Simulation Results in the Absence of Experimental Validation Data
The reproduction and replication of scientific results is an indispensable aspect of good scientific practice, enabling previous studies to be built upon and increasing our level of confidence in them. However, reproducibility and replicability are not sufficient: an incorrect result will be accurately reproduced if the same incorrect methods are used. For the field of simulations of complex neural networks, the causes of incorrect results range from insufficient model implementations and data analysis methods, through deficiencies in workmanship (e.g., simulation planning, setup, and execution), to errors induced by hardware constraints (e.g., limitations in numerical precision). In order to build credibility, methods such as verification and validation have been developed, but they are not yet well established in the field of neural network modeling and simulation, partly due to ambiguity concerning the terminology. In this manuscript, we propose a terminology for model verification and validation in the field of neural network modeling and simulation. We outline a rigorous workflow, derived from model verification and validation methodologies, for increasing model credibility when it is not possible to validate against experimental data. We compare a published minimal spiking network model capable of exhibiting the development of polychronous groups to its reproduction on the SpiNNaker neuromorphic system, considering the dynamics of several selected network states. As a result, by following a formalized process, we show that numerical accuracy is critically important, and that even small deviations in the dynamics of individual neurons are expressed in the dynamics at the network level.
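The point about numerical precision can be illustrated with a toy experiment (ours, not the paper's workflow): Euler-integrating Izhikevich's regular-spiking neuron while rounding every state update to IEEE single precision makes the trajectory drift away from its double-precision counterpart. The function names are our own; the neuron parameters (a=0.02, b=0.2, c=−65, d=8) are Izhikevich's published regular-spiking values:

```python
import struct

def f32(x):
    """Round a Python float to IEEE single precision (stdlib only)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def izhikevich(steps, low_precision, I=10.0, dt=0.5):
    """Euler-integrate Izhikevich's regular-spiking neuron, optionally
    rounding every state update to float32 to mimic a lower-precision
    simulator. Returns (spike step indices, membrane-potential trace)."""
    r = f32 if low_precision else (lambda x: x)
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = c, b * c
    spikes, trace = [], []
    for step in range(steps):
        v = r(v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I))
        u = r(u + dt * a * (b * v - u))
        if v >= 30.0:          # spike: reset the membrane state
            spikes.append(step)
            v, u = c, u + d
        trace.append(v)
    return spikes, trace

s64, t64 = izhikevich(4000, low_precision=False)
s32, t32 = izhikevich(4000, low_precision=True)
max_dev = max(abs(p - q) for p, q in zip(t64, t32))
print(len(s64), len(s32), max_dev > 0.0)  # the two precisions disagree
```

Even with identical code and parameters, the rounded trajectory deviates from the double-precision one, which is exactly the kind of discrepancy the proposed verification workflow is designed to detect at the network level.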
The influence of dopamine on prediction, action and learning
In this thesis I explore functions of the neuromodulator dopamine in the context
of autonomous learning and behaviour. I first investigate dopaminergic influence
within a simulated agent-based model, demonstrating how modulation of
synaptic plasticity can enable reward-mediated learning that is both adaptive and
self-limiting. I describe how this mechanism is driven by the dynamics of
agent-environment interaction and consequently suggest roles for both complex spontaneous
neuronal activity and specific neuroanatomy in the expression of early, exploratory
behaviour. I then show how the observed response of dopamine neurons
in the mammalian basal ganglia may also be modelled by similar processes involving
dopaminergic neuromodulation and cortical spike-pattern representation within
an architecture of counteracting excitatory and inhibitory neural pathways, reflecting
gross mammalian neuroanatomy. Significantly, I demonstrate how combined
modulation of synaptic plasticity and neuronal excitability enables specific (timely)
spike-patterns to be recognised and selectively responded to by efferent neural populations,
therefore providing a novel spike-timing based implementation of the hypothetical
‘serial-compound’ representation suggested by temporal difference learning.
I subsequently discuss more recent work, focused upon modelling those complex
spike-patterns observed in cortex. Here, I describe neural features likely to contribute
to the expression of such activity and subsequently present novel simulation
software allowing for interactive exploration of these factors, in a more comprehensive
neural model that implements both dynamical synapses and dopaminergic
neuromodulation. I conclude by describing how the work presented ultimately suggests
an integrated theory of autonomous learning, in which direct coupling of agent
and environment supports a predictive coding mechanism, bootstrapped in early
development by a more fundamental process of trial-and-error learning.
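Reward-mediated learning of the kind described in the opening of this abstract is commonly modelled with dopamine-gated eligibility traces. A minimal single-synapse sketch (our own; all constants and names are illustrative): each causal pre-to-post pairing deposits an eligibility trace, and the weight changes only while a phasic dopamine signal is elevated:

```python
import math

def reward_modulated_stdp(pairing_times, reward_times, tau_e=200.0,
                          tau_d=50.0, lr=0.1, dt=1.0, t_end=1000.0):
    """Toy reward-modulated STDP on one synapse: pairings build an
    eligibility trace c(t), reward releases dopamine d(t), and the
    weight follows w' = lr * c * d (all constants illustrative)."""
    w, c, d = 0.5, 0.0, 0.0
    pairing_times, reward_times = set(pairing_times), set(reward_times)
    t = 0.0
    while t < t_end:
        if t in pairing_times:
            c += 1.0              # a pre->post pairing tags the synapse
        if t in reward_times:
            d += 1.0              # phasic dopamine pulse
        w += lr * c * d * dt      # plasticity gated by dopamine
        c *= math.exp(-dt / tau_e)
        d *= math.exp(-dt / tau_d)
        t += dt
    return w

# a pairing followed by reward strengthens the synapse...
w_rewarded = reward_modulated_stdp(pairing_times=[100.0], reward_times=[150.0])
# ...while the same pairing with no reward leaves the weight unchanged
w_unrewarded = reward_modulated_stdp(pairing_times=[100.0], reward_times=[])
print(w_rewarded > w_unrewarded)  # True
```

The eligibility trace is what makes the mechanism both adaptive and temporally credit-assigning: the reward can arrive tens to hundreds of milliseconds after the pairing and still select the synapses that caused it.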
Extending Transfer Entropy Improves Identification of Effective Connectivity in a Spiking Cortical Network Model
Transfer entropy (TE) is an information-theoretic measure which has received recent attention in neuroscience for its potential to identify effective connectivity between neurons. Calculating TE for large ensembles of spiking neurons is computationally intensive, and has caused most investigators to probe neural interactions at only a single time delay and at a message length of only a single time bin. This is problematic, as synaptic delays between cortical neurons, for example, range from one to tens of milliseconds. In addition, neurons produce bursts of spikes spanning multiple time bins. To address these issues, here we introduce a free software package that allows TE to be measured at multiple delays and message lengths. To assess performance, we applied these extensions of TE to a spiking cortical network model (Izhikevich, 2006) with known connectivity and a range of synaptic delays. For comparison, we also investigated single-delay TE, at a message length of one bin (D1TE), and cross-correlation (CC) methods. We found that D1TE could identify 36% of true connections when evaluated at a false positive rate of 1%. For extended versions of TE, this dramatically improved to 73% of true connections. In addition, the connections correctly identified by extended versions of TE accounted for 85% of the total synaptic weight in the network. Cross-correlation methods generally performed more poorly than extended TE, but were useful when data length was short. A computational performance analysis demonstrated that the algorithm for extended TE, when used on currently available desktop computers, could extract effective connectivity from 1 hr recordings containing 200 neurons in ∼5 min. We conclude that extending TE to multiple delays and message lengths improves its ability to assess effective connectivity between spiking neurons. These extensions to TE soon could become practical tools for experimentalists who record hundreds of spiking neurons.
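For binned spike trains, single-delay TE with history length one reduces to a conditional mutual information, TE(X→Y, d) = I(y_t ; x_{t−d} | y_{t−1}). The sketch below is a toy plug-in estimator of this quantity, not the paper's software package; a real analysis would need longer histories/messages, bias correction, and far more data:

```python
import math
from collections import Counter

def transfer_entropy(x, y, delay=1):
    """Plug-in transfer entropy (bits) from binary train x to y at a
    single delay, history length 1 (toy estimator, no bias correction)."""
    triples = [(y[t], y[t - 1], x[t - delay])
               for t in range(max(1, delay), len(y))]
    n = len(triples)
    p_yyx = Counter(triples)                          # (y_t, y_prev, x_del)
    p_yx = Counter((yp, xd) for _, yp, xd in triples)
    p_yy = Counter((yt, yp) for yt, yp, _ in triples)
    p_y = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yt, yp, xd), count in p_yyx.items():
        joint = count / n                             # p(y_t, y_prev, x_del)
        cond_full = count / p_yx[(yp, xd)]            # p(y_t | y_prev, x_del)
        cond_hist = p_yy[(yt, yp)] / p_y[yp]          # p(y_t | y_prev)
        te += joint * math.log2(cond_full / cond_hist)
    return te

# x drives y with a 2-bin delay, so TE should peak at the true delay
x = [1, 0, 0, 1, 0, 1, 1, 0] * 50
y = [0, 0] + x[:-2]                 # y copies x, shifted by 2 bins
print(transfer_entropy(x, y, delay=2)
      > transfer_entropy(x, y, delay=1))  # True: TE peaks at the true delay
```

Scanning `delay` over a physiological range of one to tens of milliseconds is exactly the extension the paper argues for: evaluating TE at a single, wrong delay (as in D1TE) underestimates the coupling.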
A review of learning in biologically plausible spiking neural networks
Artificial neural networks have been used as a powerful processing tool in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has significantly progressed in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), that draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in the learning of spiking neurons is presented in this paper. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding, and SNN topologies, are then presented. Next, a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes is given. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.