
    Spatio-temporal pattern recognizers using spiking neurons and spike-timing-dependent plasticity.

    It has previously been shown that, using spike-timing-dependent plasticity (STDP), neurons can adapt to the beginning of a repeating spatio-temporal firing pattern in their input. In the present work, we demonstrate that this mechanism can be extended to train recognizers for longer spatio-temporal input signals. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feed-forwardly connected in such a way that both the correct input segment and the firing of the previous neurons are required to activate the next neuron in the chain. This is akin to a simple class of finite state automata. We show that nearest-neighbor STDP (where only the pre-synaptic spike most recent to a post-synaptic one is considered) leads to "nearest-neighbor" chains in which connections only form between subsequent states in a chain (similar to classic "synfire chains"). In contrast, all-to-all STDP (where all pre- and post-synaptic spike pairs matter) leads to multiple connections that can span several temporal stages in the chain; these connections respect the temporal order of the neurons. It is also demonstrated that previously learnt individual chains can be "stitched together" by repeatedly presenting them in a fixed order; in this way, longer sequence recognizers, and potentially also nested structures, can be formed. Robustness of recognition with respect to speed variations in the input patterns is shown to depend on the rise times of post-synaptic potentials and on membrane noise. It is argued that the memory capacity of the model is high and could theoretically be increased further by using sparse codes.
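    The distinction drawn here between the two pairing schemes can be made concrete with the standard exponential STDP window. The following is a minimal sketch, not the paper's code; the window parameters (A_PLUS, A_MINUS, TAU) and spike times are assumed illustrative values, and nearest-neighbor pairing is implemented exactly as defined in the abstract (each post-synaptic spike is paired only with the most recent preceding pre-synaptic spike).

```python
import math

A_PLUS, A_MINUS, TAU = 0.01, 0.012, 20.0  # illustrative window parameters

def stdp_kernel(dt):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)   # pre before post: potentiation
    return -A_MINUS * math.exp(dt / TAU)      # post before pre: depression

def dw_all_to_all(pre_spikes, post_spikes):
    """All pre/post spike pairs contribute to the weight change."""
    return sum(stdp_kernel(tp - tq) for tq in pre_spikes for tp in post_spikes)

def dw_nearest_neighbour(pre_spikes, post_spikes):
    """Each post spike is paired only with the most recent preceding pre spike."""
    total = 0.0
    for tp in post_spikes:
        earlier = [tq for tq in pre_spikes if tq <= tp]
        if earlier:
            total += stdp_kernel(tp - max(earlier))
    return total

pre, post = [5.0, 15.0, 40.0], [18.0, 45.0]  # example spike trains (ms)
print(dw_all_to_all(pre, post), dw_nearest_neighbour(pre, post))
```

    Under all-to-all pairing, distant pre/post pairs still contribute (weakly), which is why connections can span several temporal stages of a chain; under nearest-neighbor pairing, only adjacent stages accumulate weight.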

    An Efficient Method for Online Detection of Polychronous Patterns in Spiking Neural Networks

    Polychronous neural groups are effective structures for the recognition of precise spike-timing patterns, but the standard detection method is an inefficient, multi-stage brute-force process that works off-line on pre-recorded simulation data. This work presents a new model of polychronous patterns that can capture precise sequences of spikes directly in the neural simulation. In this scheme, each neuron is assigned a randomized code that is used to tag the post-synaptic neurons whenever a spike is transmitted. This creates a polychronous code that preserves the order of pre-synaptic activity and can be registered in a hash table when the post-synaptic neuron spikes. A polychronous code is a sub-component of a polychronous group that will occur, along with others, when the group is active. We demonstrate the representational and pattern-recognition ability of polychronous codes on a direction-selective visual task involving moving bars, typical of a computation performed by simple cells in the cortex. The computational efficiency of the proposed algorithm far exceeds that of existing polychronous group detection methods and is well suited for online detection.
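    The tag-and-register mechanism described above can be illustrated in a few lines. This is a hedged sketch under assumed details, not the paper's implementation: per-neuron tags are random 32-bit integers, the receiver buffers incoming tags in arrival order, and the ordered tuple is hashed into a table when the receiver fires. All names (Neuron, transmit, on_spike) are illustrative.

```python
import random
from collections import defaultdict

class Neuron:
    def __init__(self, nid):
        self.id = nid
        self.code = random.getrandbits(32)  # randomized per-neuron tag
        self.buffer = []                    # tags of recent pre-synaptic spikes

code_table = defaultdict(int)  # hash table: polychronous code -> occurrence count

def transmit(pre, post):
    """Called when a spike from `pre` arrives at `post`: tag the receiver."""
    post.buffer.append(pre.code)

def on_spike(post):
    """Called when `post` fires: register the order-preserving code."""
    code = hash(tuple(post.buffer))  # preserves order of pre-synaptic activity
    code_table[code] += 1
    post.buffer.clear()
    return code

a, b, c = Neuron(0), Neuron(1), Neuron(2)
transmit(a, c); transmit(b, c)   # A then B drive C; (B then A) would hash differently
print(on_spike(c), dict(code_table))
```

    Because the code is a hash of the ordered tag sequence, registration is a constant-time table update per post-synaptic spike, which is what makes detection feasible online rather than by off-line brute-force search.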

    Learning, self-organisation and homeostasis in spiking neuron networks using spike-timing dependent plasticity

    Spike-timing dependent plasticity (STDP) is a learning mechanism used extensively within neural modelling. The learning rule has been shown to allow a neuron to find the onset of a spatio-temporal pattern repeated among its afferents. In this thesis, the first question addressed is ‘what does this neuron learn?’ With a spiking neuron model and linear prediction, evidence is adduced that the neuron learns two components: (1) the level of average background activity and (2) the specific spike times of a pattern. Taking advantage of these findings, a network is developed that can train recognisers for longer spatio-temporal input signals using STDP. Using a number of neurons that are mutually connected by plastic synapses and subject to a global winner-takes-all mechanism, chains of neurons can form where each neuron is selective to a different segment of a repeating input pattern, and the neurons are feedforwardly connected in such a way that both the correct stimulus and the firing of the previous neurons are required to activate the next neuron in the chain. This is akin to a simple class of finite state automata. Following this, a novel resource-based STDP learning rule is introduced. The learning rule has several advantages over typical implementations of STDP and results in synaptic statistics that match favourably with those observed experimentally; for example, the synaptic weight distributions and the presence of silent synapses match experimental data.

    Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers

    This PhD thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers, as opposed to the view, still prevalent among most neuroscientists today, of biological neurons as simple and mainly spatial pattern recognizers. In this thesis, I attempt to demonstrate that this is an important distinction, predominantly because the above-mentioned computational properties of single neurons have far-reaching implications for the various brain circuits that neurons compose and for how information is encoded by neuronal activity in the brain; namely, that these particular "low-level" details at the single-neuron level have substantial system-wide ramifications. In the introduction, we highlight the main components that comprise a neural microcircuit capable of useful computations and illustrate the inter-dependence of these components from a system perspective. In chapter 1, we discuss the great complexity of the spatio-temporal input-output relationship of cortical neurons, which results from the morphological structure and biophysical properties of the neuron. In chapter 2, we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns using a very simple, biologically plausible learning rule. In chapter 3, we use the differentiable deep-network analog of a realistic cortical neuron as a tool to approximate the gradient of the output of the neuron with respect to its input, and we use this capability in an attempt to teach the neuron to perform a nonlinear XOR operation. In chapter 4, we expand on chapter 3 and describe the extension of our ideas to neuronal networks composed of many realistic biological spiking neurons, representing either small microcircuits or entire brain regions.

    Editorial: State-dependent brain computation

    The brain is a self-organizing system, which has evolved such that neuronal responses and related behavior are continuously adapted with respect to the external and internal context. This powerful capability is achieved through the modulation of neuronal interactions depending on the history of previously processed information. In particular, the brain updates its connections as it learns successful versus unsuccessful strategies. The resulting connectivity changes, together with stochastic processes (i.e., noise), influence ongoing neuronal dynamics. The role of such state-dependent fluctuations may be one of the fundamental computational properties of the brain, being pervasively present in human behavior and leaving a distinctive fingerprint in neuroscience data. This development is captured by the present Frontiers Research Topic, "State-Dependent Brain Computation."

    Brain-Inspired Spatio-Temporal Associative Memories for Neuroimaging Data Classification: EEG and fMRI

    Humans learn from many information sources to make decisions. Once this information is learned in the brain, spatio-temporal associations are made, connecting all these sources (variables) in space and time, represented as brain connectivity. In reality, to make a decision we usually have only part of the information, either as a limited number of variables, limited time to make the decision, or both. The brain functions as a spatio-temporal associative memory (STAM). Inspired by this ability of the human brain, a brain-inspired STAM was proposed earlier that utilized the NeuCube brain-inspired spiking neural network framework. Here we apply the STAM framework to neuroimaging data, in the cases of EEG and fMRI, resulting in STAM-EEG and STAM-fMRI. This paper shows that once a NeuCube STAM classification model is trained on complete spatio-temporal EEG or fMRI data, it can be recalled using only part of the time series and/or only part of the variables used. We evaluate both temporal and spatial association and generalization accuracy accordingly. This is a pilot study that opens the field for the development of classification systems on other neuroimaging data, such as longitudinal MRI data, trained on complete data but recalled on partial data. Future research includes STAMs that work on data collected across different settings, in different labs and clinics, which may vary in terms of the variables and time of data collection, along with other parameters. The proposed STAM will be further investigated for early diagnosis and prognosis of brain conditions and for diagnostic/prognostic marker discovery.
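    The train-on-complete, recall-on-partial protocol can be illustrated independently of NeuCube. The sketch below is a generic stand-in, not the NeuCube API: a nearest-class-mean classifier is fitted to full spatio-temporal arrays and then recalled with a truncated time window and a channel subset. All data shapes, the toy class difference, and the names (fit_class_means, recall) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_steps = 40, 8, 100
X = rng.normal(size=(n_trials, n_channels, n_steps))  # trials x channels x time
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :, :] += 0.5  # toy class difference so recall is above chance

def fit_class_means(X, y):
    """Train on the complete spatio-temporal data: one template per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def recall(means, x, t_keep=None, ch_keep=None):
    """Classify `x` using only the first t_keep steps and listed channels."""
    t = t_keep or x.shape[-1]
    ch = list(ch_keep) if ch_keep is not None else list(range(x.shape[0]))
    dists = {c: np.linalg.norm(m[ch, :t] - x[ch, :t]) for c, m in means.items()}
    return min(dists, key=dists.get)

means = fit_class_means(X, y)
# Recall with half the time series and half the channels:
preds = [recall(means, x, t_keep=50, ch_keep=range(4)) for x in X]
print("partial-data accuracy:", np.mean(np.array(preds) == y))
```

    Sweeping t_keep and ch_keep is one simple way to quantify the temporal and spatial generalization the abstract refers to: accuracy as a function of how much of the time series, or how many variables, are available at recall time.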

    The Evolution, Analysis, and Design of Minimal Spiking Neural Networks for Temporal Pattern Recognition

    All sensory stimuli are temporal in structure. How a pattern of action potentials encodes the information received from sensory stimuli is an important research question in neuroscience. Although it is clear that information is carried by the number or the timing of spikes, information processing in the nervous system is poorly understood. The desire to understand information processing in the animal brain led to the development of spiking neural networks (SNNs), and understanding information processing in SNNs may give us insight into information processing in the animal brain. One way to understand the mechanisms that enable SNNs to perform a computational task is to associate the structural connectivity of the network with the corresponding functional behaviour. This work demonstrates the structure-function mapping of spiking networks evolved (or handcrafted) for recognising temporal patterns. The SNNs are composed of simple yet biologically meaningful adaptive exponential integrate-and-fire (AdEx) neurons. The computational task can be described as identifying a subsequence of three signals (say ABC) in a random input stream of signals ("ABBBCCBABABCBBCAC"). The topology and connection weights of the networks are optimised using a genetic algorithm such that the network output spikes only for the correct input pattern and remains silent for all others. The fitness function rewards the network output for spiking after receiving the correct pattern and penalises spikes elsewhere. To analyse the effect of noise, two types of noise are introduced during evolution: (i) random fluctuations of the membrane potential of neurons in the network at every network step, and (ii) random variations of the duration of the silent interval between input signals. Evolution in the presence of noise produced networks that were robust to perturbation of neuronal parameters. Moreover, the networks also developed a form of memory, enabling them to maintain network states in the absence of input activity. It is demonstrated that the network states of an evolved network have a one-to-one correspondence with the states of a finite-state transducer (FST), a model of computation for time-structured data. The analysis of the networks indicated that the task of recognition is accomplished by transitions between network states. Evolution may overproduce synaptic connections; pruning these superfluous connections revealed pronounced structural similarities among individuals obtained from different independent runs. Moreover, the analysis of the pruned networks highlighted that memory is a property of self-excitation in the network: neurons with self-excitatory loops (also called autapses) can sustain spiking activity indefinitely in the absence of input activity. To recognise a pattern of length n, a network requires n+1 network states, where n states are maintained actively with autapses and the penultimate state is maintained passively by the absence of activity in the network. At the same time, the role of other connections in the network is identified.
    Of particular interest, three interneurons in the network are found to have specialised roles: (i) the lock neuron is always active, preventing the output from spiking until it is released by the penultimate signal in the correct pattern, allowing the output neuron to spike for the correct final signal; (ii) the switch neuron is responsible for switching the network between the inter-signal states and the start state; and (iii) the accept neuron produces spikes in the output neuron when the network receives the last correct input, and also sends a signal to the switch neuron, transforming the network back into the start state. Understanding how information is processed in the evolved networks led to handcrafting network topologies for recognising longer patterns; the proposed rules can extend network topologies to recognise temporal patterns up to length six. To validate the handcrafted topologies, a genetic algorithm is used to optimise their connection weights. It is observed that the maximum number of active neurons representing a state in the network increases with the pattern length, so the suggested rules can handcraft network topologies only up to length six. Handcrafting network topologies that represent a network state with a fixed number of active neurons requires further investigation.
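    The FST correspondence claimed above can be made concrete. The following is an illustrative sketch, not the thesis code: a minimal FST that recognises the pattern ABC in the example stream from the abstract, emitting 1 exactly when the pattern completes and 0 otherwise. The fallback rule assumes the pattern's symbols are distinct (as in ABC); a pattern with internal repeats would need a proper prefix-function fallback.

```python
def make_fst(pattern):
    """Build a step function for a transducer recognising `pattern`."""
    def step(state, symbol):
        # advance on a match, otherwise fall back (restart, possibly on symbol)
        if symbol == pattern[state]:
            state += 1
        else:
            state = 1 if symbol == pattern[0] else 0
        if state == len(pattern):
            return 0, 1   # accept: emit 1 and return to the start state
        return state, 0
    return step

step = make_fst("ABC")
state, outputs = 0, []
for sym in "ABBBCCBABABCBBCAC":   # the input stream from the abstract
    state, out = step(state, sym)
    outputs.append(out)
print("".join(map(str, outputs)))  # a 1 appears exactly where "ABC" completes
```

    In the evolved networks, each value of `state` corresponds to a distinct network state, and the transitions of `step` correspond to input-driven transitions between those states; the output spike of the accept neuron plays the role of the emitted 1.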

    Fractals in the Nervous System: Conceptual Implications for Theoretical Neuroscience

    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.