8 research outputs found

    Wide learning: Using an ensemble of biologically-plausible spiking neural networks for unsupervised parallel classification of spatio-temporal patterns

    Spiking neural networks have previously been used to perform tasks such as object recognition without supervision. One concern about spiking neural networks is their speed of operation and the number of iterations needed to train and use them. Here, we propose a biologically plausible spiking neural network model that is used in multiple, separately trained copies to process subsets of data in parallel. This ensemble of networks is tested on the task of unsupervised classification of spatio-temporal patterns. Results show that despite different starting weights and independent training, the networks produce highly similar spiking patterns in response to the same class of inputs, enabling classification with fast training times.
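The ensemble logic described in this abstract, independently trained copies that emit similar spike signatures for the same input class, can be sketched as follows. This is an illustrative stand-in, not the paper's model: `spike_signature` replaces the actual spiking dynamics with a thresholded linear drive, and all names and parameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_signature(weights, pattern):
    """Hypothetical stand-in for running one trained network copy on an
    input pattern and recording which output neurons spike (binary vector).
    The real model uses spiking dynamics; a thresholded linear drive is
    enough to illustrate the ensemble logic."""
    drive = weights @ pattern
    return (drive > np.median(drive)).astype(int)

# Several copies with different random starting weights; in the paper each
# copy is trained independently (and without supervision) on a data subset.
ensemble = [rng.normal(size=(8, 16)) for _ in range(4)]

def classify(pattern, nets, exemplars):
    """Each network votes for the class whose exemplar evokes the spike
    signature closest (by Hamming agreement) to the one this pattern
    evokes; the ensemble answer is the majority vote."""
    votes = []
    for net in nets:
        sig = spike_signature(net, pattern)
        protos = [spike_signature(net, ex) for ex in exemplars]
        scores = [int(np.sum(sig == p)) for p in protos]
        votes.append(int(np.argmax(scores)))
    return max(set(votes), key=votes.count)
```

Because every copy maps a given class to a stable signature, the vote is consistent across copies even though their weights differ, which is the property the abstract reports.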

    Ensembles of Compact, Region-specific & Regularized Spiking Neural Networks for Scalable Place Recognition

    Spiking neural networks have significant potential utility in robotics due to their high energy efficiency on specialized hardware, but proof-of-concept implementations have not yet typically achieved performance or capability competitive with conventional approaches. In this paper, we tackle one of the key practical challenges, scalability, by introducing a novel modular ensemble network approach in which compact, localized spiking networks each learn, and are solely responsible for recognizing, places within a local region of the environment. This modular approach creates a highly scalable system, but it comes at a performance cost: the lack of global regularization at deployment time leads to hyperactive neurons that erroneously respond to places outside their learned region. Our second contribution introduces a regularization approach that detects and removes these problematic hyperactive neurons during the initial environmental learning phase. We evaluate this new scalable modular system on the benchmark localization datasets Nordland and Oxford RobotCar, comparing against the standard techniques NetVLAD and SAD as well as a previous spiking neural network system. Our system substantially outperforms the previous SNN system on its small dataset, maintains performance on benchmark datasets 27 times larger, on which running the previous system is computationally infeasible, and performs competitively with the conventional localization systems.

    Comment: 8 pages, 6 figures, under review
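The hyperactive-neuron regularization this abstract describes amounts to a filtering step during the initial learning phase. A minimal sketch, assuming spike counts have been recorded per neuron and per place; the keep/remove criterion below (mean in-region response must exceed mean out-of-region response) is illustrative, and the paper's actual detection rule may differ:

```python
import numpy as np

def prune_hyperactive(responses, region_mask, ratio_threshold=1.0):
    """Flag neurons to keep in one regional module.

    responses:    (n_neurons, n_places) spike counts recorded during the
                  initial environmental learning phase.
    region_mask:  boolean (n_places,) marking the places this module is
                  responsible for.

    A neuron is kept only if its mean response inside its region exceeds
    ratio_threshold times its mean response outside it; the rest are the
    'hyperactive' neurons that would fire for out-of-region places."""
    inside = responses[:, region_mask].mean(axis=1)
    outside = responses[:, ~region_mask].mean(axis=1)
    return inside > ratio_threshold * outside
```

Applying this per module restores a form of global regularization without requiring the modules to communicate at deployment time, which is what keeps the approach scalable.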

    Modulation of Spike-Timing Dependent Plasticity: Towards the Inclusion of a Third Factor in Computational Models

    In spike-timing-dependent plasticity (STDP), the change in synaptic strength depends on the relative timing of pre- vs. postsynaptic spiking activity. Since STDP is in compliance with Hebb’s postulate, it is considered one of the major mechanisms of memory storage and recall. STDP comprises a system of two coincidence detectors, with N-methyl-D-aspartate receptor (NMDAR) activation often posited as one of the main components. Numerous studies have unveiled a third component of this coincidence detection system, namely neuromodulation and glial activity shaping STDP. Although dopaminergic control of STDP has been reported most often, acetylcholine, noradrenaline, nitric oxide (NO), brain-derived neurotrophic factor (BDNF) and gamma-aminobutyric acid (GABA) have also been shown to effectively modulate STDP. Furthermore, it has been demonstrated that astrocytes, via the release or uptake of glutamate, gate STDP expression. At the most fundamental level, the timing properties of STDP are expected to depend on the spatiotemporal dynamics of the underlying signaling pathways. In most cases, however, technical limitations mean that experiments grant only indirect access to these pathways. Computational models carefully constrained by experiments allow for a better qualitative understanding of the molecular basis of STDP and its regulation by neuromodulators. Recently, computational models of calcium dynamics and signaling pathway molecules have started to explore the emergence of STDP in ex vivo- and in vivo-like conditions. These models are expected to better reproduce at least part of the complex modulation of STDP as an emergent property of the underlying molecular pathways. Elucidating the mechanisms underlying STDP modulation and its consequences for network dynamics is of critical importance and will allow a better understanding of the major mechanisms of memory storage and recall in both health and disease.
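A common computational formulation of the "third factor" discussed in this abstract gates a pair-based STDP window with a neuromodulatory signal. A minimal sketch, assuming a standard exponential STDP kernel and a multiplicative modulator in [0, 1]; the constants and the purely multiplicative form are illustrative, not taken from any specific model in the article:

```python
import numpy as np

def stdp_kernel(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Classic pair-based STDP window: potentiation when the presynaptic
    spike precedes the postsynaptic one (dt = t_post - t_pre > 0, in ms),
    depression otherwise."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

def three_factor_update(w, dt, modulator):
    """Third factor (e.g. a dopamine-like signal) gates the weight change
    set up by spike-timing coincidence: dw = modulator * STDP(dt).
    With modulator = 0 the timing coincidence leaves the weight untouched,
    which is one way neuromodulation can switch plasticity on and off."""
    return w + modulator * stdp_kernel(dt)
```

In richer models the modulator scales an eligibility trace rather than the instantaneous update, but the gating idea is the same.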

    Bio-mimetic Spiking Neural Networks for unsupervised clustering of spatio-temporal data

    Spiking neural networks aspire to mimic the brain more closely than traditional artificial neural networks. They are characterised by a spike-like activation function inspired by the shape of an action potential in biological neurons. Spiking networks remain a niche area of research: they perform worse than traditional artificial networks, and their real-world applications are limited. We hypothesised that neuroscience-inspired spiking neural networks with spike-timing-dependent plasticity demonstrate useful learning capabilities. Our objective was to identify features which play a vital role in information processing in the brain but are not commonly used in artificial networks, to implement them in spiking networks without copying the constraints that apply to living organisms, and to characterise their effect on data processing. The networks we created are not brain models; our approach can be labelled as artificial life. We performed a literature review and selected features such as local weight updates, neuronal sub-types, modularity, homeostasis and structural plasticity. We used the review as a guide for developing consecutive iterations of the network, and eventually a whole evolutionary developmental system. We analysed the model’s performance on clustering of spatio-temporal data. Our results show that combining evolution and unsupervised learning leads to faster convergence on optimal solutions and better stability of fit solutions than either approach alone. The choice of fitness definition affects the network’s performance on both fitness-related and unrelated tasks. We found that neuron type-specific weight homeostasis can be used to stabilise the networks, thus enabling longer training. We also demonstrated that networks with a rudimentary architecture can evolve developmental rules which improve their fitness.
This interdisciplinary work provides contributions to three fields: it proposes novel artificial intelligence approaches, tests the possible role of the selected biological phenomena in information processing in the brain, and explores the evolution of learning in an artificial life system.
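The neuron type-specific weight homeostasis this abstract credits with stabilising the networks is often implemented as multiplicative synaptic scaling toward a per-type target. A minimal sketch under that assumption; the function name, the per-type target sums, and the multiplicative form are illustrative, not the thesis's actual rule:

```python
import numpy as np

def homeostatic_scaling(weights, neuron_types, target_sums):
    """Multiplicatively rescale each neuron's incoming weights so that
    their sum matches a target that depends on the neuron's type.

    weights:      (n_neurons, n_inputs) incoming weight matrix.
    neuron_types: per-neuron type label, e.g. "exc" or "inh".
    target_sums:  dict mapping type label to the desired incoming sum.

    Keeping each neuron's total drive bounded prevents runaway
    potentiation under STDP, which is what enables longer training."""
    scaled = weights.copy()
    for i, t in enumerate(neuron_types):
        s = scaled[i].sum()
        if s > 0:
            scaled[i] *= target_sums[t] / s
    return scaled
```

Because the scaling is multiplicative, the relative strengths learned by STDP within each neuron's inputs are preserved; only the overall gain is clamped.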

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors

    The Evolution, Analysis, and Design of Minimal Spiking Neural Networks for Temporal Pattern Recognition

    All sensory stimuli are temporal in structure. How a pattern of action potentials encodes the information received from sensory stimuli is an important research question in neuroscience. Although it is clear that information is carried by the number or the timing of spikes, information processing in the nervous system remains poorly understood. The desire to understand information processing in the animal brain led to the development of spiking neural networks (SNNs), and understanding information processing in SNNs may give us insight into information processing in the animal brain. One way to understand the mechanisms which enable SNNs to perform a computational task is to associate the structural connectivity of the network with the corresponding functional behaviour. This work demonstrates the structure-function mapping of spiking networks evolved (or handcrafted) for recognising temporal patterns. The SNNs are composed of simple yet biologically meaningful adaptive exponential integrate-and-fire (AdEx) neurons. The computational task can be described as identifying a subsequence of three signals (say ABC) in a random input stream of signals ("ABBBCCBABABCBBCAC"). The topology and connection weights of the networks are optimised using a genetic algorithm such that the network output spikes only for the correct input pattern and remains silent for all others. The fitness function rewards the network output for spiking after receiving the correct pattern and penalises spikes elsewhere. To analyse the effect of noise, two types of noise are introduced during evolution: (i) random fluctuations of the membrane potential of neurons in the network at every network step, and (ii) random variations of the duration of the silent interval between input signals. It has been observed that evolution in the presence of noise produced networks that were robust to perturbations of neuronal parameters.
Moreover, the networks also developed a form of memory, enabling them to maintain network states in the absence of input activity. It has been demonstrated that the network states of an evolved network have a one-to-one correspondence with the states of a finite-state transducer (FST), a model of computation for time-structured data. The analysis of networks indicated that the task of recognition is accomplished by transitions between network states. Evolution may overproduce synaptic connections; pruning these superfluous connections revealed pronounced structural similarities among individuals obtained from different independent runs. Moreover, the analysis of the pruned networks highlighted that memory is a property of self-excitation in the network. Neurons with self-excitatory loops (also called autapses) can sustain spiking activity indefinitely in the absence of input activity. To recognise a pattern of length n, a network requires n+1 network states, where n states are maintained actively with autapses and the penultimate state is maintained passively by no activity in the network. The roles of other connections in the network are also identified. Of particular interest, three interneurons in the network are found to have specialised roles: (i) the lock neuron is always active, preventing the output from spiking unless it is released by the penultimate signal in the correct pattern, allowing the output neuron to spike for the correct last signal; (ii) the switch neuron is responsible for switching the network between the inter-signal states and the start state; and (iii) the accept neuron produces spikes in the output neuron when the network receives the last correct input and also signals the switch neuron, transforming the network back into the start state. Understanding how information is processed in the evolved networks led to handcrafting network topologies for recognising more extended patterns.
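The memory-by-self-excitation finding above can be illustrated with a single integrate-and-fire neuron carrying an autapse: once the external drive stops, each spike feeds back enough current to trigger the next. This is a sketch, not the thesis's AdEx model, and every parameter value here is an assumption chosen only to make the effect visible:

```python
def lif_with_autapse(input_current, w_self=1.6, tau=10.0,
                     v_thresh=1.0, steps=60):
    """Leaky integrate-and-fire neuron with a self-excitatory loop
    (autapse). External input drives the neuron only for the first
    10 steps; with w_self large enough, autaptic feedback then keeps
    the neuron spiking indefinitely, the memory mechanism identified
    in the evolved networks. With w_self = 0, activity dies out."""
    v, spikes, fed_back = 0.0, [], 0.0
    for t in range(steps):
        i_ext = input_current if t < 10 else 0.0  # drive only early on
        v += (-v / tau) + i_ext + fed_back        # leak + inputs
        fed_back = 0.0
        if v >= v_thresh:
            spikes.append(t)
            v = 0.0               # reset after a spike
            fed_back = w_self     # autaptic feedback arrives next step
    return spikes
```

Comparing the runs with and without the autapse shows the same contrast the pruned-network analysis found: self-excitation is what lets a state outlive its input.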
The proposed rules can extend network topologies to recognise temporal patterns up to length six. To validate a handcrafted topology, a genetic algorithm is used to optimise its connection weights. It has been observed that the maximum number of active neurons representing a state in the network increases with the pattern length; therefore, the suggested rules can handcraft network topologies only up to length six. Handcrafting network topologies that represent a network state with a fixed number of active neurons requires further investigation.
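The one-to-one correspondence between network states and finite-state transducer states described in this abstract can be made concrete in plain code. A minimal sketch (an explicit FST rather than a spiking network): the fallback rule below assumes a pattern whose prefix does not overlap itself, as with ABC, and the stream is the example from the abstract.

```python
def recognize(stream, pattern="ABC"):
    """Finite-state transducer equivalent of the evolved networks:
    advance one state per correct input signal and emit an output
    'spike' (1) when the full pattern has been seen, otherwise emit 0.
    A pattern of length n needs n+1 states (0..n), matching the
    network-state count reported in the thesis."""
    state, out = 0, []
    for sym in stream:
        if sym == pattern[state]:
            state += 1                       # correct next signal
        else:
            state = 1 if sym == pattern[0] else 0  # fall back / restart
        if state == len(pattern):
            out.append(1)                    # accept: output spikes
            state = 0                        # back to the start state
        else:
            out.append(0)
    return out
```

The lock/switch/accept interneurons identified in the pruned networks implement exactly these transitions: the lock holds the output silent until state n-1, the accept emits the spike, and the switch returns the machine to state 0.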