24 research outputs found

    Measuring symmetry, asymmetry and randomness in neural network connectivity

    Cognitive functions are stored in the connectome, the wiring diagram of the brain, which exhibits non-random features, so-called motifs. In this work, we focus on bidirectional, symmetric motifs, i.e. two neurons that project to each other via connections of equal strength, and unidirectional, non-symmetric motifs, i.e. within a pair of neurons only one neuron projects to the other. We hypothesise that such motifs have been shaped via activity-dependent synaptic plasticity processes. As a consequence, learning moves the distribution of the synaptic connections away from randomness. Our aim is to provide a global, macroscopic, single-parameter characterisation of the statistical occurrence of bidirectional and unidirectional motifs. To this end we define a symmetry measure that does not require any a priori thresholding of the weights or knowledge of their maximal value. We calculate its mean and variance for random uniform or Gaussian distributions, which allows us to introduce a confidence measure of how significantly symmetric or asymmetric a specific configuration is, i.e. how likely it is that the configuration is the result of chance. We demonstrate the discriminatory power of our symmetry measure by inspecting the eigenvalues of different types of connectivity matrices. We show that a Gaussian weight distribution biases the connectivity motifs towards more symmetric configurations than a uniform distribution, and that introducing random synaptic pruning, mimicking developmental regulation in synaptogenesis, biases the connectivity motifs towards more asymmetric configurations, regardless of the distribution. We expect that our work will benefit the computational modelling community by providing a systematic way to characterise symmetry and asymmetry in network structures. Further, our symmetry measure will be of use to electrophysiologists who investigate the symmetry of network connectivity.
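
The abstract does not spell out the measure's exact definition; one minimal, threshold-free symmetry index in the same spirit splits the weight matrix into symmetric and antisymmetric parts (an illustrative choice, not necessarily the paper's formula):

```python
import numpy as np

def symmetry_index(W):
    """Threshold-free symmetry index of a weight matrix (off-diagonal only).

    Decomposes W into symmetric and antisymmetric parts and returns
    (||S||^2 - ||A||^2) / (||S||^2 + ||A||^2): +1 for a fully symmetric
    matrix, -1 for a fully antisymmetric one, near 0 for random weights.
    Illustrative definition, not necessarily the paper's measure.
    """
    W = np.asarray(W, dtype=float)
    off = ~np.eye(W.shape[0], dtype=bool)   # ignore self-connections
    S = 0.5 * (W + W.T)                     # symmetric (bidirectional) part
    A = 0.5 * (W - W.T)                     # antisymmetric part
    s2, a2 = np.sum(S[off] ** 2), np.sum(A[off] ** 2)
    return (s2 - a2) / (s2 + a2)

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 100))
print(symmetry_index(0.5 * (W + W.T)))   # fully symmetric -> 1.0
print(symmetry_index(W))                 # random Gaussian -> close to 0
```

Because the index is a ratio of squared norms, it needs neither a weight threshold nor the maximal weight value, matching the requirement stated in the abstract.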

    Reservoir computing for temporal data classification using a dynamic solid electrolyte ZnO thin film transistor

    The processing of sequential and temporal data is essential to computer vision and speech recognition, two of the most common applications of artificial intelligence (AI). Reservoir computing (RC) is a branch of AI that offers a highly efficient framework for processing temporal inputs at a low training cost compared to conventional recurrent neural networks (RNNs). However, despite extensive effort, two-terminal memristor-based reservoirs have until now been implemented to process sequential data by reading their conductance states only once, at the end of the entire sequence. This method reduces the dimensionality, i.e. the number of signals read out from the reservoir, and thereby lowers the overall performance of reservoir systems. Higher dimensionality facilitates the separation of originally inseparable inputs by reading out a larger set of spatiotemporal features of the inputs. Moreover, memristor-based reservoirs use either multiple pulse rates, fast or slow reads (immediately or with a delay introduced after the end of the sequence), or excitatory pulses to enhance the dimensionality of reservoir states. This adds to the complexity of the reservoir system and reduces power efficiency. In this paper, we demonstrate the first reservoir computing system based on a dynamic three-terminal solid electrolyte ZnO/Ta2O5 thin-film transistor fabricated at less than 100°C. The inherent nonlinearity and dynamic memory of the device lead to a rich separation property of the reservoir states that results in, to our knowledge, the highest accuracy of 94.44% for an electronic charge-based system in the classification of hand-written digits. This improvement is attributed to an increase in the dimensionality of the reservoir obtained by reading the reservoir states after each pulse rather than at the end of the sequence. The third terminal enables a read operation in the off state, that is, when no pulse is applied at the gate terminal, via a small read pulse at the drain. This fundamentally allows multiple read operations without increasing energy consumption, which is not possible in the conventional two-terminal memristor counterpart. Further, we show that the devices do not saturate even after multiple write pulses, which demonstrates the device's ability to process longer sequences.
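
The key architectural point, reading the reservoir after every pulse instead of once at the end of the sequence, can be illustrated with a purely software toy reservoir (hypothetical leaky nonlinear dynamics standing in for the device; node count and constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def run_reservoir(pulses, n_nodes=20, leak=0.7, read_every_pulse=True):
    """Toy leaky nonlinear reservoir (not a device model)."""
    w_in = rng.normal(size=n_nodes)
    x = np.zeros(n_nodes)
    states = []
    for p in pulses:
        # each input pulse nonlinearly perturbs a decaying internal state
        x = leak * x + np.tanh(w_in * p)
        states.append(x.copy())
    # feature vector handed to the trained linear readout
    return np.concatenate(states) if read_every_pulse else states[-1]

seq = [1, 0, 1, 1, 0]
print(run_reservoir(seq, read_every_pulse=True).shape)   # (100,)
print(run_reservoir(seq, read_every_pulse=False).shape)  # (20,)
```

Reading after each pulse multiplies the feature dimensionality by the sequence length, which is what the low-energy drain-side read of the three-terminal device makes affordable.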

    Fine-Tuning and the Stability of Recurrent Neural Networks

    A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility; hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune many stable neural systems.
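
The fine-tuning problem can be made concrete with a one-unit caricature of the integrator: a memory state evolving as x[t+1] = w·x[t] during fixations holds its value only if w = 1. The gradient step below, which descends the squared drift observed during fixations, is purely illustrative (the paper's rule is constrained to known neural signals):

```python
import numpy as np

# Toy one-neuron "integrator": x[t+1] = w * x[t] during a fixation.
# A perfect memory requires w = 1; any mistuning causes eye drift.
# Illustrative tuning: descend the squared drift seen during fixations.
w, eta = 0.90, 0.05                 # mistuned start, learning rate
rng = np.random.default_rng(2)
for _ in range(200):
    x = rng.uniform(0.5, 1.5)       # eye position held after a saccade
    drift = (w - 1.0) * x           # change of x over one fixation step
    w -= eta * 2.0 * drift * x      # gradient step on drift**2
print(round(w, 4))                  # -> 1.0 (stable integrator)
```

The point of the caricature is how narrow the stable regime is: a one-percent error in w already produces visible drift, which is why a continuously acting, biologically plausible tuning signal is needed.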

    Adaptive Gain Modulation in V1 Explains Contextual Modifications during Bisection Learning

    The neuronal processing of visual stimuli in primary visual cortex (V1) can be modified by perceptual training. Training in bisection discrimination, for instance, changes the contextual interactions in V1 elicited by parallel lines. Before training, two parallel lines inhibit their individual V1 responses. After bisection training, this inhibition turns into non-symmetric excitation while the bisection task is being performed. Yet, the receptive field of the V1 neurons evaluated with a single line does not change during task performance. We present a model of recurrent processing in V1 in which the neuronal gain can be modulated by a global attentional signal. Perceptual learning consists mainly of strengthening this attentional signal, leading to a more effective gain modulation. The model reproduces both the psychophysical results on bisection learning and the modified contextual interactions observed in V1 during task performance. It makes several predictions, for instance that imagery training should improve performance, or that a slight stimulus wiggling can strongly affect the representation in V1 while performing the task. We conclude that strengthening a top-down induced gain increase can explain perceptual learning, and that this top-down signal can modify lateral interactions within V1 without significantly changing the classical receptive field of V1 neurons.
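
A toy version of the proposed mechanism, an attentional gain that multiplicatively scales lateral excitation in a threshold-linear two-unit circuit, can be sketched as follows (all weights and gain values are hypothetical, chosen only to show how lateral suppression can flip into facilitation):

```python
import numpy as np

def steady_response(lines, g, w_exc=0.6, w_inh=0.4, steps=200):
    """Fixed point of a two-unit threshold-linear net; the attentional
    gain g scales only the excitatory lateral weight (toy parameters)."""
    h = np.array(lines, dtype=float)            # feedforward drive per line
    W = (g * w_exc - w_inh) * (1 - np.eye(2))   # net lateral interaction
    r = np.zeros(2)
    for _ in range(steps):
        r = np.maximum(0.0, h + W @ r)          # iterate to the fixed point
    return r

print(steady_response([1, 0], g=0.2)[0])   # single line: classical response
print(steady_response([1, 1], g=0.2)[0])   # two lines, low gain: suppressed
print(steady_response([1, 1], g=1.2)[0])   # two lines, high gain: enhanced
```

With a low gain the net lateral weight is negative (mutual inhibition); a stronger attentional gain makes it positive, changing the contextual interaction between the two lines while the single-line response stays at its feedforward value.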

    Democratic population decisions result in robust policy-gradient learning: A parametric study with GPU simulations

    High performance computing on the Graphics Processing Unit (GPU) is an emerging field driven by the promise of high computational power at low cost. However, GPU programming is a non-trivial task, and architectural limitations raise the question of whether investing effort in this direction is worthwhile. In this work, we use GPU programming to simulate a two-layer network of Integrate-and-Fire neurons with varying degrees of recurrent connectivity and investigate its ability to learn a simplified navigation task using a policy-gradient learning rule stemming from Reinforcement Learning. The purpose of this paper is twofold. First, we want to support the use of GPUs in the field of Computational Neuroscience. Second, using GPU computing power, we investigate the conditions under which this architecture and learning rule perform best. Our work indicates that networks featuring strong Mexican-hat-shaped recurrent connections in the top layer, where decision making is governed by the formation of a stable activity bump in the neural population (a "non-democratic" mechanism), achieve mediocre learning results at best. In the absence of recurrent connections, where all neurons "vote" independently ("democratically") for a decision via a population vector readout, the task is generally learned better and more robustly. Our study would have been extremely difficult to carry out on a desktop computer without GPU programming. We present the routines developed for this purpose and show that they provide a speed improvement of 5x to 42x over optimised Python code. The largest speed-up is achieved when we exploit the parallelism of the GPU in the search for learning parameters. This suggests that efficient GPU programming can significantly reduce the time needed to simulate networks of spiking neurons, particularly when multiple parameter configurations are investigated. © 2011 Richmond et al.
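
The "democratic" decision mechanism, every action cell voting independently through a population vector readout, can be sketched in a few lines (cell count, tuning curve, and target direction below are illustrative, not taken from the paper):

```python
import numpy as np

# Democratic readout sketch: each action cell votes with its preferred
# direction, weighted by its firing rate; the decision is the angle of
# the resulting population vector. No recurrent bump is required.
n = 60
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)  # preferred directions
target = np.pi / 3                                    # direction to encode
rates = np.exp(np.cos(theta - target) / 0.3 ** 2)     # hypothetical tuning

vec = rates @ np.column_stack([np.cos(theta), np.sin(theta)])
decision = np.arctan2(vec[1], vec[0])
print(round(decision, 3), round(target, 3))   # decoded angle matches target
```

Because every cell contributes to the vector sum, the readout degrades gracefully when individual rates are noisy, which is one intuition for why the recurrent-free networks learned more robustly.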

    Spike-Based Reinforcement Learning in Continuous State and Action Space: When Policy Gradient Methods Fail

    Changes of synaptic connections between neurons are thought to be the physiological basis of learning. These changes can be gated by neuromodulators that encode the presence of reward. We study a family of reward-modulated synaptic learning rules for spiking neurons on a learning task in continuous space inspired by the Morris water maze. The synaptic update rule modifies the release probability of synaptic transmission and depends on the timing of presynaptic spike arrival, postsynaptic action potentials, and the membrane potential of the postsynaptic neuron. The family of learning rules includes an optimal rule derived from policy gradient methods as well as reward-modulated Hebbian learning. The synaptic update rule is implemented in a population of spiking neurons using a network architecture that combines feedforward input with lateral connections. Actions are represented by a population of hypothetical action cells with strong Mexican-hat connectivity and are read out at theta frequency. We show that in this architecture, a standard policy gradient rule fails to solve the Morris water maze task, whereas a variant with a Hebbian bias can learn the task within 20 trials, consistent with experiments. This result does not depend on implementation details such as the size of the neuronal populations. Our theoretical approach shows how learning new behaviors can be linked to reward-modulated plasticity at the level of single synapses and makes predictions about the voltage and spike-timing dependence of synaptic plasticity and the influence of neuromodulators such as dopamine. It is an important step towards connecting formal theories of reinforcement learning with neuronal and synaptic properties.
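
The flavour of a reward-modulated update, a Hebbian-style eligibility term gated by the reward relative to a running baseline, can be shown for a single stochastic binary neuron (a caricature of the idea, not the paper's spiking rule; task, rates, and constants are invented):

```python
import numpy as np

# Reward-modulated plasticity sketch: dw = eta * (R - R_bar) * eligibility,
# where the eligibility is presynaptic activity times the deviation of the
# postsynaptic response from its expectation, and R_bar tracks mean reward.
# Toy task: fire for stimulus A (+1), stay silent for stimulus B (-1).
rng = np.random.default_rng(3)
w, R_bar, eta = 0.0, 0.0, 0.2
for _ in range(500):
    stim = rng.choice([1.0, -1.0])                 # A = +1, B = -1
    p_fire = 1.0 / (1.0 + np.exp(-w * stim))       # stochastic "neuron"
    post = float(rng.random() < p_fire)            # sampled spike/no-spike
    desired = 1.0 if stim > 0 else 0.0
    R = 1.0 if post == desired else -1.0           # reward for correct act
    w += eta * (R - R_bar) * stim * (post - p_fire)
    R_bar += 0.05 * (R - R_bar)                    # running reward baseline
print(f"learned weight: {w:.2f}")                  # large positive weight
```

With this particular eligibility the rule coincides with a policy-gradient update; biasing the eligibility towards pure pre-times-post correlation is the kind of Hebbian variant the abstract contrasts it with.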

    An Ensemble Analysis of Electromyographic Activity during Whole Body Pointing with the Use of Support Vector Machines

    We explored the use of support vector machines (SVM) to analyse the ensemble activities of 24 postural and focal muscles recorded during a whole body pointing task. Because of the large number of variables involved in motor control studies, such multivariate methods have much to offer over the standard univariate techniques currently employed in the field to detect modifications. The SVM was used to uncover the principal differences underlying several variations of the task. Five variants of the task were used: unconstrained reaching, two variants constrained at the focal level, and two constrained at the postural level. Using the electromyographic (EMG) data, the SVM proved capable of distinguishing all the unconstrained from the constrained conditions with an accuracy of approximately 80% or above. In all cases, including those with focal constraints, the collective postural muscle EMGs were as good as or better than those from focal muscles for discriminating between conditions, which was unexpected especially in the case with focal constraints. In ranking the importance of particular features of the postural EMGs, we found the maximum amplitude, rather than the moment at which it occurred, to be the more discriminative. Classifying with the muscles one at a time allowed us to identify some of the postural muscles that are significantly altered between conditions. The multivariate method also permitted the use of the entire muscle EMG waveform rather than the difficult process of defining and extracting any particular summary variable. The best accuracy was obtained from muscles of the leg rather than from the trunk. By identifying the features that are important for discrimination, the SVM allowed us to identify some of the features that are adapted when constraints are placed on a complex motor task.
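
The multivariate approach, classifying the task condition directly from whole waveforms rather than hand-picked variables, can be sketched with synthetic data standing in for the EMG recordings (the paper does not specify an implementation; scikit-learn's `SVC` is shown as one standard option, and the channel counts and effect size are invented):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "EMG": 80 trials, 24 channels x 50 time samples each, with two
# conditions that differ only by a small amplitude shift on 5 channels.
rng = np.random.default_rng(4)
n_trials, n_ch, n_t = 80, 24, 50
X = rng.normal(size=(n_trials, n_ch * n_t))    # full waveforms as features
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, : 5 * n_t] += 0.4                    # condition effect

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)      # stratified 5-fold CV
print(f"cross-validated accuracy: {scores.mean():.2f}")
```

A linear kernel keeps the decision boundary interpretable: the learned weight on each time sample of each muscle indicates how discriminative that feature is, which is the kind of ranking the study performs.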

    Tag-Trigger-Consolidation: A Model of Early and Late Long-Term-Potentiation and Depression

    Changes in synaptic efficacies need to be long-lasting in order to serve as a substrate for memory. Experimentally, synaptic plasticity exhibits phases covering the induction of long-term potentiation and depression (LTP/LTD) during the early phase of synaptic plasticity, the setting of synaptic tags, a trigger process for protein synthesis, and a slow transition leading to synaptic consolidation during the late phase of synaptic plasticity. We present a mathematical model that describes these different phases of synaptic plasticity. The model explains a large body of experimental data on synaptic tagging and capture, cross-tagging, and the late phases of LTP and LTD. Moreover, the model accounts for the dependence of LTP and LTD induction on voltage and presynaptic stimulation frequency. The stabilization of potentiated synapses during the transition from early to late LTP occurs by protein synthesis dynamics that are shared by groups of synapses. The functional consequence of this shared process is that previously stabilized patterns of strong or weak synapses onto the same postsynaptic neuron are well protected against later changes induced by LTP/LTD protocols at individual synapses.
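
The tag-trigger-consolidation logic can be caricatured with three state variables per synapse: a decaying early weight change, a decaying tag, and a late component that grows only while the tag coincides with available proteins (the time constants and rate below are invented for illustration and are not the paper's equations):

```python
# Minimal tag-and-capture caricature: an early weight change decays unless
# a synaptic tag coincides with plasticity-related proteins, which converts
# it into a persistent late component. Illustrative constants throughout.
def simulate(protein_available, T=300.0, dt=1.0):
    early, tag, late = 1.0, 1.0, 0.0          # state just after induction
    for _ in range(int(T / dt)):
        early += dt * (-early / 60.0)              # early phase decays
        tag += dt * (-tag / 45.0)                  # tag decays too
        if protein_available:
            late += dt * 0.05 * tag * (1.0 - late) # tag + proteins: capture
    return early + late                            # total weight change

print(round(simulate(False), 2))   # early LTP alone decays away
print(round(simulate(True), 2))    # captured change persists
```

Making the protein variable shared across a group of synapses, rather than per-synapse as here, is what produces the protection effect described at the end of the abstract.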

    Abstract concept learning in a simple neural network inspired by the insect brain

    The capacity to learn abstract concepts such as 'sameness' and 'difference' is considered a higher-order cognitive function, typically thought to depend on top-down neocortical processing. It is therefore surprising that honey bees apparently have this capacity. Here we report a model of the structures of the honey bee brain that can learn sameness and difference, as well as a range of complex and simple associative learning tasks. Our model is constrained by the known connections and properties of the mushroom body, including the protocerebral tract, and provides a good fit to the learning rates and performances of real bees in all tasks, including learning sameness and difference. The model proposes a novel mechanism for learning the abstract concepts of 'sameness' and 'difference' that is compatible with the insect brain and is not dependent on top-down or executive control processing.

    A biophysical model of endocannabinoid-mediated short term depression in hippocampal inhibition

    Memories are believed to be represented in the synaptic pathways of vastly interconnected networks of neurons. The plasticity of synapses, that is, their strengthening and weakening depending on neuronal activity, is believed to be the basis of learning and of establishing memories. An increasing number of studies indicate that endocannabinoids have a widespread action on brain function through modulation of synaptic transmission and plasticity. Recent experimental studies have characterised the role of endocannabinoids in mediating both short- and long-term synaptic plasticity in various brain regions including the hippocampus, a brain region strongly associated with cognitive functions such as learning and memory. Here, we present a biophysically plausible model of cannabinoid retrograde signalling at the synaptic level and investigate how this signalling mediates depolarisation-induced suppression of inhibition (DSI), a prominent form of short-term synaptic depression of inhibitory transmission in the hippocampus. The model successfully captures many of the key characteristics of DSI in the hippocampus, as observed experimentally, with a minimal yet sufficient mathematical description of the major signalling molecules and cascades involved. More specifically, this model serves as a framework to test hypotheses on the factors determining the variability of DSI and to investigate under which conditions it can be evoked. The model reveals the frequency and duration bands in which the post-synaptic cell can be sufficiently stimulated to elicit DSI. Moreover, the model provides key insights into how the state of the inhibitory cell modulates DSI according to its firing rate and its relative timing to the post-synaptic activation. Thus, it provides concrete suggestions for further experimental investigation of how DSI modulates and is modulated by neuronal activity in the brain. Importantly, this model serves as a stepping stone for future deciphering of the role of endocannabinoids in synaptic transmission as a feedback mechanism at both the synaptic and network levels.
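
The core retrograde loop of DSI can be cartooned with two variables: a messenger concentration driven by postsynaptic depolarisation, and an inhibitory synaptic strength suppressed by that messenger (the time constants, suppression curve, and stimulus below are illustrative, not the paper's cascade):

```python
import numpy as np

# DSI cartoon: a 1 s depolarising step releases a retrograde messenger
# e(t), which transiently suppresses inhibitory strength g(t); inhibition
# recovers over seconds as e is cleared. All constants are illustrative.
dt, T = 0.01, 10.0
t = np.arange(0.0, T, dt)
e = np.zeros_like(t)          # retrograde messenger concentration
g = np.ones_like(t)           # normalised inhibitory synaptic strength
for i in range(1, len(t)):
    drive = 1.0 if 1.0 <= t[i] < 2.0 else 0.0   # postsynaptic step
    e[i] = e[i - 1] + dt * (drive - e[i - 1] / 1.5)  # release & clearance
    g[i] = 1.0 / (1.0 + 4.0 * e[i])                  # suppression of IPSCs
print(round(g.min(), 2))      # peak suppression just after the step
print(round(g[-1], 2))        # inhibition recovers towards 1.0
```

Even this two-variable loop reproduces the qualitative signature of DSI (a transient dip in inhibition locked to postsynaptic activity); the paper's model adds the biophysical intermediates needed to make quantitative predictions.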