
    Effects of Random External Background Stimulation on Network Synaptic Stability After Tetanization: A Modeling Study

    We constructed a simulated spiking neural network model to investigate the effects of random background stimulation on the dynamics of network activity patterns and tetanus-induced network plasticity. The simulated model was a “leaky integrate-and-fire” (LIF) neural model with spike-timing-dependent plasticity (STDP) and frequency-dependent synaptic depression. Spontaneous and evoked activity patterns were compared with those of living neuronal networks cultured on multielectrode arrays. To help visualize activity patterns and plasticity in our simulated model, we introduced new population measures called Center of Activity (CA) and Center of Weights (CW) to describe the spatio-temporal dynamics of network-wide firing activity and network-wide synaptic strength, respectively. Without random background stimulation, the network synaptic weights were unstable and often drifted after tetanization. In contrast, with random background stimulation, the network synaptic weights remained close to their values immediately after tetanization. The simulation suggests that the effects of tetanization on network synaptic weights were difficult to control because of ongoing synchronized spontaneous bursts of action potentials, or “barrages.” Random background stimulation helped maintain network synaptic stability after tetanization by reducing the number, and thus the influence, of spontaneous barrages. We used our simulated network to model the interaction between ongoing neural activity, external stimulation, and plasticity, and to guide our choice of sensory-motor mappings for adaptive behavior in hybrid neural-robotic systems, or “hybrots.”
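
    A minimal sketch of how a population measure like the Center of Activity might be computed from binned firing rates; the 8x8 array layout, function name, and rate values are illustrative assumptions, not taken from the paper. The Center of Weights would be computed analogously, weighting positions by synaptic strength instead of firing rate.

```python
import numpy as np

def center_of_activity(rates, positions):
    """Firing-rate-weighted mean electrode position at one time bin.

    rates: (N,) instantaneous firing rates, one per recording site.
    positions: (N, 2) x/y coordinates of the sites on the array.
    Returns the 2-D Center of Activity; falls back to the array
    centroid when the network is silent.
    """
    total = rates.sum()
    if total == 0:
        return positions.mean(axis=0)
    return (rates[:, None] * positions).sum(axis=0) / total

# Example: hypothetical 8x8 multielectrode-array grid
xs, ys = np.meshgrid(np.arange(8), np.arange(8))
positions = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
rates = np.random.rand(64)  # stand-in for binned spike counts
ca = center_of_activity(rates, positions)
```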

    Neural networks with dynamical synapses: from mixed-mode oscillations and spindles to chaos

    Understanding short-term synaptic depression (STSD) and other forms of synaptic plasticity is a topical problem in neuroscience. Here we study the role of STSD in the formation of complex patterns of brain rhythms. We use a cortical circuit model of neural networks composed of irregularly spiking excitatory and inhibitory neurons with type 1 and type 2 excitability and stochastic dynamics. In the model, neurons form a sparsely connected network, and their spontaneous activity is driven by random spikes representing synaptic noise. Using simulations and analytical calculations, we found that in the absence of STSD the neural network shows either asynchronous behavior or regular network oscillations, depending on the noise level. In networks with STSD, by changing the parameters of synaptic plasticity and the noise level, we observed transitions to complex patterns of collective activity: mixed-mode and spindle oscillations, bursts of collective activity, and chaotic behavior. Interestingly, these patterns are stable over a certain range of the parameters and are separated by critical boundaries. Thus, the parameters of synaptic plasticity can act as control parameters, or switches, between different network states. However, changes of these parameters caused by disease may lead to dramatic impairment of ongoing neural activity. We analyze the chaotic neural activity using the 0-1 test for chaos (Gottwald & Melbourne, 2004) and show that it has a collective nature.
    Comment: 7 pages, Proceedings of 12th Granada Seminar, September 17-21, 201
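
    For readers unfamiliar with the 0-1 test for chaos (Gottwald & Melbourne, 2004) used above, a minimal sketch of its correlation variant follows; variable names and defaults are assumptions, and a practical application would average K over many random values of c:

```python
import numpy as np

def zero_one_test(phi, c=None, n_max=None, seed=0):
    """Gottwald-Melbourne 0-1 test for chaos on a scalar series phi.

    Returns K ~ 0 for regular dynamics and K ~ 1 for chaos.
    c is a random angle chosen away from resonances; n_max limits the
    displacement lag to a fraction of the series length.
    """
    rng = np.random.default_rng(seed)
    if c is None:
        c = rng.uniform(np.pi / 5, 4 * np.pi / 5)
    N = len(phi)
    if n_max is None:
        n_max = N // 10
    j = np.arange(1, N + 1)
    p = np.cumsum(phi * np.cos(j * c))  # translation variables
    q = np.cumsum(phi * np.sin(j * c))
    # Mean-square displacement of (p, q) as a function of lag n
    M = np.array([np.mean((p[n:] - p[:-n]) ** 2 + (q[n:] - q[:-n]) ** 2)
                  for n in range(1, n_max)])
    n = np.arange(1, n_max)
    # K: correlation between lag and displacement growth
    return np.corrcoef(n, M)[0, 1]
```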

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as drop-in replacements for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules matching neural and synaptic dynamics to the device capabilities can take better advantage of memristor dynamics and their stochasticity. Furthermore, such plasticity rules generally show much higher performance than classical Spike-Timing-Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation with memristor-based hardware.
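
    As a point of reference for the classical baseline the chapter compares against, here is a minimal sketch of a pair-based STDP weight update with exponential windows; all constants and the function signature are illustrative assumptions, not drawn from the chapter:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pair-based STDP update; dt = t_post - t_pre in ms.

    Pre-before-post (dt > 0) potentiates, post-before-pre depresses,
    with exponentially decaying windows. Weights are clipped to
    [0, w_max]; all constants are illustrative.
    """
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_plus)
    else:
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, 0.0, w_max))
```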

    Critical dynamics in homeostatic memory networks

    Critical behavior in neural networks, characterized by scale-free event distributions and brought about by self-regulatory mechanisms such as short-term synaptic dynamics or homeostatic plasticity, is believed to optimize sensitivity to input and information transfer in the system. Although theoretical predictions of the spike distributions have been confirmed by in-vitro experiments, in-vivo data yield a more complex picture, which might be due to inhomogeneity of the network structure, leak currents, or massive driving inputs, effects that have so far not been comprehensively covered by analytical or numerical studies.

We address these questions by studying a neural model of memory that allows for storage and retrieval of patterns and for recombining such patterns as needed for search in problem solving. The model features critical dynamics in the neural assembly as a result of the interplay of synaptic depression and facilitation (Levina et al., 2007, 2009). Model simulations show that the prolonged consolidation of memory patterns induces a bias towards the memories, which affects the scale-free spike-frequency distribution. However, selective modification of neuronal circuitry, in the form of controlled homeostatic recalibration of the synaptic weights towards the critical value, preserved criticality, although it was characterized by fluctuations between learned random patterns, as observed in the dynamics of stored pattern retrieval quality. The resulting spike statistics depend on the assumed coding scheme, but even sparse or orthogonal memory patterns introduce a typical event size which is incompatible with critical dynamics below the maximal memory capacity. Specifically, results obtained for de-correlated patterns show an immediate jump from the sub-critical regime to a state of super-criticality, in contrast to a more structured wave-like formation in the avalanche dynamics obtained from a general set of random patterns, pointing towards an eventual evolution of the network connectivity and the optimization of the critical regime (Pearlmutter and Houghton, 2009).
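
A minimal sketch of how avalanche sizes, and hence the scale-free event distributions discussed above, are typically extracted from binned population spike counts; the function name and binning convention are assumptions rather than the authors' method:

```python
import numpy as np

def avalanche_sizes(spike_counts):
    """Avalanche sizes from binned population spike counts.

    An avalanche is a maximal run of non-empty time bins bounded by
    empty bins; its size is the total spike count within the run.
    """
    sizes, current = [], 0
    for count in spike_counts:
        if count > 0:
            current += count
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.asarray(sizes)

# Rough criticality check: at criticality sizes follow ~ s^(-3/2),
# so the size distribution is approximately linear on log-log axes.
```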

The combination of memory and ongoing dynamics in the model was chosen for its implications in the context of cognitive aging. Following the paradigm of aging as a multi-criteria optimization process, we posit that aging effects result from an increasing incompatibility of learning goals. In aging, a shift from fluid intelligence (the flexibility to recombine memory content) towards crystallized intelligence (optimal memory organization) appears as a lifelong trend against the general decrease of resources. We show that at a young age, memory and criticality can be maintained simultaneously by a homeostatic leveling of the synaptic conductances. This balance is lost in the aging brain, where the memory attractors cannot be kept sufficiently shallow due to neural and synaptic loss and a reduction of activity, even as the number of stored memories grows. The value of the memory organization is thus protected at the cost of a partial loss of the capability to recombine memory patterns in a task-dependent way.

    A three-threshold learning rule approaches the maximal capacity of recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The Hopfield model, however, has a poor storage capacity compared with that achieved by perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
    Comment: 24 pages, 10 figures, to be published in PLOS Computational Biology
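
    A minimal sketch of one online step of the three-threshold rule as described in the abstract; the threshold values, learning rate, and function signature are illustrative assumptions:

```python
import numpy as np

def three_threshold_update(w, x, h, lr=0.01,
                           theta_low=-1.0, theta_mid=0.0, theta_high=1.0):
    """One online step of the three-threshold rule for a single neuron.

    w: (N,) incoming weights; x: (N,) binary presynaptic activity;
    h: local field (total synaptic input) of the neuron.
    No plasticity occurs when the field is above theta_high or below
    theta_low; in between, the weights of active inputs are potentiated
    when h > theta_mid and depressed when h < theta_mid. Only synapses
    with active inputs change, so the rule is purely local.
    """
    if theta_low < h < theta_high:
        if h > theta_mid:
            w = w + lr * x  # potentiation
        else:
            w = w - lr * x  # depression
    return w
```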

    Clique of functional hubs orchestrates population bursts in developmentally regulated neural networks

    It has recently been discovered that single-neuron stimulation can impact network dynamics in immature and adult neuronal circuits. Here we report a novel mechanism which can explain, in neuronal circuits at an early stage of development, the peculiar role played by a few specific neurons in promoting/arresting the population activity. For this purpose, we consider a standard neuronal network model, with short-term synaptic plasticity, whose population activity is characterized by bursting behavior. The addition of developmentally inspired constraints and correlations in the distribution of the neuronal connectivities and excitabilities leads to the emergence of functional hub neurons, whose stimulation/deletion is critical for the network activity. Functional hubs form a clique, where a precise sequential activation of the neurons is essential to ignite collective events without any need for a specific topological architecture. Unsupervised time-lagged firings of supra-threshold cells, in connection with coordinated entrainments of near-threshold neurons, are the key ingredients to orchestrate population bursts.
    Comment: 39 pages, 15 figures, to appear in PLOS Computational Biology
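
    The short-term synaptic plasticity in the network model above is commonly implemented with depression/facilitation resource dynamics in the style of Tsodyks and Markram; a minimal Euler-step sketch follows, with all time constants and the function signature illustrative rather than taken from the paper:

```python
def tsodyks_markram_step(x, u, dt, spike,
                         tau_rec=800.0, tau_fac=1000.0, U=0.2):
    """One Euler step of short-term plasticity resource dynamics.

    x: fraction of available synaptic resources (depression variable);
    u: utilization (facilitation variable); dt in ms; spike is a bool.
    Returns the updated (x, u) and the fraction released on a spike.
    """
    # Continuous recovery between spikes
    x += dt * (1.0 - x) / tau_rec
    u += dt * (U - u) / tau_fac
    release = 0.0
    if spike:
        u += U * (1.0 - u)  # facilitation jump on a presynaptic spike
        release = u * x     # transmitted fraction
        x -= release        # depression: resources are consumed
    return x, u, release
```

    At the network level, the release value would scale the postsynaptic current delivered by each presynaptic spike, so that repeated firing transiently weakens (or facilitates) the synapse.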