
    Scaling of a large-scale simulation of synchronous slow-wave and asynchronous awake-like activity of a cortical model with long-range interconnections

    Cortical synapse organization supports a range of dynamic states on multiple spatial and temporal scales, from synchronous slow wave activity (SWA), characteristic of deep sleep or anesthesia, to fluctuating, asynchronous activity during wakefulness (AW). Such dynamic diversity poses a challenge for producing efficient large-scale simulations that embody realistic representations of short- and long-range synaptic connectivity. Indeed, during SWA and AW, different spatial extents of the cortical tissue are active in a given timespan and at different firing rates, which implies a wide variety of local computation and communication loads. A balanced evaluation of simulation performance and robustness should therefore include tests across a variety of cortical dynamic states. Here, we demonstrate performance scaling of our proprietary Distributed and Plastic Spiking Neural Networks (DPSNN) simulation engine in both SWA and AW for bidimensional grids of neural populations, which reflect the modular organization of the cortex. We explored networks of up to 192×192 modules, each composed of 1250 integrate-and-fire neurons with spike-frequency adaptation, with exponentially decaying inter-modular synaptic connectivity of varying spatial decay constant. For the largest networks, the total number of synapses exceeded 70 billion. The execution platform included up to 64 dual-socket nodes, each socket mounting an 8-core Intel Xeon Haswell processor clocked at 2.40 GHz. Network initialization time, memory usage, and execution time showed good scaling from 1 to 1024 processes, implemented using the standard Message Passing Interface (MPI) protocol. We achieved simulation speeds between 2.3×10^9 and 4.1×10^9 synaptic events per second for both cortical states across the explored range of inter-modular interconnections. (22 pages, 9 figures, 4 tables)
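
    A quick back-of-the-envelope check (plain Python; every input below is quoted in the abstract, and the synapses-per-neuron figure is derived from them) makes the reported scale concrete:

        modules = 192 * 192               # largest bidimensional grid explored
        neurons_per_module = 1250         # integrate-and-fire neurons per module
        neurons = modules * neurons_per_module
        print(f"neurons: {neurons:,}")    # 46,080,000

        synapses = 70e9                   # "over 70 billion" total synapses
        print(f"synapses per neuron: {synapses / neurons:,.0f}")   # ~1,519

        # Reported throughput range across the SWA and AW states:
        low, high = 2.3e9, 4.1e9
        print(f"{low:.1e} .. {high:.1e} synaptic events per second")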

    Excitatory, Inhibitory, and Structural Plasticity Produce Correlated Connectivity in Random Networks Trained to Solve Paired-Stimulus Tasks

    The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering – cells in cliques connect to each other with high probability, but with lower probability to cells in other spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs, to investigate which functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity removes connections between uncorrelated excitatory cells and replaces them at random. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find that networks producing neurons most responsive to specific paired inputs – a building block of computation and an essential cortical function – contain the excess clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks' ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances the clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective, and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli, and in networks undergoing raised levels of spontaneous activity, when structural plasticity is combined with functional plasticity. In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
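
    A minimal sketch of the structural-plasticity step described above, assuming a rate-correlation criterion for "uncorrelated" cells (the network size, threshold, and rewiring rate are illustrative placeholders, not values from the study):

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200                                        # excitatory cells (illustrative)
        W = (rng.random((N, N)) < 0.1).astype(float)   # sparse random connectivity
        np.fill_diagonal(W, 0)

        def structural_step(W, rates, corr_threshold=0.1, n_rewire=50):
            """Remove connections between uncorrelated cells; regrow at random."""
            corr = np.corrcoef(rates)                  # pairwise rate correlations
            pre, post = np.nonzero(W)                  # existing connections
            weak = np.flatnonzero(corr[pre, post] < corr_threshold)
            for k in rng.permutation(weak)[:n_rewire]:
                W[pre[k], post[k]] = 0                 # prune an uncorrelated pair
                i, j = rng.integers(N, size=2)         # random replacement site
                if i != j:
                    W[i, j] = 1
            return W

        rates = rng.random((N, 1000))                  # stand-in activity traces
        W = structural_step(W, rates)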

    Regulation of circuit organization and function through inhibitory synaptic plasticity

    Diverse inhibitory neurons in the mammalian brain shape circuit connectivity and dynamics through mechanisms of synaptic plasticity. Inhibitory plasticity can establish excitation/inhibition (E/I) balance, control neuronal firing, and affect local calcium concentration, hence regulating neuronal activity at the network, single-neuron, and dendritic levels. Computational models can synthesize multiple experimental results and provide insight into how inhibitory plasticity controls circuit dynamics and sculpts connectivity by identifying phenomenological learning rules amenable to mathematical analysis. We highlight recent studies on the role of inhibitory plasticity in modulating excitatory plasticity, forming structured networks underlying memory formation and recall, and implementing adaptive phenomena and novelty detection. We conclude by reviewing experimental and modeling progress on the role of interneuron-specific plasticity in circuit computation and context-dependent learning.
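
    One phenomenological learning rule of the kind such models analyze, in the spirit of homeostatic inhibitory plasticity rules that establish E/I balance by steering the postsynaptic rate toward a target (all parameter values below are illustrative assumptions, not taken from the review):

        ETA = 1e-3     # learning rate (assumed)
        RHO0 = 5.0     # target postsynaptic rate in Hz (assumed)

        def inhibitory_update(w, pre_rate, post_rate):
            """Potentiate inhibition when the postsynaptic cell fires above
            target, depress it when below: dw = eta * pre * (post - target)."""
            return w + ETA * pre_rate * (post_rate - RHO0)

        w = 0.5
        for post_rate in (20.0, 10.0, 5.0, 2.0):   # rate relaxing toward target
            w = inhibitory_update(w, pre_rate=8.0, post_rate=post_rate)
            print(f"post = {post_rate:4.1f} Hz -> w_inh = {w:.4f}")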

    A Computational Investigation of Neural Dynamics and Network Structure

    With the overall goal of illuminating the relationship between neural dynamics and neural network structure, this thesis presents a) a computer model of a network infrastructure capable of global broadcast and competition, and b) a study of various convergence properties of spike-timing dependent plasticity (STDP) in a recurrent neural network. The first part of the thesis explores the parameter space of a possible Global Neuronal Workspace (GNW) realised in a novel computational network model using stochastic connectivity. The structure of this model is analysed in light of the characteristic dynamics of a GNW: broadcast, reverberation, and competition. It is found that even with careful consideration of the balance between excitation and inhibition, the structural choices do not permit agreement with the GNW dynamics, and the implications of this are addressed. An additional level of competition – access competition – is added, discussed, and found to be more conducive to winner-takes-all competition. The second part of the thesis investigates the formation of synaptic structure due to neural and synaptic dynamics. From previous theoretical and modelling work, it is predicted that homogeneous stimulation in a recurrent neural network with STDP will create a self-stabilising equilibrium amongst synaptic weights, while heterogeneous stimulation will induce structured synaptic changes. A new factor in modulating the synaptic weight equilibrium is suggested from the experimental evidence presented: anti-correlation due to inhibitory neurons. It is observed that the synaptic equilibrium creates competition amongst synapses, and those specifically stimulated during heterogeneous stimulation win out. Further investigation is carried out in order to assess the effect that more complex STDP rules would have on synaptic dynamics, varying parameters of a trace STDP model. There is little qualitative effect on synaptic dynamics under low-frequency (< 25 Hz) conditions, justifying the use of simple STDP until further experimental or theoretical evidence suggests otherwise.
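
    A minimal pair-based trace STDP model of the kind varied in the second part of the thesis (the time constants and amplitudes below are generic textbook values, not the thesis's parameters):

        TAU = 20.0                        # trace time constant (ms)
        A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes

        def run_stdp(pre_spikes, post_spikes, T, w=0.5, dt=1.0):
            x = y = 0.0                   # presynaptic / postsynaptic traces
            for t in range(T):
                x -= dt * x / TAU
                y -= dt * y / TAU
                if t in pre_spikes:       # pre spike: depress by post trace
                    w -= A_MINUS * y
                    x += 1.0
                if t in post_spikes:      # post spike: potentiate by pre trace
                    w += A_PLUS * x
                    y += 1.0
            return w

        # pre leads post by 5 ms on each pairing -> net potentiation
        print(run_stdp(pre_spikes={10, 60}, post_spikes={15, 65}, T=100))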

    Balanced neural architecture and the idling brain

    A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in the spontaneous state and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models produce spike trains which lack the long timescale fluctuations and large variability exhibited during spontaneous cortical dynamics. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. Taken together, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks.
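
    A sketch of the clustered-assembly architecture referred to above: excitatory cells are grouped into assemblies with a higher connection probability inside a cluster than between clusters (the sizes and probabilities below are illustrative, not the paper's values):

        import numpy as np

        rng = np.random.default_rng(1)
        n_clusters, cells_per_cluster = 10, 80
        N = n_clusters * cells_per_cluster
        p_in, p_out = 0.5, 0.1            # within- vs. between-cluster probability

        cluster = np.repeat(np.arange(n_clusters), cells_per_cluster)
        same = cluster[:, None] == cluster[None, :]
        W = rng.random((N, N)) < np.where(same, p_in, p_out)
        np.fill_diagonal(W, False)

        print(f"realized p within clusters:  {W[same].mean():.2f}")
        print(f"realized p between clusters: {W[~same].mean():.2f}")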

    Synaptic Plasticity and Hebbian Cell Assemblies

    Synaptic dynamics are critical to the function of neuronal circuits on multiple timescales. In the first part of this dissertation, I tested the roles of action potential timing and NMDA receptor composition in long-term modifications to synaptic efficacy. In a computational model I showed that the dynamics of the postsynaptic [Ca2+] time course can be used to map the timing of pre- and postsynaptic action potentials onto experimentally observed changes in synaptic strength. Using dual patch-clamp recordings from cultured hippocampal neurons, I found that NMDAR subtypes can map combinations of pre- and postsynaptic action potentials onto either long-term potentiation (LTP) or depression (LTD). LTP and LTD could even be evoked by the same stimuli, and in such cases the plasticity outcome was determined by the availability of NMDAR subtypes. The expression of LTD was increasingly presynaptic as synaptic connections became more developed. Finally, I found that spike-timing-dependent potentiability is history-dependent, with a non-linear relationship to the number of pre- and postsynaptic action potentials. After LTP induction, subsequent potentiability recovered on a timescale of minutes, and was dependent on the duration of the previous induction. While activity-dependent plasticity is putatively involved in circuit development, I found that it was not required to produce small networks capable of exhibiting rhythmic persistent activity patterns called reverberations. However, positive synaptic scaling produced by network inactivity yielded increased quantal synaptic amplitudes, connectivity, and potentiability, all favoring reverberation. These data suggest that chronic inactivity upregulates synaptic efficacy both by quantal amplification and by the addition of silent synapses, the latter of which are rapidly activated by reverberation. Reverberation in previously inactivated networks also resulted in activity-dependent outbreaks of spontaneous network activity. Applying a model of short-term synaptic dynamics to the network level, I argue that these experimental observations can be explained by the interaction between presynaptic calcium dynamics and short-term synaptic depression on multiple timescales. Together, the experiments and modeling indicate that ongoing activity, synaptic scaling and metaplasticity are required to endow networks with a level of synaptic connectivity and potentiability that supports stimulus-evoked persistent activity patterns but avoids spontaneous activity.
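
    A sketch of the calcium-control idea in the first part: the peak of the postsynaptic [Ca2+] transient, which depends on spike timing, selects the sign of plasticity. The thresholds and peak values below are illustrative stand-ins, not fitted values from the dissertation:

        THETA_D = 0.35   # LTD threshold, arbitrary calcium units (assumed)
        THETA_P = 0.55   # LTP threshold (assumed)

        def plasticity_sign(ca_peak):
            """Map a peak postsynaptic calcium level onto LTP / LTD / no change."""
            if ca_peak >= THETA_P:
                return "LTP"
            if ca_peak >= THETA_D:
                return "LTD"
            return "no change"

        # Causal (pre-before-post) pairings drive larger NMDAR-mediated calcium
        # influx than anti-causal ones, so timing maps onto calcium, and calcium
        # onto the plasticity outcome (peak values are made up for illustration):
        for label, ca in [("pre->post, +10 ms", 0.70),
                          ("post->pre, -10 ms", 0.45),
                          ("uncorrelated", 0.20)]:
            print(f"{label:18s} peak [Ca2+] = {ca:.2f} -> {plasticity_sign(ca)}")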

    The influence of dopamine on prediction, action and learning

    In this thesis I explore functions of the neuromodulator dopamine in the context of autonomous learning and behaviour. I first investigate dopaminergic influence within a simulated agent-based model, demonstrating how modulation of synaptic plasticity can enable reward-mediated learning that is both adaptive and self-limiting. I describe how this mechanism is driven by the dynamics of agent-environment interaction and consequently suggest roles for both complex spontaneous neuronal activity and specific neuroanatomy in the expression of early, exploratory behaviour. I then show how the observed response of dopamine neurons in the mammalian basal ganglia may also be modelled by similar processes involving dopaminergic neuromodulation and cortical spike-pattern representation within an architecture of counteracting excitatory and inhibitory neural pathways, reflecting gross mammalian neuroanatomy. Significantly, I demonstrate how combined modulation of synaptic plasticity and neuronal excitability enables specific (timely) spike-patterns to be recognised and selectively responded to by efferent neural populations, therefore providing a novel spike-timing based implementation of the hypothetical ‘serial-compound’ representation suggested by temporal difference learning. I subsequently discuss more recent work, focused on modelling those complex spike-patterns observed in cortex. Here, I describe neural features likely to contribute to the expression of such activity and subsequently present novel simulation software allowing for interactive exploration of these factors, in a more comprehensive neural model that implements both dynamical synapses and dopaminergic neuromodulation. I conclude by describing how the work presented ultimately suggests an integrated theory of autonomous learning, in which direct coupling of agent and environment supports a predictive coding mechanism, bootstrapped in early development by a more fundamental process of trial-and-error learning.
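
    A minimal sketch of reward-mediated learning through dopamine-modulated synaptic plasticity, in the spirit of the mechanism described above: an STDP-driven eligibility trace is consolidated into a lasting weight change only when a dopamine signal arrives (the time constants and gains are illustrative assumptions, not the thesis's values):

        TAU_ELIG = 500.0    # eligibility-trace time constant in ms (assumed)

        def step(w, elig, stdp_event, dopamine, dt=1.0):
            """STDP tags the synapse; dopamine gates consolidation of the tag."""
            elig += stdp_event - dt * elig / TAU_ELIG
            w += dopamine * elig
            return w, elig

        w, elig = 0.5, 0.0
        for t in range(1000):
            stdp_event = 0.01 if t == 100 else 0.0   # causal pairing at t = 100 ms
            dopamine = 0.5 if t == 400 else 0.0      # delayed reward at t = 400 ms
            w, elig = step(w, elig, stdp_event, dopamine)
        print(f"final weight: {w:.4f}")              # > 0.5: the pairing was rewarded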

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics, including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant Nos. NSF/EIA-0130708 and PHY 0414174; NIH Grant Nos. 1 R01 NS50945 and NS40110; MEC Grant No. BFI2003-07276; and the Fundación BBVA.
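
    As a concrete instance of the dynamical modeling surveyed here, a minimal Euler integration of the FitzHugh-Nagumo equations (a classic two-variable reduction of neuronal excitability; parameter values are standard textbook choices, unrelated to this review) produces the kind of rhythmic behavior such models explain:

        A, B, EPS, I = 0.7, 0.8, 0.08, 0.5   # classic FitzHugh-Nagumo parameters
        dt, steps = 0.1, 5000                # Euler step size and step count

        v, w = -1.0, 1.0
        crossings = 0
        for _ in range(steps):
            dv = v - v**3 / 3 - w + I        # fast, voltage-like variable
            dw = EPS * (v + A - B * w)       # slow recovery variable
            v_new = v + dt * dv
            if v <= 1.0 < v_new:             # count upward threshold crossings
                crossings += 1
            v, w = v_new, w + dt * dw

        print(f"oscillation cycles in {steps * dt:.0f} time units: {crossings}")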