
    Dopaminergic Regulation of Neuronal Circuits in Prefrontal Cortex

    Neuromodulators such as dopamine have considerable influence on the processing capabilities of neural networks. This has for instance been shown in the working memory functions of prefrontal cortex, which may be regulated by altering the dopamine level. Experimental work provides evidence on the biochemical and electrophysiological actions of dopamine receptors, but there are few theories concerning their significance for computational properties (Servan-Schreiber, Printz & Cohen 1990; Hasselmo 1994). We point to experimental data on neuromodulatory regulation of temporal properties of excitatory neurons and depolarization of inhibitory neurons, and suggest computational models employing these effects. Changes in membrane potential may be modelled by the firing threshold, and temporal properties by a parameterization of neuronal responsiveness according to the preceding spike interval. We apply these concepts to two examples using spiking neural networks. In the first case, a change in the input synchronization of neuronal groups leads to changes in the formation of synchronized neuronal ensembles. In the second case, the threshold of interneurons influences lateral inhibition and the switch from a winner-take-all network to a parallel feedforward mode of processing. Both concepts are interesting for the modeling of cognitive functions and may have explanatory power for behavioral changes associated with dopamine regulation.
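
    The threshold mechanism lends itself to a compact illustration. The sketch below is a minimal leaky integrate-and-fire simulation in Python, not the authors' model; all parameter values and the function name lif_spike_count are hypothetical. Lowering the firing threshold stands in for dopamine-induced depolarization of inhibitory interneurons.

        import numpy as np

        def lif_spike_count(threshold_mv, t_ms=500.0, dt=0.1, tau=20.0,
                            v_rest=-70.0, v_reset=-75.0, drive_mv=22.0):
            """Count spikes of a leaky integrate-and-fire neuron whose
            firing threshold is treated as a neuromodulated parameter."""
            v = v_rest
            spikes = 0
            for _ in range(int(t_ms / dt)):
                v += dt / tau * (-(v - v_rest) + drive_mv)
                if v >= threshold_mv:
                    spikes += 1
                    v = v_reset
            return spikes

        # The same drive produces stronger firing when the threshold is
        # lowered, as in the hypothesized high-dopamine condition.
        print(lif_spike_count(threshold_mv=-54.0))  # baseline
        print(lif_spike_count(threshold_mv=-58.0))  # depolarized interneuron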

    Feed-Forward Propagation of Temporal and Rate Information between Cortical Populations during Coherent Activation in Engineered In Vitro Networks.

    Transient propagation of information across neuronal assemblies is thought to underlie many cognitive processes. However, the nature of the neural code that is embedded within these transmissions remains uncertain. Much of our understanding of how information is transmitted among these assemblies has been derived from computational models. While these models have been instrumental in understanding these processes, they often make simplifying assumptions about the biophysical properties of neurons that may influence the nature of the code they express. To address this issue we created an in vitro analog of a feed-forward network composed of two small populations (also referred to as assemblies or layers) of living dissociated rat cortical neurons. The populations were separated by, and communicated through, a microelectromechanical systems (MEMS) device containing a strip of microscale tunnels. Culturing the first population a few days before the second induced the unidirectional growth of axons through the microtunnels, resulting in primarily feed-forward communication between these two small neural populations. In this study we systematically manipulated the number of tunnels that connected each layer and hence the number of axons providing communication between those populations. We then assessed the effect that reducing the number of tunnels had upon between-layer communication capacity and the fidelity of neural transmission among spike trains transmitted across and within layers. We show evidence, based on the Victor-Purpura and van Rossum spike train similarity metrics, supporting the presence of both rate and temporal information embedded within these transmissions, whose fidelity increased during communication both between and within layers as the number of tunnels was increased. We also provide evidence reinforcing the role of synchronized activity in transmission fidelity during the spontaneous synchronized network burst events that propagated between layers, and highlight the potential of these MEMS devices as a tool for further investigation of structural and functional dynamics among neural populations.
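
    Of the two spike-train similarity metrics named above, the van Rossum distance is simple to state in code: convolve each train with a causal exponential kernel and take the L2 norm of the difference. The Python sketch below is a discretized version under assumed units (milliseconds) and an assumed time constant; it is illustrative, not the paper's analysis code.

        import numpy as np

        def van_rossum_distance(train_a, train_b, tau=10.0, dt=0.1, t_max=1000.0):
            """Discretized van Rossum distance between two spike trains
            (spike times in ms): exponential filtering plus an L2 norm."""
            t = np.arange(0.0, t_max, dt)
            kernel = np.exp(-np.arange(0.0, 5 * tau, dt) / tau)

            def filtered(train):
                s = np.zeros_like(t)
                s[(np.asarray(train) / dt).astype(int)] = 1.0
                return np.convolve(s, kernel)[: len(t)]

            diff = filtered(train_a) - filtered(train_b)
            return np.sqrt(np.sum(diff ** 2) * dt / tau)

        # Identical trains give 0; jittering one spike gives a small distance.
        print(van_rossum_distance([100.0, 300.0], [100.0, 300.0]))
        print(van_rossum_distance([100.0, 300.0], [105.0, 300.0]))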

    Coordinated neuronal ensembles in primary auditory cortical columns.

    The synchronous activity of groups of neurons is increasingly thought to be important in cortical information processing and transmission. However, most studies of processing in the primary auditory cortex (AI) have viewed neurons as independent filters; little is known about how coordinated AI neuronal activity is expressed throughout cortical columns and how it might enhance the processing of auditory information. To address this, we recorded from populations of neurons in AI cortical columns of anesthetized rats and, using dimensionality reduction techniques, identified multiple coordinated neuronal ensembles (cNEs), which are groups of neurons with reliable synchronous activity. We show that cNEs reflect local network configurations with enhanced information encoding properties that cannot be accounted for by stimulus-driven synchronization alone. Furthermore, similar cNEs were identified in both spontaneous and evoked activity, indicating that columnar cNEs are stable functional constructs that may represent principal units of information processing in AI.
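
    The abstract does not specify the dimensionality reduction pipeline, but a common way to detect such ensembles is principal component analysis of the z-scored spike count matrix, keeping components whose eigenvalues exceed the Marchenko-Pastur bound. The Python sketch below illustrates that generic approach and is not necessarily the authors' method.

        import numpy as np

        def detect_ensembles(spike_matrix):
            """Candidate ensembles from a (neurons x time bins) count matrix:
            z-score, eigendecompose the correlation matrix, and keep
            eigenvectors whose eigenvalue exceeds the Marchenko-Pastur bound."""
            n_neurons, n_bins = spike_matrix.shape
            z = spike_matrix - spike_matrix.mean(1, keepdims=True)
            z = z / spike_matrix.std(1, keepdims=True)
            eigvals, eigvecs = np.linalg.eigh(z @ z.T / n_bins)
            lambda_max = (1.0 + np.sqrt(n_neurons / n_bins)) ** 2
            return eigvecs[:, eigvals > lambda_max]  # one column per ensemble

        # Toy data: 30 Poisson neurons, 5 of which share an extra common input.
        rng = np.random.default_rng(0)
        spikes = rng.poisson(1.0, size=(30, 5000)).astype(float)
        spikes[:5] += rng.poisson(2.0, size=(1, 5000))
        print(detect_ensembles(spikes).shape)  # expect one significant component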

    Topological exploration of artificial neuronal network dynamics

    One of the paramount challenges in neuroscience is to understand the dynamics of individual neurons and how they give rise to network dynamics when interconnected. Historically, researchers have resorted to graph theory, statistics, and statistical mechanics to describe the spatiotemporal structure of such network dynamics. Our novel approach employs tools from algebraic topology to characterize the global properties of network structure and dynamics. We propose a method based on persistent homology to automatically classify network dynamics using topological features of spaces built from various spike-train distances. We investigate the efficacy of our method by simulating activity in three small artificial neural networks with different sets of parameters, giving rise to dynamics that can be classified into four regimes. We then compute three measures of spike train similarity and use persistent homology to extract topological features that are fundamentally different from those used in traditional methods. Our results show that a machine learning classifier trained on these features can accurately predict the regime of the network it was trained on and also generalize to other networks that were not presented during training. Moreover, we demonstrate that using features extracted from multiple spike-train distances systematically improves the performance of our method.
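
    A minimal version of this pipeline fits in a few lines once a pairwise spike-train distance matrix is available. The Python sketch below assumes the ripser package for Vietoris-Rips persistent homology and summarizes each diagram by total persistence; the actual feature set and classifier used in the paper may differ.

        import numpy as np
        from ripser import ripser  # pip install ripser

        def persistence_features(dist_matrix, maxdim=1):
            """Summarize the persistence diagrams of a point cloud given by a
            pairwise distance matrix: total lifetime per homology dimension."""
            dgms = ripser(dist_matrix, distance_matrix=True, maxdim=maxdim)["dgms"]
            feats = []
            for dgm in dgms:
                finite = dgm[np.isfinite(dgm[:, 1])]
                feats.append(np.sum(finite[:, 1] - finite[:, 0]))
            return np.array(feats)

        # Toy input: a random symmetric distance matrix over 20 spike trains;
        # in practice this would come from e.g. a van Rossum distance.
        rng = np.random.default_rng(1)
        d = rng.random((20, 20))
        d = (d + d.T) / 2.0
        np.fill_diagonal(d, 0.0)
        print(persistence_features(d))  # features to feed into a classifier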

    Cortical Spike Synchrony as a Measure of Input Familiarity

    J.G.O. was supported by the Ministerio de Economía y Competitividad and FEDER (Spain, project FIS2015-66503-C3-1-P) and the ICREA Academia programme. E.U. acknowledges support from the Scottish Universities Life Sciences Alliance (SULSA) and HPC-Europa2.

    Weak pairwise correlations imply strongly correlated network states in a neural population

    Biological networks have so many possible states that exhaustive sampling is impossible. Successful analysis thus depends on simplifying hypotheses, but experiments on many systems hint that complicated, higher order interactions among large groups of elements play an important role. In the vertebrate retina, we show that weak correlations between pairs of neurons coexist with strongly collective behavior in the responses of ten or more neurons. Surprisingly, we find that this collective behavior is described quantitatively by models that capture the observed pairwise correlations but assume no higher order interactions. These maximum entropy models are equivalent to Ising models, and predict that larger networks are completely dominated by correlation effects. This suggests that the neural code has associative or error-correcting properties, and we provide preliminary evidence for such behavior. As a first test for the generality of these ideas, we show that similar results are obtained from networks of cultured cortical neurons. Comment: Full account of work presented at the conference on Computational and Systems Neuroscience (COSYNE), 17-20 March 2005, in Salt Lake City, Utah (http://cosyne.org).
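
    For a small group of neurons, the core comparison can be made exact by enumerating all binary activity patterns. The Python sketch below fits a pairwise maximum entropy (Ising) model by matching first and second moments with exact gradients; it is a toy illustration of the model class, not the paper's fitting procedure, and only scales to roughly a dozen neurons.

        import numpy as np
        from itertools import product

        def fit_ising(data, n_iter=2000, lr=0.1):
            """Fit P(s) ~ exp(h.s + s.J.s/2) over binary patterns s by
            matching the observed means and pairwise correlations, using
            exact enumeration of all 2^n states (small n only)."""
            n = data.shape[1]
            states = np.array(list(product([0, 1], repeat=n)), dtype=float)
            mean_obs = data.mean(0)
            corr_obs = data.T @ data / len(data)
            h, J = np.zeros(n), np.zeros((n, n))
            for _ in range(n_iter):
                e = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
                p = np.exp(e - e.max())
                p /= p.sum()
                h += lr * (mean_obs - p @ states)
                dJ = corr_obs - states.T @ (states * p[:, None])
                np.fill_diagonal(dJ, 0.0)
                J += lr * (dJ + dJ.T) / 2.0
            return states, p

        # Stand-in "spike words" from 5 neurons; real data would come from
        # binned multi-neuron recordings.
        rng = np.random.default_rng(2)
        data = (rng.random((10000, 5)) < 0.3).astype(float)
        states, p_model = fit_ising(data)
        print(p_model[:4])  # model probabilities of the first few patterns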

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We overview different simulators and simulation environments presently available (restricted to those freely available, open source and documented). For each simulation tool, its advantages and pitfalls are reviewed, with an aim to allow the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley type and integrate-and-fire models, interacting with current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the code is made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool to use for a given modeling problem related to spiking neural networks. Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007).
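
    As a concrete reference point for the benchmarks discussed, the Python sketch below implements the simplest case: a clock-driven simulation of a current-based integrate-and-fire network on a fixed time grid. All parameters are hypothetical, and the benchmark networks in the review are considerably more detailed.

        import numpy as np

        def simulate_lif_network(n=100, t_ms=200.0, dt=0.1, tau=20.0,
                                 v_rest=-65.0, v_thresh=-50.0, v_reset=-60.0,
                                 w=0.5, p_conn=0.1, drive_mv=16.0, seed=0):
            """Clock-driven LIF network: membrane potentials are updated on a
            fixed grid, and current-based synapses act instantaneously."""
            rng = np.random.default_rng(seed)
            W = w * (rng.random((n, n)) < p_conn)
            v = v_rest + 5.0 * rng.random(n)
            spike_times = []
            for step in range(int(t_ms / dt)):
                v += dt / tau * (-(v - v_rest) + drive_mv)
                spiked = v >= v_thresh
                v[spiked] = v_reset
                v += W[:, spiked].sum(1)  # deliver spikes to all targets
                spike_times += [(step * dt, i) for i in np.flatnonzero(spiked)]
            return spike_times

        print(len(simulate_lif_network()))  # total number of spikes in the run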

    Characterization and Compensation of Network-Level Anomalies in Mixed-Signal Neuromorphic Modeling Platforms

    Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, specifically limited hardware resources, limited parameter configurability, and parameter variations. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
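
    To make the idea of a generic compensation mechanism concrete, the Python sketch below emulates two typical hardware distortions, weight quantization and multiplicative fixed-pattern variation, and then rescales each neuron's incoming weights to restore its summed input drive. This is a generic illustration, not one of the specific BrainScaleS/ESS compensation methods from the paper.

        import numpy as np

        def apply_hardware_distortions(weights, n_bits=4, cv=0.2, seed=0):
            """Emulate limited weight resolution (n_bits quantization) and
            fixed-pattern variation (multiplicative Gaussian noise, std cv)."""
            rng = np.random.default_rng(seed)
            levels = 2 ** n_bits - 1
            w_max = np.abs(weights).max()
            quantized = np.round(weights / w_max * levels) / levels * w_max
            return quantized * rng.normal(1.0, cv, size=weights.shape)

        def compensate_mean_drive(w_target, w_distorted):
            """Rescale each row (a neuron's incoming weights) so the summed
            input drive matches the undistorted network."""
            scale = w_target.sum(1, keepdims=True) / w_distorted.sum(1, keepdims=True)
            return w_distorted * scale

        rng = np.random.default_rng(3)
        w = 0.1 * rng.random((50, 50))
        w_hw = apply_hardware_distortions(w)
        w_comp = compensate_mean_drive(w, w_hw)
        print(np.abs(w.sum(1) - w_comp.sum(1)).max())  # ~0 after compensation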