5 research outputs found

    Balanced neural architecture and the idling brain

    A signature feature of cortical spike trains is their trial-to-trial variability. This variability is large in the spontaneous state and is reduced when cortex is driven by a stimulus or task. Models of recurrent cortical networks with unstructured, yet balanced, excitation and inhibition generate variability consistent with evoked conditions. However, these models produce spike trains which lack the long timescale fluctuations and large variability exhibited during spontaneous cortical dynamics. We propose that global network architectures which support a large number of stable states (attractor networks) allow balanced networks to capture key features of neural variability in both spontaneous and evoked conditions. We illustrate this using balanced spiking networks with clustered assembly, feedforward chain, and ring structures. By assuming that global network structure is related to stimulus preference, we show that signal correlations are related to the magnitude of correlations in the spontaneous state. Finally, we contrast the impact of stimulation on the trial-to-trial variability in attractor networks with that of strongly coupled spiking networks with chaotic firing rate instabilities, recently investigated by Ostojic (2014). We find that only attractor networks replicate an experimentally observed stimulus-induced quenching of trial-to-trial variability. In total, the comparison of the trial-variable dynamics of single neurons or neuron pairs during spontaneous and evoked activity can be a window into the global structure of balanced cortical networks. © 2014 Doiron and Litwin-Kumar
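The stimulus-induced quenching of trial-to-trial variability discussed in this abstract is commonly quantified by the across-trial Fano factor of spike counts (variance divided by mean). A minimal sketch of that measure, using synthetic spike counts with purely illustrative parameters (all numbers and distributions here are assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spike counts: rows are trials, columns are neurons.
# Spontaneous state: super-Poisson variability (negative binomial,
# mean 5, variance 10, so Fano factor ~2; illustrative values only).
spont = rng.negative_binomial(n=5, p=0.5, size=(200, 50))
# Evoked state: roughly Poisson counts (Fano factor ~1) with a higher mean.
evoked = rng.poisson(lam=8.0, size=(200, 50))

def fano_factor(counts):
    """Across-trial variance of spike counts divided by the mean,
    averaged over neurons (ignoring neurons that never fire)."""
    mean = counts.mean(axis=0)
    var = counts.var(axis=0, ddof=1)
    active = mean > 0
    return float(np.mean(var[active] / mean[active]))

# Quenching of variability shows up as a drop in the Fano factor
# from the spontaneous to the evoked condition.
print(fano_factor(spont) > fano_factor(evoked))
```

In this sketch the quenching is built into the synthetic data; in the paper it emerges from the attractor network dynamics when a stimulus biases the network toward a single stable state.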

    From neuronal populations to behavior: a computational journey

    Cognitive behaviors originate in the responses of neuronal populations. We have a reasonable understanding of how the activity of a single neuron can be related to a specific behavior. However, it is still unclear how more complex behaviors are inferred from the responses of neuronal populations. This is a particularly timely problem because multi-neuronal recording techniques have recently become increasingly available, simultaneously spurring advances in the analysis of neuronal population data. These developments are, however, limited by the challenges of combining theoretical and experimental approaches, each of which comes with its own set of constraints. A solution to this problem is to design computational models that are either derived from or inspired by cortical computations.

    Neural assemblies as core elements for modeling neural networks in the brain

    How does the brain process and memorize information? We all know that the neuron (also known as a nerve cell) is the processing unit of the brain. But how do neurons work together in networks? The connectivity structure of neural networks plays an important role in information processing, so modeling that structure is worth investigating. Experiments yield different kinds of datasets (ranging from pair-wise connectivity to the membrane potentials of individual neurons) and provide an understanding of neuronal activity. However, due to the technical limitations of experiments and the complexity and variety of neural architectures, experimental datasets do not yield a model of neural networks on their own. Roughly speaking, the experimental datasets are not enough for modeling neural networks. Therefore, in addition to these datasets, we have to rely on assumptions, hand-tuned features, parameter tuning, and heuristic methods when modeling networks. In this thesis, we present different models of neural networks that are able to reproduce several behaviors observed in the mammalian brain and in cell cultures, e.g., up-state/down-state oscillations, the distinct stimulus-evoked responses of cortical layers, activity propagation with tunable speed, and several activity patterns of mouse barrel cortex. An element embedded in all of these models is a network feature called a neural assembly. A neural assembly is a group (also called a population) of neurons with dense recurrent connectivity and strong internal synaptic weights. We study the dynamics of neural assemblies using analytical approaches and computer simulations. We show that network models containing assemblies exhibit dynamics similar to activity observed in the brain.
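The defining property of a neural assembly given above, dense recurrent connectivity with strong internal weights, can be sketched as a block-structured random connectivity matrix. All sizes, probabilities, and weights below are hypothetical choices for illustration, not parameters from the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

n_assemblies = 4      # number of assemblies (hypothetical)
size = 25             # neurons per assembly (hypothetical)
n = n_assemblies * size

# Connection probability and synaptic weight: dense/strong within an
# assembly, sparse/weak between assemblies (illustrative values only).
p_in, p_out = 0.5, 0.1
w_in, w_out = 1.0, 0.2

labels = np.repeat(np.arange(n_assemblies), size)   # assembly id per neuron
same = labels[:, None] == labels[None, :]           # within-assembly mask

prob = np.where(same, p_in, p_out)
weight = np.where(same, w_in, w_out)
W = (rng.random((n, n)) < prob) * weight            # sampled connectivity
np.fill_diagonal(W, 0.0)                            # no self-connections

# Mean within-assembly coupling exceeds mean between-assembly coupling,
# which is what makes each assembly a candidate attractor state.
within = W[same].mean()
between = W[~same].mean()
print(within > between)
```

Embedding such a matrix in a balanced excitatory-inhibitory spiking network is what allows the clustered models described in these abstracts to sustain multiple stable activity states.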