1,081 research outputs found

    Spiking Dynamics during Perceptual Grouping in the Laminar Circuits of Visual Cortex

    Grouping of collinear boundary contours is a fundamental process during visual perception. Illusory contour completion vividly illustrates how stable perceptual boundaries interpolate between pairs of contour inducers, but do not extrapolate from a single inducer. Neural models have simulated how perceptual grouping occurs in laminar visual cortical circuits. These models predicted the existence of grouping cells that obey a bipole property whereby grouping can occur inwardly between pairs or greater numbers of similarly oriented and co-axial inducers, but not outwardly from individual inducers. These models have not, however, incorporated spiking dynamics. Perceptual grouping is a challenge for spiking cells because its properties of collinear facilitation and analog sensitivity to inducer configurations occur despite irregularities in spike timing across all the interacting cells. Other models have demonstrated spiking dynamics in laminar neocortical circuits, but not how perceptual grouping occurs. The current model begins to unify these two modeling streams by implementing a laminar cortical network of spiking cells whose intracellular temporal dynamics interact with recurrent intercellular spiking interactions to quantitatively simulate data from neurophysiological experiments about perceptual grouping, the structure of non-classical visual receptive fields, and gamma oscillations.

    CELEST, an NSF Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001); Defense Advanced Research Projects Agency (HR001-09-C-0011).
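
The bipole property described in this abstract can be illustrated at the rate level with a few lines of code. The sketch below is our own simplification, not the paper's spiking laminar model; the function name `bipole_output` and its threshold are hypothetical. It shows a grouping cell that responds only when both of its collinear receptive-field lobes are driven, while remaining analog-sensitive to the weaker inducer.

```python
import numpy as np

def bipole_output(left_drive, right_drive, theta=0.5):
    """Toy bipole rule: a grouping cell responds only when BOTH collinear lobes
    of its long-range receptive field receive supra-threshold input (inward
    grouping between inducers), never from one lobe alone (no outward
    extrapolation from a single inducer)."""
    both_active = (left_drive > theta) & (right_drive > theta)
    # Analog sensitivity: when grouping occurs, the output tracks the weaker lobe.
    return np.where(both_active, np.minimum(left_drive, right_drive), 0.0)

# Two strong inducers -> grouping; one inducer -> no illusory contour.
print(bipole_output(np.array([0.9, 0.9]), np.array([0.8, 0.1])))   # [0.8  0. ]
```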

    Computational study of resting state network dynamics

    The aim of this thesis is to show, through a simulation with the software The Virtual Brain, the most important properties of brain dynamics during the resting state, i.e., when one is not engaged in any specific task and is not exposed to any particular stimulus. We begin by explaining what the resting state is, through a brief historical review of its discovery, then survey some experimental methods used in the analysis of brain activity, and then highlight the difference between structural and functional connectivity. Next, we briefly summarize the concepts of dynamical systems, a theory indispensable for understanding a complex system such as the brain. In the following chapter, taking a 'bottom-up' approach, we describe the main structures of the nervous system from a biological standpoint, from the neuron to the cerebral cortex. All of this is also explained from the dynamical-systems point of view, presenting the pioneering Hodgkin-Huxley model and then the concept of population dynamics. After this preliminary part, we turn to the details of the simulation. First of all, we give more information about the software The Virtual Brain, define the resting-state network model used in the simulation, and describe the 'connectome' employed. We then present the results of the analysis carried out on the simulated data, which show how criticality and noise play a key role in the emergence of this background activity of the brain. These results are then compared with the most important recent research in this field, which confirms the findings of our work. Finally, we briefly report the consequences that a full understanding of the resting-state phenomenon, and the possibility of virtualizing brain activity, would have in the medical and clinical fields.
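
As a rough, self-contained stand-in for the kind of simulation this thesis performs with The Virtual Brain, the sketch below couples noise-driven linear nodes through a random structural connectome scaled to sit just below the critical coupling, and reads out a simulated functional connectivity matrix. The node model and every parameter are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt = 20, 20000, 1e-3                 # nodes, time steps, step size (s)

# Sparse random "structural connectome", normalized so the network sits just
# below the critical coupling (spectral radius scaled to 1).
SC = rng.random((N, N)) * (rng.random((N, N)) < 0.2)
np.fill_diagonal(SC, 0.0)
SC /= np.abs(np.linalg.eigvals(SC)).max()

g, tau, sigma = 0.9, 0.05, 0.05                # global coupling, node time constant, noise level
x = np.zeros(N)
trace = np.empty((steps, N))
for t in range(steps):
    # Linear stochastic node dynamics: leak + structural coupling + noise.
    x += dt * (-x + g * SC @ x) / tau + sigma * np.sqrt(dt) * rng.standard_normal(N)
    trace[t] = x

FC = np.corrcoef(trace.T)                      # simulated "functional connectivity"
print("mean off-diagonal FC:", FC[~np.eye(N, dtype=bool)].mean())
```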

    Neuronal Synchronization Can Control the Energy Efficiency of Inter-Spike Interval Coding

    The role of synchronous firing in sensory coding and cognition remains controversial. While studies focusing on its mechanistic consequences in attentional tasks suggest that synchronization dynamically boosts sensory processing, others have failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter to study how the postsynaptic neuron transmits the average firing activity of a presynaptic population in the presence of synchronization. We apply the Berger-Levy theory of energy-efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model, where we varied the firing rate and synchronization level in the presynaptic population independently. We find that for a fixed presynaptic firing rate the simulated postsynaptic interspike interval distribution depends on the synchronization level and is well described by a generalized extreme value distribution. For synchronization levels between 15% and 50%, we compute the presynaptic firing-rate distribution that maximizes the mutual information per unit cost, and find that this maximum is reached at a synchronization level of about 30%. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.

    Comment: 47 pages, 14 figures, 2 tables.
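
A stripped-down version of this pipeline can be sketched as follows, assuming a leaky integrate-and-fire readout instead of the paper's Hodgkin-Huxley model and an idealized presynaptic population in which a fraction `sync` of the pool fires in shared volleys; the generalized extreme value fit uses scipy.stats.genextreme, and all parameters are illustrative.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
dt, T = 1e-4, 60.0                       # time step and duration (s)
n_pre, rate, sync = 200, 20.0, 0.3       # presynaptic pool size, rate (Hz), synchronization level
steps = int(T / dt)

# Presynaptic drive: a fraction `sync` of the pool fires in population-wide
# volleys at `rate` Hz, the rest fire as independent Poisson processes.
indep = rng.poisson(n_pre * (1 - sync) * rate * dt, steps).astype(float)
volleys = (rng.random(steps) < rate * dt) * (n_pre * sync)
drive = indep + volleys

# Leaky integrate-and-fire readout of the population (illustrative parameters).
tau_m, v_th, w = 0.02, 1.0, 0.008
v, spike_times = 0.0, []
for k in range(steps):
    v += -v * dt / tau_m + w * drive[k]
    if v >= v_th:
        spike_times.append(k * dt)
        v = 0.0

isi = np.diff(spike_times)
shape, loc, scale = genextreme.fit(isi)
print(f"{isi.size} ISIs; fitted GEV: shape={shape:.2f}, loc={1e3*loc:.1f} ms, scale={1e3*scale:.1f} ms")
```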

    Attentional modulation of firing rate and synchrony in a model cortical network

    When attention is directed into the receptive field of a V4 neuron, its contrast response curve is shifted to lower contrast values (Reynolds et al., 2000, Neuron 26:703). Attention also increases the coherence between neurons responding to the same stimulus (Fries et al., 2001, Science 291:1560). We studied how the firing rate and synchrony of a densely interconnected cortical network varied with contrast and how they were modulated by attention. We found that an increased driving current to the excitatory neurons increased the overall firing rate of the network, whereas variation of the driving current to inhibitory neurons modulated the synchrony of the network. We explain the synchrony modulation in terms of a locking phenomenon during which the ratio of excitatory to inhibitory firing rates is approximately constant for a range of driving current values. We explored the hypothesis that contrast is represented primarily as a drive to the excitatory neurons, whereas attention corresponds to a reduction in driving current to the inhibitory neurons. Using this hypothesis, the model reproduces the following experimental observations: (1) the firing rate of the excitatory neurons increases with contrast; (2) for high contrast stimuli, the firing rate saturates and the network synchronizes; (3) attention shifts the contrast response curve to lower contrast values; (4) attention leads to stronger synchronization that starts at a lower value of the contrast compared with the attend-away condition. In addition, it predicts that attention increases the delay between the inhibitory and excitatory synchronous volleys produced by the network, allowing the stimulus to recruit more downstream neurons.

    Comment: 36 pages, submitted to Journal of Computational Neuroscience.
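
The hypothesis that contrast acts as drive to the excitatory population while attention reduces the drive to the inhibitory population can be caricatured with a two-population rate model. The sketch below is not the paper's spiking network and reproduces only the leftward shift of the contrast response curve, not the saturation or synchrony effects; weights and names are assumptions.

```python
import numpy as np

def f(x):
    """Threshold-linear rate function."""
    return np.maximum(x, 0.0)

def excitatory_rate(I_E, I_I, steps=5000, dt=1e-3):
    """Steady-state excitatory rate of a toy two-population rate model in which
    contrast enters as drive to the E population (I_E) and attention is modelled
    as a reduced drive to the I population (I_I)."""
    w_EE, w_EI, w_IE = 1.5, 1.0, 1.0
    tau_E, tau_I = 0.02, 0.01
    r_E = r_I = 0.0
    for _ in range(steps):
        r_E += dt / tau_E * (-r_E + f(w_EE * r_E - w_EI * r_I + I_E))
        r_I += dt / tau_I * (-r_I + f(w_IE * r_E + I_I))
    return r_E

for contrast in np.linspace(0.0, 2.0, 5):
    away = excitatory_rate(I_E=contrast, I_I=1.0)        # attend-away: full drive to I cells
    attended = excitatory_rate(I_E=contrast, I_I=0.7)    # attention: reduced drive to I cells
    print(f"contrast {contrast:.1f}: rate {away:.2f} (away) vs {attended:.2f} (attended)")
```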

    Limits and dynamics of randomly connected neuronal networks

    Networks of the brain are composed of a very large number of neurons connected through a random graph and interacting after random delays, both of which depend on the anatomical distance between cells. In order to comprehend the role of these random architectures in the dynamics of such networks, we analyze the mesoscopic and macroscopic limits of networks with random correlated connectivity weights and delays. We address both averaged and quenched limits, and show propagation of chaos and convergence to a complex integral McKean-Vlasov equation with distributed delays. We then instantiate a completely solvable model illustrating the role of such random architectures in the emerging macroscopic activity. We particularly focus on the role of connectivity levels in the emergence of periodic solutions.
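
A numerical illustration of the mean-field behaviour analysed here: a randomly connected rate network with random weights and random transmission delays, in which fluctuations of the macroscopic (population-averaged) activity shrink as the network grows, as expected when a McKean-Vlasov-type limit exists. The tanh rate dynamics and every parameter below are our own illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_activity_fluctuation(N, steps=3000, dt=0.01, max_delay=40, sigma=0.05):
    """One realization of a randomly connected rate network with random coupling
    weights and random transmission delays (all modelling choices illustrative):
        x_i'(t) = -x_i(t) + (1/N) * sum_j J_ij * tanh(x_j(t - d_ij)) + noise.
    Returns the temporal standard deviation of the population-mean activity."""
    J = rng.normal(0.5, 0.5, (N, N))                    # random synaptic weights
    D = rng.integers(1, max_delay + 1, (N, N))          # random delays, in time steps
    X = np.zeros((steps + max_delay, N))
    X[:max_delay] = 0.1 * rng.standard_normal((max_delay, N))   # random initial history
    for t in range(max_delay, steps + max_delay):
        delayed = np.tanh(X[t - D, np.arange(N)])       # x_j evaluated at t - d_ij
        drive = (J * delayed).sum(axis=1) / N
        X[t] = X[t - 1] + dt * (-X[t - 1] + drive) + sigma * np.sqrt(dt) * rng.standard_normal(N)
    return X[steps // 2:].mean(axis=1).std()

# Fluctuations of the macroscopic (population-mean) activity shrink as N grows.
for N in (50, 200, 400):
    print(f"N={N:4d}: std of population-mean activity = {mean_activity_fluctuation(N):.4f}")
```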

    A neuro-inspired system for online learning and recognition of parallel spike trains, based on spike latency and heterosynaptic STDP

    Humans perform remarkably well in many cognitive tasks, including pattern recognition. However, the neuronal mechanisms underlying this process are not well understood. Nevertheless, artificial neural networks inspired by brain circuits have been designed and used to tackle spatio-temporal pattern recognition tasks. In this paper we present a multineuronal spike pattern detection structure able to autonomously implement online learning and recognition of parallel spike sequences (i.e., sequences of pulses belonging to different neurons/neural ensembles). The operating principle of this structure is based on two spiking/synaptic neurocomputational characteristics: spike latency, which enables neurons to fire spikes with a certain delay, and heterosynaptic plasticity, which allows synaptic weights to regulate themselves. From the perspective of information representation, the structure maps a spatio-temporal stimulus into a multidimensional, temporal feature space in which each coordinate, given by the time at which a particular neuron fires, represents one specific feature. In this sense, each feature can be considered to span a single temporal axis. We applied the proposed scheme to experimental data obtained from a motor inhibitory cognitive task. The test exhibits good classification performance, indicating the adequacy of our approach. In addition to its effectiveness, its simplicity and low computational cost suggest its suitability for large-scale implementation in real-time recognition applications in several areas, such as brain-computer interfaces, personal biometric authentication, or early detection of diseases.

    Comment: Submitted to Frontiers in Neuroscience.
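
The two ingredients named in this abstract, spike latency and heterosynaptic plasticity, can be mimicked with toy stand-ins. In the sketch below, `fire_latency` and `heterosynaptic_update` are hypothetical simplifications of the paper's mechanisms: repeated presentations of a parallel spike pattern potentiate the synapses that carry it and depress the others, so the detector ends up firing early for the learned pattern and not at all for a novel one.

```python
import numpy as np

def fire_latency(drive, tau=10.0, theta=0.1):
    """Illustrative spike-latency code: stronger drive -> earlier spike (latency in ms);
    sub-threshold drive -> no spike (infinite latency)."""
    return tau / drive if drive > theta else np.inf

def heterosynaptic_update(w, active, lr=0.05):
    """Toy heterosynaptic rule: synapses carrying the pattern that made the
    detector fire are potentiated, while all other synapses onto the same
    neuron are depressed, keeping the total weight roughly constant."""
    w = w.copy()
    w[active] += lr
    w[~active] -= lr * active.sum() / max((~active).sum(), 1)
    return np.clip(w, 0.0, 1.0)

n_inputs = 8
w = np.full(n_inputs, 0.5)                       # weights of one detector neuron
pattern = np.zeros(n_inputs, dtype=bool)
pattern[:3] = True                               # the parallel spike pattern to learn

for _ in range(30):                              # repeated presentations of the pattern
    w = heterosynaptic_update(w, pattern)

novel = np.zeros(n_inputs, dtype=bool)
novel[-3:] = True
print("latency to learned pattern:", fire_latency(w[pattern].sum()), "ms")
print("latency to novel pattern  :", fire_latency(w[novel].sum()), "ms")
```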

    Spontaneous Local Gamma Oscillation Selectively Enhances Neural Network Responsiveness

    Synchronized oscillations are commonly observed in many neuronal systems and might play an important role in shaping the response properties of those systems. We have studied how spontaneous oscillatory activity affects the responsiveness of a neuronal network, using a neural network model of the visual cortex built from Hodgkin-Huxley-type excitatory (E-) and inhibitory (I-) neurons. When the isotropic local E-I and I-E synaptic connections were sufficiently strong, the network commonly generated gamma-frequency oscillatory firing patterns in response to random feed-forward (FF) input spikes. This spontaneous oscillatory network activity injects a periodic local current that can amplify a weak synaptic input and enhance the network's responsiveness. When E-E connections were added, we found that the strength of the oscillation can be modulated by varying the FF input strength, without any changes in single-neuron properties or interneuron connectivity. The response modulation is proportional to the oscillation strength, which leads to self-regulation such that the cortical network selectively amplifies various FF inputs according to their strength, without requiring any adaptation mechanism. We show that this selective cortical amplification is controlled by E-E cell interactions. We also found that this response amplification is spatially localized, suggesting that the responsiveness modulation may also be spatially selective. This points to a general mechanism by which neural oscillatory activity can enhance the selectivity of a neural network to FF inputs.
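
The amplification mechanism can be illustrated with a single leaky integrate-and-fire neuron rather than the paper's Hodgkin-Huxley network. In the sketch below (all parameters invented for illustration), a weak input pulse evokes a spike only when it coincides with the depolarizing phase of an ongoing 40 Hz oscillatory drive; without the oscillation the same pulse stays sub-threshold.

```python
import numpy as np

dt, tau_m, v_th = 1e-4, 5e-3, 1.0
I0, gamma_amp, f_gamma = 0.8, 0.2, 40.0        # sub-threshold baseline, oscillation amplitude, Hz
pulse_amp, pulse_dur = 0.28, 5e-3               # a weak feed-forward input pulse

def spikes_evoked(osc_on, pulse_time, T=0.4):
    """Simulate a leaky integrate-and-fire neuron receiving a weak pulse at
    `pulse_time`, with or without an ongoing gamma-band oscillatory current."""
    t = np.arange(0.0, T, dt)
    I = np.full_like(t, I0)
    if osc_on:
        I += gamma_amp * np.sin(2 * np.pi * f_gamma * t)
    I[(t >= pulse_time) & (t < pulse_time + pulse_dur)] += pulse_amp
    v, count = I0, 0
    for k in range(len(t)):
        v += dt * (I[k] - v) / tau_m
        if v >= v_th:
            count += 1
            v = 0.0
    return count

# Scan the pulse across one gamma cycle (25 ms): the same weak pulse evokes a
# spike only when it coincides with the depolarizing phase of the oscillation.
phases = np.arange(0.2, 0.225, 0.002)
print("with gamma   :", [spikes_evoked(True, p) for p in phases])
print("without gamma:", [spikes_evoked(False, p) for p in phases])
```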

    Fast Synchronization of Perceptual Grouping in Laminar Visual Cortical Circuits

    Perceptual grouping is well known to be a fundamental process during visual perception, notably grouping across scenic regions that do not receive contrastive visual inputs. Illusory contours are a classical example of such groupings. Recent psychophysical and neurophysiological evidence has shown that the grouping process can facilitate rapid synchronization of the cells that are bound together by a grouping, even when the grouping must be completed across regions that receive no contrastive inputs. Synchronous grouping can thereby bind together different object parts that may have become desynchronized due to a variety of factors, and can enhance the efficiency of cortical transmission. Neural models of perceptual grouping have clarified how such fast synchronization may occur by using bipole grouping cells, whose predicted properties have been supported by psychophysical, anatomical, and neurophysiological experiments. These models have not, however, incorporated some of the realistic constraints on which groupings in the brain are conditioned, notably the measured spatial extent of long-range interactions in layer 2/3 of a grouping network, and realistic synaptic and axonal signaling delays within and across cells in different cortical layers. This work addresses the following questions: Can long-range interactions that obey the bipole constraint achieve fast synchronization under realistic anatomical and neurophysiological constraints that initially desynchronize grouping signals? Can the cells that synchronize retain their analog sensitivity to changing input amplitudes? Can the grouping process complete and synchronize illusory contours across gaps in bottom-up inputs? Our simulations show that the answer to these questions is Yes.

    Office of Naval Research (N00014-01-1-0624); Air Force Office of Scientific Research (F49620-01-1-03097).
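
Assessing whether grouping cells synchronize despite delays requires a population synchrony measure. The sketch below uses a simple Golomb/Rinzel-style index on binned spike counts, which is our illustrative choice rather than the measure used in the paper; it distinguishes independent spike trains from trains locked, with a small jitter, to a common gamma-period clock.

```python
import numpy as np

rng = np.random.default_rng(4)

def synchrony_index(spike_trains, bin_ms=5.0, t_max_ms=1000.0):
    """Population-synchrony measure on binned spike counts: variance of the
    population-averaged count divided by the mean single-train variance
    (roughly 1/N for independent trains, close to 1 for identical trains).
    Illustrative, not the paper's measure."""
    edges = np.arange(0.0, t_max_ms + bin_ms, bin_ms)
    counts = np.array([np.histogram(s, edges)[0] for s in spike_trains], dtype=float)
    mean_var = counts.var(axis=1).mean()
    return counts.mean(axis=0).var() / mean_var if mean_var > 0 else 0.0

# Desynchronized trains (independent, uniformly jittered spikes) vs trains locked
# to a shared gamma-period clock with a small jitter mimicking signaling delays.
n, rate_hz, t_max = 20, 40.0, 1000.0
independent = [np.sort(rng.uniform(0, t_max, int(rate_hz))) for _ in range(n)]
clock = np.arange(0, t_max, 25.0)                          # shared 40 Hz rhythm (ms)
locked = [np.sort(clock + rng.normal(0, 2.0, clock.size)) for _ in range(n)]
print("independent:", round(synchrony_index(independent), 3))
print("locked     :", round(synchrony_index(locked), 3))
```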

    Relating macroscopic measures of brain activity to fast dynamic neuronal interactions

    The aim of this thesis was to find a systematic relationship between neuronal synchrony and firing rates that would enable us to make inferences about one given knowledge of the other. Functional neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), are sensitive to changes in overall population synaptic activity, which can be interpreted in terms of rate coding for a particular stimulus or task. Characterising the relationship between synchrony and firing rates would therefore facilitate inferences about fast neuronal interactions on the basis of macroscopic measures such as those obtained by fMRI. In this thesis, we used computer simulations of neuronal networks and fMRI in humans to investigate the relationship between mean synaptic activity and fast synchronous neuronal interactions. We found that the extent to which different neurons engage in fast dynamic interactions is largely dependent on the neuronal population firing rates, and vice versa: as one metric changes (either activity or synchrony), so does the other. Additionally, as a result of the strong coupling between overall activity and neuronal synchrony, there is also a robust relationship between background activity and stimulus-evoked activity: increased background activity increases the gain of the neurons by decreasing effective membrane time constants, and thereby enhances stimulus-evoked population activity through the selection of fast synchronous dynamics. In concluding this thesis, we tested and confirmed with fMRI in humans that this mechanism may account for attentional modulation, i.e., that the change in baseline neuronal firing rates associated with attention, in cell assemblies selectively responding to an attended sensory attribute, enhances responses elicited by presentation of that attribute.
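
The gain mechanism invoked here, that background synaptic activity shortens the effective membrane time constant and so lets neurons follow fast synchronous input, can be made concrete with a simple RC-membrane calculation. The conductance and capacitance values below are illustrative, not fitted to the thesis data; the sketch shows only the time-constant effect, not the full simulation.

```python
import numpy as np

C_m, g_L = 200e-12, 10e-9          # membrane capacitance (F), leak conductance (S)
f_gamma = 40.0                      # gamma-band input frequency (Hz)

for g_bg in np.array([0.0, 10e-9, 30e-9, 90e-9]):   # background synaptic conductance (S)
    g_tot = g_L + g_bg
    tau_eff = C_m / g_tot                            # effective membrane time constant
    # Relative amplitude transfer of a sinusoidal input at gamma frequency for an
    # RC membrane: more background conductance -> shorter tau -> better transfer.
    attenuation = 1.0 / np.sqrt(1.0 + (2 * np.pi * f_gamma * tau_eff) ** 2)
    print(f"g_bg={g_bg*1e9:4.0f} nS  tau_eff={tau_eff*1e3:5.1f} ms  gamma transfer={attenuation:.2f}")
```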