
    Characterizing synaptic conductance fluctuations in cortical neurons and their influence on spike generation

    Cortical neurons are subject to sustained and irregular synaptic activity which causes significant fluctuations of the membrane potential (Vm). We review here different methods to characterize this activity and its impact on spike generation. The simplified, fluctuating point-conductance model of synaptic activity provides the starting point of a variety of methods for the analysis of intracellular Vm recordings. In this model, the synaptic excitatory and inhibitory conductances are described by Gaussian-distributed stochastic variables, or colored conductance noise. The matching of experimentally recorded Vm distributions to an invertible theoretical expression derived from the model allows the extraction of parameters characterizing the synaptic conductance distributions. This analysis can be complemented by the matching of experimental Vm power spectral densities (PSDs) to a theoretical template, even though the unexpected scaling properties of experimental PSDs limit the precision of this latter approach. Building on this stochastic characterization of synaptic activity, we also propose methods to qualitatively and quantitatively evaluate spike-triggered averages of synaptic time-courses preceding spikes. This analysis points to an essential role for synaptic conductance variance in determining spike times. The presented methods are evaluated using controlled conductance injection in cortical neurons in vitro with the dynamic-clamp technique. We review their applications to the analysis of in vivo intracellular recordings in cat association cortex, which suggest a predominant role for inhibition in determining both sub- and supra-threshold dynamics of cortical neurons embedded in active networks. Comment: 9 figures, Journal of Neuroscience Methods (in press, 2008)
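The point-conductance model described in this abstract is commonly implemented with Ornstein-Uhlenbeck ("colored noise") processes for the excitatory and inhibitory conductances, which then drive a passive membrane equation. A minimal sketch with illustrative parameter values (the constants below are assumptions for demonstration, not those of the paper):

```python
import numpy as np

def ou_step(g, mean, sigma, tau, dt, rng):
    """One Euler-Maruyama step of an Ornstein-Uhlenbeck (colored-noise) process."""
    return g + (mean - g) * dt / tau + sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()

def simulate(T=200.0, dt=0.05, seed=0):
    """Membrane potential (mV) of a passive cell driven by fluctuating
    excitatory and inhibitory OU conductances. Time is in ms; all
    parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    C, gL, EL = 1.0, 0.05, -70.0     # capacitance, leak conductance, leak reversal
    Ee, Ei = 0.0, -80.0              # excitatory / inhibitory reversal potentials
    ge, gi = 0.01, 0.04              # mean conductance values used as initial state
    v, vm = EL, np.empty(n)
    for t in range(n):
        ge = ou_step(ge, 0.01, 0.003, 2.7, dt, rng)    # fast excitatory noise
        gi = ou_step(gi, 0.04, 0.010, 10.5, dt, rng)   # slower inhibitory noise
        v += (-gL * (v - EL) - ge * (v - Ee) - gi * (v - Ei)) * dt / C
        vm[t] = v
    return vm

vm = simulate()
print(vm.mean(), vm.std())  # Vm settles near the effective reversal, with mV-scale fluctuations
```

The Gaussian-distributed conductances of the abstract correspond to the stationary distribution of these OU processes; fitting the resulting Vm distribution is what the "invertible theoretical expression" in the abstract exploits.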

    Transient Information Flow in a Network of Excitatory and Inhibitory Model Neurons: Role of Noise and Signal Autocorrelation

    We investigate the performance of sparsely-connected networks of integrate-and-fire neurons for ultra-short-term information processing. We exploit the fact that the population activity of networks with balanced excitation and inhibition can switch from an oscillatory firing regime to a state of asynchronous irregular firing or quiescence depending on the rate of external background spikes. We find that in terms of information buffering the network performs best for a moderate, non-zero amount of noise. Analogously to the phenomenon of stochastic resonance, the performance decreases for higher and lower noise levels. The optimal amount of noise corresponds to the transition zone between a quiescent state and a regime of stochastic dynamics. This provides a potential explanation of the role of non-oscillatory population activity in a simplified model of cortical micro-circuits. Comment: 27 pages, 7 figures, to appear in J. Physiology (Paris) Vol. 9
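The dependence of activity on the external background rate can be illustrated with a single leaky integrate-and-fire neuron driven by Poisson background spikes: at low rates the neuron is quiescent, at higher rates it fires irregularly. A minimal sketch with assumed parameters (the paper studies a full recurrent network, not a single cell):

```python
import numpy as np

def lif_rate(ext_rate, T=1000.0, dt=0.1, seed=0):
    """Output firing rate (Hz) of a leaky integrate-and-fire neuron
    receiving Poisson background input at ext_rate (Hz).
    Time in ms, voltages in mV; parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    tau_m, v_rest, v_thresh, v_reset = 20.0, -70.0, -50.0, -60.0
    w = 0.5                         # voltage jump per background spike (mV)
    p = ext_rate * dt * 1e-3        # spike probability per time step
    v, spikes = v_rest, 0
    for _ in range(int(T / dt)):
        v += (v_rest - v) * dt / tau_m      # leak toward rest
        if rng.random() < p:                # Poisson background spike arrives
            v += w
        if v >= v_thresh:                   # threshold crossing -> spike and reset
            v, spikes = v_reset, spikes + 1
    return spikes * 1e3 / T

for r in (500, 2000, 8000):
    print(r, lif_rate(r))   # firing turns on as background drive crosses the transition zone
```

The transition zone this sweep exposes, where the mean drive sits just below threshold and fluctuations carry the cell across, is the regime the abstract identifies as optimal for information buffering.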

    Image informatics strategies for deciphering neuronal network connectivity

    Brain function relies on an intricate network of highly dynamic neuronal connections that rewires dramatically under the impulse of various external cues and pathological conditions. Among the neuronal structures that show morphological plasticity are neurites, synapses, dendritic spines and even nuclei. This structural remodelling is directly connected with functional changes such as intercellular communication and the associated calcium-bursting behaviour. In vitro cultured neuronal networks are valuable models for studying these morpho-functional changes. Owing to the automation and standardisation of both image acquisition and image analysis, it has become possible to extract statistically relevant readout from such networks. Here, we focus on the current state-of-the-art in image informatics that enables quantitative microscopic interrogation of neuronal networks. We describe the major correlates of neuronal connectivity and present workflows for analysing them. Finally, we provide an outlook on the challenges that remain to be addressed, and discuss how imaging algorithms can be extended beyond in vitro imaging studies.

    The Use of Features Extracted from Noisy Samples for Image Restoration Purposes

    An important feature of neural networks is their ability to learn from their environment and, through learning, to improve their performance in some sense. In the following we restrict the development to feature-extracting unsupervised neural networks derived from the biologically motivated Hebbian self-organizing principle, which is conjectured to govern natural neural assemblies, and to the classical principal component analysis (PCA) method used by statisticians for almost a century for multivariate data analysis and feature extraction. The research reported in the paper aims to propose a new image reconstruction method based on features extracted from the noise, given by the principal components of the noise covariance matrix. Keywords: feature extraction, PCA, Generalized Hebbian Algorithm, image restoration, wavelet transform, multiresolution support set
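The Generalized Hebbian Algorithm named in the keywords (Sanger's rule) is the standard way such an unsupervised Hebbian network converges to the leading principal components of its input. A minimal sketch on synthetic 2-D data (learning rate, epochs, and data are illustrative assumptions):

```python
import numpy as np

def gha(X, n_components, lr=0.01, epochs=50, seed=0):
    """Generalized Hebbian Algorithm (Sanger's rule) on zero-mean data X
    of shape (samples, features). Returns weight rows that converge to the
    top principal directions."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_components, X.shape[1])) * 0.1
    for _ in range(epochs):
        for x in X:
            y = W @ x
            # Hebbian term y x^T, minus projections onto this and earlier units
            # (the lower-triangular part implements Sanger's deflation)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Correlated 2-D data whose first principal axis lies close to (1, 1)/sqrt(2)
rng = np.random.default_rng(1)
z = rng.standard_normal(500)
X = np.stack([z + 0.1 * rng.standard_normal(500),
              z + 0.1 * rng.standard_normal(500)], axis=1)
X -= X.mean(axis=0)

W = gha(X, n_components=1)
w = W[0] / np.linalg.norm(W[0])
print(w)   # up to sign, close to (0.707, 0.707)
```

For a single output unit Sanger's rule reduces to Oja's rule; with multiple units the triangular deflation term makes each successive row learn the next principal component.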

    Heterogeneous Mean Field for neural networks with short term plasticity

    We report on the main dynamical features of a model of leaky integrate-and-fire excitatory neurons with short-term plasticity defined on random massive networks. We investigate the dynamics by a Heterogeneous Mean-Field formulation of the model, which is able to reproduce dynamical phases characterized by the presence of quasi-synchronous events. This formulation also allows one to solve the inverse problem of reconstructing the in-degree distribution for different network topologies from the knowledge of the global activity field. We study the robustness of this inversion procedure by providing numerical evidence that the in-degree distribution can be recovered even in the presence of noise and disorder in the external currents. Finally, we discuss the validity of the heterogeneous mean-field approach for sparse networks with a sufficiently large average in-degree.
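Short-term plasticity of the kind used in such models is often described by Tsodyks-Markram-style synaptic depression, where each spike consumes a fraction of a finite synaptic resource that recovers between spikes. A minimal sketch with assumed parameters (not necessarily the paper's exact formulation):

```python
import numpy as np

def depressing_synapse(spike_times, tau_rec=800.0, U=0.5):
    """Effective strength U*x of each presynaptic spike for a depressing synapse.

    x is the fraction of available synaptic resources: each spike uses a
    fraction U of what is left, and x recovers toward 1 with time constant
    tau_rec (ms). Times are in ms; parameters are illustrative."""
    x, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:
            # exponential recovery of resources since the previous spike
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        amps.append(U * x)   # transmitted amplitude of this spike
        x -= U * x           # resources consumed by this spike
        last_t = t
    return amps

amps = depressing_synapse([0, 20, 40, 60, 1500])
print(amps)   # amplitudes shrink within the burst, then recover after the long pause
```

This resource depletion is what makes high-frequency bursts self-limiting and is a typical ingredient behind the quasi-synchronous events the abstract describes.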

    Frequency dependence of signal power and spatial reach of the local field potential

    The first recording of electrical potential from brain activity was reported as early as 1875, yet the interpretation of the signal is still debated. To take full advantage of the new generation of microelectrodes with hundreds or even thousands of electrode contacts, an accurate quantitative link between what is measured and the underlying neural circuit activity is needed. Here we address the question of how the observed frequency dependence of recorded local field potentials (LFPs) should be interpreted. By use of a well-established biophysical modeling scheme, combined with detailed reconstructed neuronal morphologies, we find that correlations in the synaptic inputs onto a population of pyramidal cells may significantly boost the low-frequency components of the generated LFP. We further find that these low-frequency components may be less `local' than the high-frequency LFP components in the sense that (1) the size of the signal-generation region of the LFP recorded at an electrode is larger and (2) the LFP generated by a synaptically activated population spreads further outside the population edge due to volume conduction.
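The correlation-driven boost the abstract reports has a simple statistical core: the summed contribution of N correlated sources grows like N, while independent sources add up only like sqrt(N). A toy illustration of that scaling (the paper itself uses detailed biophysical forward models, not this shortcut):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 10000           # number of sources, number of time samples
c = 0.3                     # assumed pairwise correlation between source signals

shared = rng.standard_normal(T)           # common component shared by all sources
private = rng.standard_normal((N, T))     # independent component of each source

# each correlated source has unit variance and pairwise correlation c
correlated = np.sqrt(c) * shared + np.sqrt(1.0 - c) * private
uncorrelated = private

# summed signals: variance N^2*c + N*(1-c) for correlated, N for uncorrelated
print(correlated.sum(axis=0).std())       # ~ sqrt(N^2*c + N*(1-c)), dominated by N
print(uncorrelated.sum(axis=0).std())     # ~ sqrt(N)
```

Because synaptic input correlations in cortex are strongest at low frequencies, this N-versus-sqrt(N) scaling selectively amplifies the low-frequency LFP components and enlarges their spatial reach.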

    The Physics of Living Neural Networks

    Improvements in technique in conjunction with an evolution of the theoretical and conceptual approach to neuronal networks provide a new perspective on living neurons in culture. Organization and connectivity are being measured quantitatively along with other physical quantities such as information, and are being related to function. In this review we first discuss some of these advances, which enable elucidation of structural aspects. We then discuss two recent experimental models that yield some conceptual simplicity. A one-dimensional network enables precise quantitative comparison to analytic models, for example of propagation and information transport. A two-dimensional percolating network gives quantitative information on connectivity of cultured neurons. The physical quantities that emerge as essential characteristics of the network in vitro are propagation speeds, synaptic transmission, information creation and capacity. Potential application to neuronal devices is discussed. Comment: PACS: 87.18.Sn, 87.19.La, 87.80.-y, 87.80.Xa, 64.60.Ak Keywords: complex systems, neuroscience, neural networks, transport of information, neural connectivity, percolation http://www.weizmann.ac.il/complex/tlusty/papers/PhysRep2007.pdf http://www.weizmann.ac.il/complex/EMoses/pdf/PhysRep-448-56.pd