
    Model-based analysis of stability in networks of neurons

    Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to adjust their intrinsic properties and the strengths of their connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in a state of constant flux? First, will purely technical problems, akin to short-circuiting or runaway activity, not arise? And second, if neurons are so readily plastic and changeable, how can they provide a reliable description of the environment?

    Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and from scientific experiments. How does this robustness come about? Firstly, many feedback control mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not become overexcited by its inputs: whenever learning occurs and connections between cells are strengthened, all of the neuron's inputs are subsequently scaled down to maintain a stable level of net incoming signal.

    And secondly, as hinted at by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness: they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits variation between individuals and confers robustness to mutations and developmental perturbations: since many combinations of parameters result in similar operational behaviour, a disturbance of a single parameter, or even of several, need not lead to dysfunction. It is that same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. Previous experimental studies have found that consistent multi-neuron activity patterns arise across organisms, despite inter-individual differences in the firing profiles of single cells and in the precise values of connection strengths. Such activity patterns, it has furthermore been shown, can be maintained under pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be compensated in this way.
    In the present work, it is directly demonstrated for the first time that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal which parameter combinations are sensitive and which are insensitive; and it is shown that spontaneous fluctuations over time primarily affect the insensitive parameters. To demonstrate this, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. Statistical models were then fitted to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis using information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of the collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour.

    Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay of feedback control of a few crucial neuronal properties and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such a design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
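
    The modelling step described above can be illustrated with a minimal, hypothetical sketch: binarize the spike trains into time bins and fit a pairwise (Ising-style) model by pseudo-likelihood, i.e. by predicting each neuron's activity from that of all the others. The toy data, variable names and use of scikit-learn below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy data: 10,000 time bins x 8 neurons, each entry a 0/1 "did the neuron spike" flag.
spikes = (rng.random((10_000, 8)) < 0.05).astype(int)

n_neurons = spikes.shape[1]
bias = np.zeros(n_neurons)                   # per-neuron excitability terms
coupling = np.zeros((n_neurons, n_neurons))  # pairwise interaction terms

# Pseudo-likelihood fit: predict each neuron's activity from all other neurons.
for i in range(n_neurons):
    others = np.delete(np.arange(n_neurons), i)
    clf = LogisticRegression().fit(spikes[:, others], spikes[:, i])
    bias[i] = clf.intercept_[0]
    coupling[i, others] = clf.coef_[0]

coupling = 0.5 * (coupling + coupling.T)     # symmetrize the pairwise couplings
print("biases:", bias.round(2))
print("couplings:\n", coupling.round(2))
```

    The fitted biases and couplings would then be the parameters whose sensitivity is probed in the analysis described above.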

    Sloppiness in spontaneously active neuronal networks

    Various plasticity mechanisms, including experience-dependent, spontaneous, and homeostatic ones, continuously remodel neural circuits. Yet, despite fluctuations in the properties of single neurons and synapses, the behavior and function of neuronal assemblies are generally found to be very stable over time. This raises the important question of how plasticity is coordinated across the network. To address this, we investigated the stability of network activity in cultured rat hippocampal neurons recorded with high-density multielectrode arrays over several days. We used parametric models to characterize multineuron activity patterns and analyzed their sensitivity to changes. We found that the models exhibited sloppiness, a property whereby model behavior is insensitive to changes in many parameter combinations but very sensitive to a few. The activity of neurons with sloppy parameters showed faster and larger fluctuations than the activity of a small subset of neurons associated with sensitive parameters. Furthermore, parameter sensitivity was highly correlated with firing rates. Finally, we tested our observations from cell cultures on an in vivo recording from monkey visual cortex and confirmed that spontaneous cortical activity also shows hallmarks of sloppy behavior and firing rate dependence. Our findings suggest that a small subnetwork of highly active and stable neurons supports group stability, and that this endows neuronal networks with the flexibility to continuously remodel without compromising stability and function.
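
    As a rough, hypothetical illustration of the sensitivity analysis referred to above: sloppiness is commonly diagnosed from the eigenvalue spectrum of a Fisher information matrix, with a few large ("stiff") eigenvalues carrying most of the sensitivity and many small ("sloppy") ones spanning several orders of magnitude. The independent-Bernoulli firing model and toy data below are simplifying assumptions used only to make the computation concrete; the models in the study are richer.

```python
import numpy as np

rng = np.random.default_rng(1)
rates = rng.uniform(0.01, 0.2, size=20)                   # per-neuron firing probabilities
spikes = (rng.random((5_000, 20)) < rates).astype(float)  # toy data: time bins x neurons

# Simple model: neuron i spikes in a bin with probability sigmoid(theta_i).
p_hat = spikes.mean(0)
theta = np.log(p_hat / (1 - p_hat))                       # maximum-likelihood parameters

# Per-bin score vectors (gradient of the log-likelihood) and the empirical
# Fisher information matrix as the average outer product of the scores.
scores = spikes - p_hat                                   # shape: bins x parameters
fim = scores.T @ scores / scores.shape[0]

# Sloppiness signature: eigenvalues spread over orders of magnitude, with a
# few stiff directions dominating the total sensitivity.
eigvals = np.sort(np.linalg.eigvalsh(fim))[::-1]
print("largest / smallest eigenvalue:", (eigvals[0] / eigvals[-1]).round(1))
print("fraction of sensitivity in top 3 directions:",
      (eigvals[:3].sum() / eigvals.sum()).round(2))
```

    In a pairwise model the couplings between neurons would enter the score vectors as well, which is where strongly sloppy spectra typically arise.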

    Spike Detection for Large Neural Populations Using High Density Multielectrode Arrays

    An emerging generation of high-density microelectrode arrays (MEAs) is now capable of recording spiking activity simultaneously from thousands of neurons with closely spaced electrodes. Reliable spike detection and analysis in such recordings is challenging due to the large amount of raw data and the dense sampling of spikes with closely spaced electrodes. Here, we present a highly efficient, online-capable spike detection algorithm, and an offline method with improved detection rates, which enables estimation of spatial event locations at a resolution higher than that provided by the array by combining information from multiple electrodes. Data acquired with a 4,096-channel MEA from neuronal cultures and the neonatal retina, as well as synthetic data, were used to test and validate these methods. We demonstrate that these algorithms outperform conventional methods due to a better noise estimate and an improved signal-to-noise ratio obtained by combining information from multiple electrodes. Finally, we present a new approach for analyzing population activity based on the characterization of the spatio-temporal event profile, which does not require the isolation of single units. Overall, we show how the improved spatial resolution provided by high-density, large-scale microelectrode arrays can be reliably exploited to characterize activity from large neural populations and brain circuits.
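
    The two ingredients emphasized above, a robust noise estimate and the combination of information from neighboring electrodes, can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration (threshold crossing on a toy signal with a median-based noise estimate, plus an amplitude-weighted center of mass over an assumed 8 x 8 electrode grid); it is not the published algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 20_000                                   # sampling rate in Hz (assumed)
signal = rng.normal(0.0, 5.0, size=(fs, 64))  # 1 s of noise on an 8 x 8 grid (toy data)
signal[10_000, 7] -= 60.0                     # insert one artificial negative spike

# Robust per-channel noise estimate: median absolute deviation, scaled to sigma.
noise = np.median(np.abs(signal - np.median(signal, axis=0)), axis=0) / 0.6745
threshold = -5.0 * noise                      # detect large negative deflections

# Per-channel threshold crossings (a real detector would also merge events
# that appear on several neighboring channels at the same time).
events = list(zip(*np.where(signal < threshold)))

# Sub-electrode localization: amplitude-weighted center of mass over the
# 3 x 3 neighborhood of the detected channel on the assumed 8 x 8 grid.
for t, ch in events:
    amps = np.clip(-signal[t], 0.0, None).reshape(8, 8)
    r, c = divmod(int(ch), 8)
    rows = slice(max(r - 1, 0), min(r + 2, 8))
    cols = slice(max(c - 1, 0), min(c + 2, 8))
    patch = amps[rows, cols]
    ys, xs = np.mgrid[rows, cols]
    y = (patch * ys).sum() / patch.sum()
    x = (patch * xs).sum() / patch.sum()
    print(f"sample {t}: channel {ch}, estimated grid location ({y:.2f}, {x:.2f})")
```

    Weighting nearby channels by their amplitudes is what allows the event location to be placed between electrodes, i.e. at a resolution higher than the physical pitch of the array.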

    Statistical Analysis of Sleep Spindle Occurrences

    Spindles - a hallmark of stage II sleep - are a transient oscillatory phenomenon in the EEG believed to reflect thalamocortical activity contributing to unresponsiveness during sleep. Currently, spindles are often classified into two classes: fast spindles, with a frequency of around 14 Hz, occurring in the centro-parietal region; and slow spindles, with a frequency of around 12 Hz, prevalent in the frontal region. Here we aim to establish whether the spindle generation process also exhibits spatial heterogeneity. Electroencephalographic recordings from 20 subjects were automatically scanned to detect spindles, and the occurrence times of spindles were used for statistical analysis. Gamma distribution parameters were fit to each inter-spindle interval distribution, and a modified Wald-Wolfowitz lag-1 correlation test was applied. Results indicate that not all spindles are generated by the same statistical process, but this dissociation is not spindle-type specific. Although this dissociation is not topographically specific, a single generator for all spindle types appears unlikely.
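
    A small, self-contained sketch of the interval analysis described above, using synthetic spindle times in place of the detected ones: a maximum-likelihood gamma fit to the inter-spindle intervals (a shape parameter near one is consistent with a Poisson process), with a plain lag-1 serial correlation as a simplified stand-in for the modified Wald-Wolfowitz test used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic stand-in for detected spindle center times (seconds), memoryless by construction.
spindle_times = np.cumsum(rng.exponential(scale=20.0, size=300))
intervals = np.diff(spindle_times)                      # inter-spindle intervals

# Maximum-likelihood gamma fit (location fixed at 0, as appropriate for waiting times).
shape, loc, scale = stats.gamma.fit(intervals, floc=0)
print(f"gamma shape = {shape:.2f}, scale = {scale:.1f} s")  # shape ~ 1 -> Poisson-like

# Lag-1 correlation between successive intervals; a memoryless process shows none.
r, p = stats.pearsonr(intervals[:-1], intervals[1:])
print(f"lag-1 correlation r = {r:.2f} (p = {p:.2f})")
```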

    Example overview of statistical spindle properties.

    Statistical properties of spindles in subject 3: in grey, histograms of inter-spindle intervals for four scalp locations ((A) Fpz, (B) Fz, (C) Cz, and (D) Pz; inter-spindle interval taken as the center-to-center separation in time of automatically detected spindles of stage 2 sleep; histogram bin width: 1 s); superimposed in solid lines, gamma distributions fit to the distribution of inter-spindle intervals. In all four locations, the maximum-likelihood fit yields a gamma distribution with shape parameter close to one, suggesting a Poisson process.

    Example of gamma fitting evaluation.

    Quality assessment of gamma distribution fits for subject 3 (the fits from Fig. 2) across locations ((A) Fpz, (B) Fz, (C) Cz, and (D) Pz): Kolmogorov-Smirnov plots for each of the four fits of the gamma distribution to the spindle interval distribution; dotted lines represent 95% confidence bounds. In all four panels the KS plots lie entirely within the confidence bounds, pointing to statistically acceptable agreement between the model and the data.
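
    For reference, the kind of goodness-of-fit check summarized by such KS plots can be reproduced in a few lines. The snippet below is a self-contained, hypothetical example on synthetic intervals: a KS statistic against the fitted gamma CDF and a 95% Dvoretzky-Kiefer-Wolfowitz band on the empirical CDF. Note that fitting the parameters to the same data makes the nominal p-value somewhat optimistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
intervals = rng.gamma(shape=1.0, scale=20.0, size=300)  # toy inter-spindle intervals (s)

# Fit and compare: Kolmogorov-Smirnov distance between data and fitted gamma CDF.
shape, loc, scale = stats.gamma.fit(intervals, floc=0)
ks_stat, p_value = stats.kstest(intervals, "gamma", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.2f}")

# 95% confidence band half-width on the empirical CDF (Dvoretzky-Kiefer-Wolfowitz):
# the fit is acceptable in the KS-plot sense if the model CDF stays within +/- eps.
eps = np.sqrt(np.log(2 / 0.05) / (2 * len(intervals)))
print(f"95% band half-width = {eps:.3f}; within band: {ks_stat < eps}")
```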