79 research outputs found

    Development of statistical and computational methods to estimate functional connectivity and topology in large-scale neuronal assemblies

    One of the most fundamental features of a neural circuit is its connectivity, since the activity of a single neuron is determined not only by its intrinsic properties but, above all, by the direct or indirect influence of other neurons [1]. It is therefore essential to develop research strategies aimed at a comprehensive structural description of neuronal interconnections, as well as of the network elements forming the human connectome. The connectome will significantly increase our understanding of how functional brain states emerge from their underlying structural substrate, and will provide new mechanistic insights into how brain function is affected if this structural substrate is disrupted. The connectome is characterized by three different types of connectivity: structural, functional and effective connectivity. The final goal of a connectivity analysis is the reconstruction of the human connectome, and thus the application of statistical measures to in vivo models in both physiological and pathological states. Since the systems under study (i.e., brain areas, cell assemblies) are highly complex, it is useful to adopt a reductionist approach to achieve this goal. During my PhD work, I focused on a reduced and simplified model: neural networks chronically coupled to Micro-Electrode Arrays (MEAs). Large networks of cortical neurons developing in vitro and chronically coupled to MEAs [2] are a well-established experimental model for studying neuronal dynamics at the network level [3] and for understanding the basic principles of information coding [4], learning and memory [5]. Thus, during my PhD work, I developed and optimized statistical methods to infer functional connectivity from spike train data. In particular, I worked on correlation-based methods (cross-correlation and partial correlation) and information-theory-based methods: Transfer Entropy (TE) and Joint Entropy (JE). 
More in detail, my PhD’s aim has been to apply functional connectivity methods to neural networks coupled to high-density acquisition systems, such as the 3Brain active-pixel sensor array with 4096 electrodes [6]. To fulfill this aim, I re-adapted the computational logic of the aforementioned connectivity methods. Moreover, I worked on a new method based on the cross-correlogram, able to detect both inhibitory and excitatory links, which I called the Filtered Normalized Cross-Correlation Histogram (FNCCH). The FNCCH shows very high precision in detecting both inhibitory and excitatory functional links when applied to our in silico model. I also worked on temporal and pattern extensions of the TE algorithm, developing a Delayed TE (DTE) and a Delayed High-Order TE (DHOTE). Unlike basic TE, these two extensions consider patterns spanning different temporal bins at different temporal delays. I also worked on an algorithm for the JE computation: starting from the mathematical definition in [7], I developed a customized version of JE capable of detecting the delay associated with a functional link, together with a dedicated shuffling-based thresholding approach. Finally, I embedded all of these connectivity methods into a user-friendly open-source software named SPICODYN [8]. SPICODYN allows the user to perform a complete analysis on data acquired from any acquisition system. I used a standard format for the input data, providing the user with the possibility to perform a complete set of operations on them, including raw data viewing, spike and burst detection and analysis, functional connectivity analysis, and graph-theoretic and topological analysis. SPICODYN inherits its backbone structure from TOOLCONNECT, a previously published software for functional connectivity analysis of spike train data.
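For reference, the basic binned Transfer Entropy that DTE and DHOTE extend can be sketched as follows. This is a minimal illustration with single-bin histories, not the SPICODYN implementation; the function name and structure are ours:

```python
import numpy as np

def transfer_entropy(x, y, delay=1):
    """Binned Transfer Entropy from x to y (bits) with single-bin
    histories: TE = sum p(yf,yp,xp) * log2[p(yf|yp,xp) / p(yf|yp)].

    x, y: equal-length binary (0/1) spike-count arrays.
    """
    y_future = y[delay:]
    y_past = y[:-delay]
    x_past = x[:-delay]
    # Encode each (y_future, y_past, x_past) triple as an integer 0..7.
    states = y_future * 4 + y_past * 2 + x_past
    p = np.bincount(states, minlength=8) / len(states)
    te = 0.0
    for s in range(8):
        if p[s] == 0:
            continue
        yf, yp, xp = (s >> 2) & 1, (s >> 1) & 1, s & 1
        p_yp_xp = p[yp * 2 + xp] + p[4 + yp * 2 + xp]          # p(yp, xp)
        p_yp = sum(p[a * 4 + yp * 2 + b] for a in (0, 1) for b in (0, 1))
        p_yf_yp = p[yf * 4 + yp * 2] + p[yf * 4 + yp * 2 + 1]  # p(yf, yp)
        te += p[s] * np.log2(p[s] * p_yp / (p_yp_xp * p_yf_yp))
    return te
```

DTE and DHOTE generalize this scheme by scanning a range of delays and by conditioning on multi-bin history patterns rather than a single past bin.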

    Identification of excitatory-inhibitory links and network topology in large-scale neuronal assemblies from multi-electrode recordings

    Functional-effective connectivity and network topology are nowadays key issues for studying brain physiological functions and pathologies. Inferring neuronal connectivity from electrophysiological recordings presents open challenges and unsolved problems. In this work, we present a cross-correlation based method for reliably estimating not only excitatory but also inhibitory links, by analyzing multi-unit spike activity from large-scale neuronal networks. The method is validated by means of realistic simulations of large-scale neuronal populations. New results on functional connectivity estimation and network topology identification, obtained from experimental electrophysiological recordings of high-density and large-scale (i.e., 4096-electrode) microtransducer arrays coupled to in vitro neural populations, are presented. Specifically, we show that: (i) functional inhibitory connections are accurately identified in in vitro cortical networks, provided that a reasonable firing rate and recording length are achieved; (ii) small-world topology, with scale-free and rich-club features, is reliably obtained, provided that a minimum number of active recording sites are available. The method and procedure can be directly extended and applied to in vivo multi-unit brain activity recordings.
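The core idea can be sketched in a few lines: build a normalized cross-correlogram and classify a putative link by whether its dominant deviation from baseline is a peak (excitatory) or a trough (inhibitory). This is a toy illustration of the principle, not the FNCCH filtering described in the paper; all names are ours:

```python
import numpy as np

def cross_correlogram(ref, target, window=10):
    """Spike-count cross-correlogram between two binary spike trains:
    mean number of target spikes at each lag around a reference spike
    (circular shifts are used, so edges wrap; fine for a sketch)."""
    lags = np.arange(-window, window + 1)
    ref_times = np.flatnonzero(ref)
    cch = np.zeros(len(lags))
    for i, lag in enumerate(lags):
        # Align target activity at time t + lag with reference spikes at t.
        cch[i] = np.roll(target, -lag)[ref_times].sum()
    return lags, cch / max(len(ref_times), 1)

def classify_link(ref, target, window=10):
    """Label a putative link as excitatory or inhibitory from the largest
    deviation of the correlogram from its mean (a toy FNCCH-like rule)."""
    lags, cch = cross_correlogram(ref, target, window)
    deviation = cch - cch.mean()
    k = int(np.argmax(np.abs(deviation)))
    return ("excitatory" if deviation[k] > 0 else "inhibitory"), int(lags[k])
```

The published method additionally filters the correlogram and thresholds deviations against a significance level; the sketch above only conveys why a trough, and not just a peak, carries link information.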

    Connectivity Influences on Nonlinear Dynamics in Weakly-Synchronized Networks: Insights from Rössler Systems, Electronic Chaotic Oscillators, Model and Biological Neurons

    Natural and engineered networks, such as interconnected neurons, ecological and social networks, coupled oscillators, wireless terminals and power loads, are characterized by an appreciable heterogeneity in the local connectivity around each node. For instance, in both elementary structures such as stars and complex graphs having scale-free topology, a minority of elements are linked to the rest of the network disproportionately strongly. While the effect of the arrangement of structural connections on the emergent synchronization pattern has been studied extensively, considerably less is known about its influence on the temporal dynamics unfolding within each node. Here, we present a comprehensive investigation across diverse simulated and experimental systems, encompassing star and complex networks of Rössler systems, coupled hysteresis-based electronic oscillators, microcircuits of leaky integrate-and-fire model neurons, and finally recordings from in-vitro cultures of spontaneously-growing neuronal networks. We systematically consider a range of dynamical measures, including the correlation dimension, nonlinear prediction error, permutation entropy, and other information-theoretical indices. The empirical evidence gathered reveals that in situations of weak synchronization, wherein, rather than a collective behavior, one observes significantly differentiated dynamics, denser connectivity tends to locally promote the emergence of stronger signatures of nonlinear dynamics. In deterministic systems, transition to chaos and generation of higher-dimensional signals were observed; however, when the coupling is stronger, this relationship may be lost or even inverted. In systems with a strong stochastic component, the generation of more temporally-organized activity could be induced. 
These observations have many potential implications across diverse fields of basic and applied science, for example, in the design of distributed sensing systems based on wireless coupled oscillators, in network identification and control, as well as in the interpretation of neuroscientific and other dynamical data
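Among the dynamical measures listed, permutation entropy is particularly compact to sketch. A minimal Bandt–Pompe implementation (ours, not the authors' code) counts ordinal patterns of short windows and takes their normalized Shannon entropy:

```python
import numpy as np
from itertools import permutations
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (Bandt-Pompe) of a 1-D signal.

    Counts the relative frequency of ordinal patterns of length `order`
    and returns their Shannon entropy, normalized to [0, 1]: 0 for a
    fully predictable ordering, 1 for equiprobable patterns.
    """
    n = len(x) - (order - 1) * delay
    pattern_index = {p: i for i, p in enumerate(permutations(range(order)))}
    counts = np.zeros(factorial(order))
    for t in range(n):
        window = x[t:t + order * delay:delay]
        # The argsort of the window is its ordinal (rank-order) pattern.
        counts[pattern_index[tuple(np.argsort(window))]] += 1
    p = counts[counts > 0] / n
    return float(-(p * np.log2(p)).sum() / np.log2(factorial(order)))
```

A monotonic ramp produces a single ordinal pattern (entropy 0), while white noise visits all patterns nearly uniformly (entropy close to 1), which is the contrast the measure exploits.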

    Methods to Enhance Information Extraction from Microelectrode Array Measurements of Neuronal Networks

    For the last couple of decades, hand-in-hand progress in stem-cell technologies and neuronal cell culturing, together with advances in microelectrode array (MEA) technology, has enabled more efficient biological models. Thus, understanding neuronal behavior in general, or realizing a particular disease model by studying neuronal responses to pharmacological or neurotoxicological assays, has become more achievable. Moreover, with the widespread practical usage of MEA technology, a vast amount of new types of data has been collected to be analyzed. Conventionally, MEA data from neuronal networks have been analyzed, e.g., with methods using predefined parameters and suitable for analyzing only specific neuronal behaviors, or by considering only a portion of the data, such as extracted extracellular action potentials (EAPs). Therefore, in addition to the current analysis methods, novel methods and newly acquired measures are needed to understand the new models. In fact, we hypothesized that existing measurement data carry much more information than is considered at present. In this thesis, we proposed novel methods and measures to increase the information that can be extracted from MEA recordings; we hope these will contribute to a better understanding of neuronal behaviors and interactions. Firstly, to analyze the firing properties of neuronal ensembles, we developed a method which identifies bursts based on the spiking behavior of the recordings; thus, the method is feasible for cultures with variable firing dynamics. The method was also designed to process a large amount of data automatically for statistical justification. Therefore, we increased the analysis power in the subsequent analyses in comparison to existing burst detection methods, which use pre-defined and strict definitions. Subsequently, we proposed novel metrics to evaluate and quantify the information content of the bursts. 
Entropy-based measures were employed to quantify bursts according to their self-similarity and spectral uniformity. We showed that different types of bursts can be distinguished using entropy-based measures. Also, the joint analysis of bursts and action potential waveforms was proposed to obtain a novel type of information, i.e., the spike-type compositions of bursts. We showed that the spike-type compositions of bursts change under different pharmacological applications. In addition, we developed a novel method to calculate synchronization between neuronal ensembles by evaluating their time-variant spectral distributions: for that, we assessed correlations of the spectral entropy (CorSE). We showed that CorSE was able to estimate synchronicity from both local field potentials (LFPs) and extracellular action potentials (EAPs); thus, we could contribute to understanding synchronicity between neuronal ensembles that do not exhibit detectable EAPs. In conclusion, motivated by the recent popularity of MEA usage in the neuroscience field, we developed novel and enhanced methods to derive new types of information. We showed that by using the developed methods one can extract additional information from MEA recordings. As a result, the proposed methods and metrics enhance the analysis efficiency of microelectrode-array-based studies and provide different viewpoints for the analyses. The derived information contributes to interpreting neuronal signals recorded from single or multiple recording locations. Consequently, the methods presented in this thesis are important complements to the existing methods for understanding neuronal behavior and population-wise neuronal interactions.
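The CorSE idea can be sketched compactly: compute the spectral entropy of each channel in sliding windows and correlate the resulting time series. This is a simplified illustration of the principle; the windowing, preprocessing and significance testing used in the thesis are omitted, and the function names are ours:

```python
import numpy as np

def spectral_entropy(segment):
    """Shannon entropy of the normalized power spectrum, in [0, 1]:
    near 0 for a narrowband signal, near 1 for white noise."""
    psd = np.abs(np.fft.rfft(segment)) ** 2
    psd = psd / psd.sum()
    nz = psd[psd > 0]
    return float(-(nz * np.log2(nz)).sum() / np.log2(len(psd)))

def corse(x, y, win=256):
    """Pearson correlation of windowed spectral-entropy time series:
    a simplified CorSE-style synchrony index between two channels."""
    starts = range(0, min(len(x), len(y)) - win + 1, win)
    se_x = [spectral_entropy(x[i:i + win]) for i in starts]
    se_y = [spectral_entropy(y[i:i + win]) for i in starts]
    return float(np.corrcoef(se_x, se_y)[0, 1])
```

Because the index is built from spectral distributions rather than detected spikes, it can be applied to LFP-band signals as well, which is the property the thesis exploits for ensembles without detectable EAPs.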

    In vitro neuronal cultures on MEA: an engineering approach to study physiological and pathological brain networks

    Neuronal networks coupled to microelectrode arrays: an engineering approach to studying brain networks in physiological and pathological conditions

    Evaluation of the Performance of Information Theory-Based Methods and Cross-Correlation to Estimate the Functional Connectivity in Cortical Networks

    The functional connectivity of in vitro neuronal networks was estimated by applying different statistical algorithms to data collected by Micro-Electrode Arrays (MEAs). First, we tested these “connectivity methods” on neuronal network models of increasing complexity and evaluated their performance in terms of ROC (Receiver Operating Characteristic) curves and the PPC (Positive Precision Curve), a newly defined complementary measure specifically developed for functional link identification. The algorithms that best estimated the actual connectivity of the network models were then used to extract functional connectivity from cultured cortical networks coupled to MEAs. Among the proposed approaches, Transfer Entropy and Joint Entropy showed the best results, suggesting these methods as good candidates for extracting functional links in actual neuronal networks from multi-site recordings.
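The ROC evaluation against a known synthetic ground truth can be sketched as follows; this is a generic illustration (the PPC metric defined in the paper is not reproduced here), and the function name is ours:

```python
import numpy as np

def roc_points(scores, truth, thresholds):
    """(FPR, TPR) pairs obtained by thresholding an estimated
    connectivity matrix against a known adjacency matrix.

    Self-connections (the diagonal) are excluded, as is customary
    when scoring inferred functional links.
    """
    off_diag = ~np.eye(truth.shape[0], dtype=bool)
    s = scores[off_diag]
    t = truth[off_diag].astype(bool)
    points = []
    for th in thresholds:
        pred = s >= th
        tpr = (pred & t).sum() / max(t.sum(), 1)    # true positive rate
        fpr = (pred & ~t).sum() / max((~t).sum(), 1)  # false positive rate
        points.append((fpr, tpr))
    return points
```

Sweeping the threshold over the range of connectivity scores traces the ROC curve; a method whose curve hugs the (0, 1) corner recovers the model's actual connectivity well.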

    Investigating Information Flows in Spiking Neural Networks With High Fidelity

    The brains of many organisms are capable of a wide variety of complex computations. This capability must be undergirded by a more general purpose computational capacity. The exact nature of this capacity, how it is distributed across the brains of organisms and how it arises throughout the course of development is an open topic of scientific investigation. Individual neurons are widely considered to be the fundamental computational units of brains. Moreover, the finest scale at which large scale recordings of brain activity can be performed is the spiking activity of neurons and our ability to perform these recordings over large numbers of neurons and with fine spatial resolution is increasing rapidly. This makes the spiking activity of individual neurons a highly attractive data modality on which to study neural computation. The framework of information dynamics has proven to be a successful approach towards interrogating the capacity for general purpose computation. It does this by revealing the atomic information processing operations of information storage, transfer and modification. Unfortunately, the study of information flows and other information processing operations from the spiking activity of neurons has been severely hindered by the lack of effective tools for estimating these quantities on this data modality. This thesis remedies this situation by presenting an estimator for information flows, as measured by Transfer Entropy (TE), that operates in continuous time on event-based data such as spike trains. Unlike the previous approach to the estimation of this quantity, which discretised the process into time bins, this estimator operates on the raw inter-spike intervals. It is demonstrated to be far superior to the previous discrete-time approach in terms of consistency, rate of convergence and bias. 
Most importantly, unlike the discrete-time approach, which requires a hard tradeoff between capturing fine temporal precision or history effects occurring over reasonable time intervals, this estimator can capture history effects occurring over relatively large intervals without any loss of temporal precision. This estimator is applied to developing dissociated cultures of cortical rat neurons, therefore providing the first high-fidelity study of information flows on spiking data. It is found that the spatial structure of the flows locks in to a significant extent at the point of their emergence, and that certain nodes occupy specialised computational roles as either transmitters, receivers or mediators of information flow. Moreover, these roles are also found to lock in early. In order to fully understand the structure of neural information flows, however, we are required to go beyond pairwise interactions, and indeed multivariate information flows have become an important tool in the inference of effective networks from neuroscience data. These are directed networks where each node is connected to a minimal set of sources which maximally reduce the uncertainty in its present state. However, the application of multivariate information flows to the inference of effective networks from spiking data has been hampered by the above-mentioned issues with preexisting estimation techniques. Here, a greedy algorithm which iteratively builds a set of parents for each target node using multivariate transfer entropies, and which has already been well validated in the context of traditional discretely sampled time series, is adapted for use in conjunction with the newly developed estimator for event-based data. The combination of the greedy algorithm and continuous-time estimator is then validated on simulated examples for which the ground truth is known. 
The new capabilities in the estimation of information flows and the inference of effective networks on event-based data presented in this work represent a very substantial step forward in our ability to perform these analyses on the ever-growing set of high-resolution, large-scale recordings of interacting neurons. As such, this work promises to enable substantial quantitative insights in the future regarding how neurons interact, how they process information, and how this changes under different conditions such as disease.
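The greedy parent-selection scheme described above has a simple generic skeleton. Here it is sketched with the explained variance of a linear model standing in for the conditional transfer entropy score (the actual continuous-time estimator is far more involved); all names are ours:

```python
import numpy as np

def explained_variance(target, sources):
    """Fraction of target variance captured by a least-squares fit on
    the sources; a stand-in for the conditional transfer entropy score."""
    if not sources:
        return 0.0
    X = np.column_stack(sources)
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return 1.0 - (target - X @ coef).var() / target.var()

def greedy_parents(target, candidates, min_gain=0.05):
    """Iteratively add the candidate source whose inclusion most improves
    the score of the target given the parents chosen so far; stop when
    no candidate yields a worthwhile gain."""
    parents, pool, base = [], dict(candidates), 0.0
    while pool:
        gains = {name: explained_variance(
                     target, [candidates[p] for p in parents] + [sig]) - base
                 for name, sig in pool.items()}
        best = max(gains, key=gains.get)
        if gains[best] < min_gain:
            break
        parents.append(best)
        base += gains[best]
        del pool[best]
    return parents
```

In the effective-network setting the fixed `min_gain` cutoff is replaced by a statistical significance test on the conditional transfer entropy, but the iterative build-up of each node's parent set is the same.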

    Model-based analysis of stability in networks of neurons

    Neurons, the building blocks of the brain, are an astonishingly capable type of cell. Collectively they can store, manipulate and retrieve biologically important information, allowing animals to learn and adapt to environmental changes. This universal adaptability is widely believed to be due to plasticity: the readiness of neurons to manipulate and adjust their intrinsic properties and the strengths of their connections to other cells. It is through such modifications that associations between neurons can be made, giving rise to memory representations; for example, linking a neuron responding to the smell of pancakes with neurons encoding sweet taste and general gustatory pleasure. However, this malleability inherent to neuronal cells poses a dilemma from the point of view of stability: how is the brain able to maintain stable operation while in a state of constant flux? First of all, won’t there occur purely technical problems akin to short-circuiting or runaway activity? And second of all, if neurons are so easily plastic and changeable, how can they provide a reliable description of the environment? Of course, evidence abounds to testify to the robustness of brains, both from everyday experience and scientific experiments. How does this robustness come about? Firstly, many feedback control mechanisms are in place to ensure that neurons do not enter wild regimes of behaviour. These mechanisms are collectively known as homeostatic plasticity, since they ensure functional homeostasis through plastic changes. One well-known example is synaptic scaling, a type of plasticity ensuring that a single neuron does not get overexcited by its inputs: whenever learning occurs and connections between cells get strengthened, all of the neuron’s inputs are subsequently downscaled to maintain a stable level of net incoming signals. 
And secondly, as hinted by other researchers and directly explored in this work, networks of neurons exhibit a property present in many complex systems called sloppiness. That is, they produce very similar behaviour under a wide range of parameters. This principle appears to operate on many scales and is highly useful (perhaps even unavoidable), as it permits variation between individuals and robustness to mutations and developmental perturbations: since there are many combinations of parameters resulting in similar operational behaviour, a disturbance of a single, or even several, parameters need not lead to dysfunction. It is also that same property that permits networks of neurons to flexibly reorganize and learn without becoming unstable. As an illustrative example, consider encountering maple syrup for the first time and associating it with pancakes; thanks to sloppiness, this new link can be added without causing the network to fire excessively. As has been found in previous experimental studies, consistent multi-neuron activity patterns arise across organisms, despite interindividual differences in the firing profiles of single cells and in the precise values of connection strengths. Such activity patterns, as has furthermore been shown, can be maintained despite pharmacological perturbation, as neurons compensate for the perturbed parameters by adjusting others; however, not all pharmacological perturbations can be thus amended. In the present work, it is for the first time directly demonstrated that groups of neurons are, as a rule, sloppy; their collective parameter space is mapped to reveal the sensitive and insensitive parameter combinations; and it is shown that the majority of spontaneous fluctuations over time primarily affect the insensitive parameters. In order to demonstrate the above, hippocampal neurons of the rat were grown in culture over multi-electrode arrays and recorded from for several days. 
Subsequently, statistical models were fitted to the activity patterns of groups of neurons to obtain a mathematically tractable description of their collective behaviour at each time point. These models provide robust fits to the data and allow for a principled sensitivity analysis with the use of information-theoretic tools. This analysis revealed that groups of neurons tend to be governed by a few leader units. Furthermore, it appears that it was the stability of these key neurons and their connections that ensured the stability of collective firing patterns across time. The remaining units, in turn, were free to undergo plastic changes without risking destabilizing the collective behaviour. Together with what has been observed by other researchers, the findings of the present work suggest that the impressively adaptable yet robust functioning of the brain is made possible by the interplay of feedback control of a few crucial properties of neurons and the generally sloppy design of networks. It has, in fact, been hypothesised that any complex system subject to evolution is bound to rely on such a design: in order to cope with natural selection under changing environmental circumstances, it would be difficult for a system to rely on tightly controlled parameters. It might be, therefore, that all life is just, by nature, sloppy.
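Sloppiness has a compact quantitative signature: the eigenvalues of the Fisher information matrix (here approximated by J^T J for a least-squares fit) span many orders of magnitude, separating stiff (sensitive) from sloppy (insensitive) parameter combinations. A minimal toy illustration, not the spiking-network models of the thesis:

```python
import numpy as np

def sensitivity_spectrum(jacobian):
    """Eigenvalues of J^T J, sorted descending: large eigenvalues mark
    stiff parameter combinations, small ones sloppy combinations."""
    return np.sort(np.linalg.eigvalsh(jacobian.T @ jacobian))[::-1]

# Toy sloppy model: y(t) = exp(-p1*t) + exp(-p2*t) with nearby decay
# rates. The two parameter directions are almost redundant, so one
# combination (p1 + p2) is stiff and the other (p1 - p2) is sloppy.
t = np.linspace(0.0, 5.0, 50)
p1, p2 = 1.0, 1.2
# Columns are the partial derivatives of the model output w.r.t. p1, p2.
jacobian = np.column_stack([-t * np.exp(-p1 * t), -t * np.exp(-p2 * t)])
eigenvalues = sensitivity_spectrum(jacobian)
```

Even in this two-parameter example the eigenvalue ratio exceeds two orders of magnitude; in the many-parameter network models of the thesis the spectrum spreads much further, which is what lets most spontaneous parameter fluctuations fall along the insensitive directions.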